May 8 00:23:07.831168 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025
May 8 00:23:07.831187 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:23:07.831197 kernel: BIOS-provided physical RAM map:
May 8 00:23:07.831202 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 8 00:23:07.831207 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 8 00:23:07.831212 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 8 00:23:07.831217 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
May 8 00:23:07.831222 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
May 8 00:23:07.831228 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 8 00:23:07.831233 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 8 00:23:07.831238 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 8 00:23:07.831243 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 8 00:23:07.831248 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 8 00:23:07.831253 kernel: NX (Execute Disable) protection: active
May 8 00:23:07.831260 kernel: APIC: Static calls initialized
May 8 00:23:07.831266 kernel: SMBIOS 3.0.0 present.
May 8 00:23:07.831271 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
May 8 00:23:07.831276 kernel: Hypervisor detected: KVM
May 8 00:23:07.831281 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 8 00:23:07.831287 kernel: kvm-clock: using sched offset of 3047574194 cycles
May 8 00:23:07.831292 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 8 00:23:07.831298 kernel: tsc: Detected 2445.406 MHz processor
May 8 00:23:07.831303 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 00:23:07.831309 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 00:23:07.831316 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
May 8 00:23:07.831321 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 8 00:23:07.831326 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 00:23:07.831332 kernel: Using GB pages for direct mapping
May 8 00:23:07.831337 kernel: ACPI: Early table checksum verification disabled
May 8 00:23:07.831342 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
May 8 00:23:07.831348 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:23:07.831363 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:23:07.831369 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:23:07.831376 kernel: ACPI: FACS 0x000000007CFE0000 000040
May 8 00:23:07.831381 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:23:07.831386 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:23:07.831392 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:23:07.831397 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:23:07.831402 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
May 8 00:23:07.831408 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
May 8 00:23:07.831417 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
May 8 00:23:07.831422 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
May 8 00:23:07.831428 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
May 8 00:23:07.831434 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
May 8 00:23:07.831439 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
May 8 00:23:07.831445 kernel: No NUMA configuration found
May 8 00:23:07.831450 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
May 8 00:23:07.831457 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
May 8 00:23:07.831463 kernel: Zone ranges:
May 8 00:23:07.831469 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:23:07.831474 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
May 8 00:23:07.831480 kernel: Normal empty
May 8 00:23:07.831485 kernel: Movable zone start for each node
May 8 00:23:07.831491 kernel: Early memory node ranges
May 8 00:23:07.831496 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 8 00:23:07.831502 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
May 8 00:23:07.831509 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
May 8 00:23:07.831514 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:23:07.831520 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 8 00:23:07.831525 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 8 00:23:07.831531 kernel: ACPI: PM-Timer IO Port: 0x608
May 8 00:23:07.831536 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 8 00:23:07.831542 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 8 00:23:07.831547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 00:23:07.831553 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 8 00:23:07.831560 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 00:23:07.831565 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 8 00:23:07.831571 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 8 00:23:07.831576 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 00:23:07.831582 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 8 00:23:07.831587 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 8 00:23:07.831593 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 8 00:23:07.831598 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 8 00:23:07.831604 kernel: Booting paravirtualized kernel on KVM
May 8 00:23:07.831611 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 00:23:07.831617 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 8 00:23:07.831622 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
May 8 00:23:07.831628 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
May 8 00:23:07.831633 kernel: pcpu-alloc: [0] 0 1
May 8 00:23:07.831639 kernel: kvm-guest: PV spinlocks disabled, no host support
May 8 00:23:07.831645 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:23:07.831651 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:23:07.831658 kernel: random: crng init done
May 8 00:23:07.831664 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:23:07.831670 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 8 00:23:07.831675 kernel: Fallback order for Node 0: 0
May 8 00:23:07.831681 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
May 8 00:23:07.831686 kernel: Policy zone: DMA32
May 8 00:23:07.831692 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:23:07.831697 kernel: Memory: 1920004K/2047464K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 127200K reserved, 0K cma-reserved)
May 8 00:23:07.831703 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 8 00:23:07.831710 kernel: ftrace: allocating 37918 entries in 149 pages
May 8 00:23:07.831716 kernel: ftrace: allocated 149 pages with 4 groups
May 8 00:23:07.831721 kernel: Dynamic Preempt: voluntary
May 8 00:23:07.831727 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:23:07.831737 kernel: rcu: RCU event tracing is enabled.
May 8 00:23:07.831743 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 8 00:23:07.831748 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:23:07.831754 kernel: Rude variant of Tasks RCU enabled.
May 8 00:23:07.831760 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:23:07.831765 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:23:07.831772 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 8 00:23:07.831778 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 8 00:23:07.831784 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:23:07.831789 kernel: Console: colour VGA+ 80x25
May 8 00:23:07.831795 kernel: printk: console [tty0] enabled
May 8 00:23:07.831800 kernel: printk: console [ttyS0] enabled
May 8 00:23:07.831806 kernel: ACPI: Core revision 20230628
May 8 00:23:07.831811 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 8 00:23:07.831817 kernel: APIC: Switch to symmetric I/O mode setup
May 8 00:23:07.831824 kernel: x2apic enabled
May 8 00:23:07.831830 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 00:23:07.831835 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 00:23:07.831841 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 8 00:23:07.831846 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
May 8 00:23:07.831852 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 8 00:23:07.831858 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 8 00:23:07.831863 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 8 00:23:07.831875 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:23:07.831881 kernel: Spectre V2 : Mitigation: Retpolines
May 8 00:23:07.831887 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:23:07.831894 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 8 00:23:07.831900 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 8 00:23:07.831906 kernel: RETBleed: Mitigation: untrained return thunk
May 8 00:23:07.831911 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:23:07.831917 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 00:23:07.831923 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:23:07.831931 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:23:07.831937 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:23:07.831942 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:23:07.831948 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 8 00:23:07.831954 kernel: Freeing SMP alternatives memory: 32K
May 8 00:23:07.831960 kernel: pid_max: default: 32768 minimum: 301
May 8 00:23:07.831966 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:23:07.831972 kernel: landlock: Up and running.
May 8 00:23:07.831979 kernel: SELinux: Initializing.
May 8 00:23:07.831985 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 8 00:23:07.831991 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 8 00:23:07.831997 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 8 00:23:07.832003 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 00:23:07.832009 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 00:23:07.832015 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 00:23:07.832021 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 8 00:23:07.832026 kernel: ... version: 0
May 8 00:23:07.832033 kernel: ... bit width: 48
May 8 00:23:07.832039 kernel: ... generic registers: 6
May 8 00:23:07.832045 kernel: ... value mask: 0000ffffffffffff
May 8 00:23:07.832051 kernel: ... max period: 00007fffffffffff
May 8 00:23:07.832057 kernel: ... fixed-purpose events: 0
May 8 00:23:07.832063 kernel: ... event mask: 000000000000003f
May 8 00:23:07.832068 kernel: signal: max sigframe size: 1776
May 8 00:23:07.832074 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:23:07.832080 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:23:07.832087 kernel: smp: Bringing up secondary CPUs ...
May 8 00:23:07.832093 kernel: smpboot: x86: Booting SMP configuration:
May 8 00:23:07.832124 kernel: .... node #0, CPUs: #1
May 8 00:23:07.832130 kernel: smp: Brought up 1 node, 2 CPUs
May 8 00:23:07.832136 kernel: smpboot: Max logical packages: 1
May 8 00:23:07.832142 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
May 8 00:23:07.832148 kernel: devtmpfs: initialized
May 8 00:23:07.832153 kernel: x86/mm: Memory block size: 128MB
May 8 00:23:07.832159 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:23:07.832168 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 8 00:23:07.832174 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:23:07.832180 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:23:07.832185 kernel: audit: initializing netlink subsys (disabled)
May 8 00:23:07.832191 kernel: audit: type=2000 audit(1746663787.492:1): state=initialized audit_enabled=0 res=1
May 8 00:23:07.832197 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:23:07.832203 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:23:07.832209 kernel: cpuidle: using governor menu
May 8 00:23:07.832215 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:23:07.832222 kernel: dca service started, version 1.12.1
May 8 00:23:07.832228 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 8 00:23:07.832234 kernel: PCI: Using configuration type 1 for base access
May 8 00:23:07.832240 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:23:07.832245 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:23:07.832251 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:23:07.832257 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:23:07.832263 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:23:07.832269 kernel: ACPI: Added _OSI(Module Device)
May 8 00:23:07.832276 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:23:07.832282 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:23:07.832288 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:23:07.832293 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:23:07.832299 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 00:23:07.832305 kernel: ACPI: Interpreter enabled
May 8 00:23:07.832311 kernel: ACPI: PM: (supports S0 S5)
May 8 00:23:07.832317 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:23:07.832323 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:23:07.832330 kernel: PCI: Using E820 reservations for host bridge windows
May 8 00:23:07.832336 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 8 00:23:07.832342 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:23:07.832463 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:23:07.832538 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 8 00:23:07.832609 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 8 00:23:07.832618 kernel: PCI host bridge to bus 0000:00
May 8 00:23:07.832688 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:23:07.832752 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 00:23:07.832810 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:23:07.832866 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
May 8 00:23:07.832924 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 8 00:23:07.832981 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 8 00:23:07.833037 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:23:07.835248 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 8 00:23:07.835342 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
May 8 00:23:07.835427 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
May 8 00:23:07.835495 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
May 8 00:23:07.835560 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
May 8 00:23:07.835624 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
May 8 00:23:07.835688 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:23:07.835766 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 8 00:23:07.835833 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
May 8 00:23:07.835904 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 8 00:23:07.835969 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
May 8 00:23:07.836040 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 8 00:23:07.836122 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
May 8 00:23:07.836202 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 8 00:23:07.836268 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
May 8 00:23:07.836339 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 8 00:23:07.836417 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
May 8 00:23:07.836486 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 8 00:23:07.836551 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
May 8 00:23:07.836624 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 8 00:23:07.836688 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
May 8 00:23:07.839221 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 8 00:23:07.839300 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
May 8 00:23:07.839387 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 8 00:23:07.839454 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
May 8 00:23:07.839530 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 8 00:23:07.839596 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 8 00:23:07.839666 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 8 00:23:07.839730 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
May 8 00:23:07.839791 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
May 8 00:23:07.839860 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 8 00:23:07.839928 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 8 00:23:07.840004 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 8 00:23:07.840072 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
May 8 00:23:07.840173 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
May 8 00:23:07.840242 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
May 8 00:23:07.840306 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 8 00:23:07.840380 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
May 8 00:23:07.840443 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
May 8 00:23:07.840522 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 8 00:23:07.840589 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
May 8 00:23:07.840654 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 8 00:23:07.840719 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
May 8 00:23:07.840781 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 8 00:23:07.840853 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 8 00:23:07.840925 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
May 8 00:23:07.840991 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
May 8 00:23:07.841056 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 8 00:23:07.843709 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
May 8 00:23:07.843785 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 8 00:23:07.843862 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 8 00:23:07.843933 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
May 8 00:23:07.844005 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 8 00:23:07.844068 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
May 8 00:23:07.844198 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 8 00:23:07.844280 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 8 00:23:07.844347 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
May 8 00:23:07.844429 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
May 8 00:23:07.844494 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 8 00:23:07.844563 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
May 8 00:23:07.844626 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 8 00:23:07.844704 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 8 00:23:07.844773 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
May 8 00:23:07.844838 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
May 8 00:23:07.844903 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 8 00:23:07.844967 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
May 8 00:23:07.845032 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 8 00:23:07.845044 kernel: acpiphp: Slot [0] registered
May 8 00:23:07.846701 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 8 00:23:07.846782 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
May 8 00:23:07.846851 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
May 8 00:23:07.846916 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
May 8 00:23:07.846980 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 8 00:23:07.847043 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
May 8 00:23:07.847145 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 8 00:23:07.847157 kernel: acpiphp: Slot [0-2] registered
May 8 00:23:07.847225 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 8 00:23:07.847291 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
May 8 00:23:07.847368 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 8 00:23:07.847378 kernel: acpiphp: Slot [0-3] registered
May 8 00:23:07.847443 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 8 00:23:07.847506 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 8 00:23:07.847568 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 8 00:23:07.847580 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 8 00:23:07.847587 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 8 00:23:07.847593 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 00:23:07.847599 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 8 00:23:07.847605 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 8 00:23:07.847611 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 8 00:23:07.847617 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 8 00:23:07.847623 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 8 00:23:07.847630 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 8 00:23:07.847636 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 8 00:23:07.847642 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 8 00:23:07.847648 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 8 00:23:07.847654 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 8 00:23:07.847660 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 8 00:23:07.847666 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 8 00:23:07.847672 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 8 00:23:07.847678 kernel: iommu: Default domain type: Translated
May 8 00:23:07.847686 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 00:23:07.847692 kernel: PCI: Using ACPI for IRQ routing
May 8 00:23:07.847698 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 00:23:07.847704 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 8 00:23:07.847711 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
May 8 00:23:07.847775 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 8 00:23:07.847838 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 8 00:23:07.847904 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 00:23:07.847913 kernel: vgaarb: loaded
May 8 00:23:07.847922 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 8 00:23:07.847928 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 8 00:23:07.847934 kernel: clocksource: Switched to clocksource kvm-clock
May 8 00:23:07.847940 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:23:07.847947 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:23:07.847953 kernel: pnp: PnP ACPI init
May 8 00:23:07.848021 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 8 00:23:07.848032 kernel: pnp: PnP ACPI: found 5 devices
May 8 00:23:07.848041 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 00:23:07.848047 kernel: NET: Registered PF_INET protocol family
May 8 00:23:07.848053 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:23:07.848059 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 8 00:23:07.848065 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:23:07.848071 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 8 00:23:07.848077 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 8 00:23:07.848084 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 8 00:23:07.848090 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 8 00:23:07.849131 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 8 00:23:07.849141 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:23:07.849148 kernel: NET: Registered PF_XDP protocol family
May 8 00:23:07.849231 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 8 00:23:07.849301 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 8 00:23:07.849386 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 8 00:23:07.849452 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
May 8 00:23:07.849521 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
May 8 00:23:07.849585 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
May 8 00:23:07.849648 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 8 00:23:07.849711 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
May 8 00:23:07.849774 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
May 8 00:23:07.849838 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 8 00:23:07.849901 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
May 8 00:23:07.849964 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 8 00:23:07.850026 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 8 00:23:07.850093 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
May 8 00:23:07.854214 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 8 00:23:07.854286 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 8 00:23:07.854364 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
May 8 00:23:07.854431 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 8 00:23:07.854496 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 8 00:23:07.854565 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
May 8 00:23:07.854642 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 8 00:23:07.854710 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 8 00:23:07.854773 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
May 8 00:23:07.854836 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 8 00:23:07.854901 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 8 00:23:07.854964 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
May 8 00:23:07.855027 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
May 8 00:23:07.855090 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 8 00:23:07.855223 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 8 00:23:07.855290 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
May 8 00:23:07.855371 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
May 8 00:23:07.855438 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 8 00:23:07.855500 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 8 00:23:07.855568 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
May 8 00:23:07.855632 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 8 00:23:07.855698 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 8 00:23:07.855760 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 8 00:23:07.855817 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 8 00:23:07.855877 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 8 00:23:07.855934 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
May 8 00:23:07.855993 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 8 00:23:07.856049 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 8 00:23:07.859154 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
May 8 00:23:07.859230 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
May 8 00:23:07.859301 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
May 8 00:23:07.859376 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
May 8 00:23:07.859451 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
May 8 00:23:07.859511 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 8 00:23:07.859576 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
May 8 00:23:07.859635 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
May 8 00:23:07.859702 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
May 8 00:23:07.859762 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
May 8 00:23:07.859830 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
May 8 00:23:07.859889 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
May 8 00:23:07.859952 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
May 8 00:23:07.860011 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
May 8 00:23:07.860069 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
May 8 00:23:07.860151 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
May 8 00:23:07.860214 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
May 8 00:23:07.860277 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
May 8 00:23:07.860341 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
May 8 00:23:07.860414 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
May 8 00:23:07.860472 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
May 8 00:23:07.860482 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 8 00:23:07.860489 kernel: PCI: CLS 0 bytes, default 64
May 8 00:23:07.860496 kernel: Initialise system trusted keyrings
May 8 00:23:07.860502 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 8 00:23:07.860511 kernel: Key type asymmetric registered
May 8 00:23:07.860518 kernel: Asymmetric key parser 'x509' registered
May 8 00:23:07.860524 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 8 00:23:07.860530 kernel: io scheduler mq-deadline registered
May 8 00:23:07.860536 kernel: io scheduler kyber registered
May 8 00:23:07.860543 kernel: io scheduler bfq registered
May 8 00:23:07.860610 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
May 8 00:23:07.860677 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
May 8 00:23:07.860742 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
May 8 00:23:07.860811 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
May 8 00:23:07.860877 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
May 8 00:23:07.860949 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
May 8 00:23:07.861014 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
May 8 00:23:07.861080 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
May 8 00:23:07.862230 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
May 8 00:23:07.862303 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
May 8 00:23:07.862385 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
May 8 00:23:07.862458 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
May 8 00:23:07.862522 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
May 8 00:23:07.862586 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
May 8 00:23:07.862651 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
May 8 00:23:07.862715 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
May 8 00:23:07.862725 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 8 00:23:07.862787 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
May 8 00:23:07.862851 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
May 8 00:23:07.862863 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 00:23:07.862870 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
May 8 00:23:07.862877 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:23:07.862883 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 00:23:07.862890 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 8 00:23:07.862897 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 00:23:07.862903 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 00:23:07.862909 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 00:23:07.862977 kernel: rtc_cmos 00:03: RTC can wake from S4
May 8 00:23:07.863041 kernel: rtc_cmos 00:03: registered as rtc0
May 8 00:23:07.864122 kernel:
rtc_cmos 00:03: setting system clock to 2025-05-08T00:23:07 UTC (1746663787) May 8 00:23:07.864197 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 8 00:23:07.864211 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 8 00:23:07.864219 kernel: NET: Registered PF_INET6 protocol family May 8 00:23:07.864225 kernel: Segment Routing with IPv6 May 8 00:23:07.864232 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:23:07.864239 kernel: NET: Registered PF_PACKET protocol family May 8 00:23:07.864247 kernel: Key type dns_resolver registered May 8 00:23:07.864254 kernel: IPI shorthand broadcast: enabled May 8 00:23:07.864260 kernel: sched_clock: Marking stable (1044020893, 136266053)->(1188651833, -8364887) May 8 00:23:07.864267 kernel: registered taskstats version 1 May 8 00:23:07.864273 kernel: Loading compiled-in X.509 certificates May 8 00:23:07.864280 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6' May 8 00:23:07.864286 kernel: Key type .fscrypt registered May 8 00:23:07.864292 kernel: Key type fscrypt-provisioning registered May 8 00:23:07.864299 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 8 00:23:07.864307 kernel: ima: Allocated hash algorithm: sha1 May 8 00:23:07.864313 kernel: ima: No architecture policies found May 8 00:23:07.864321 kernel: clk: Disabling unused clocks May 8 00:23:07.864328 kernel: Freeing unused kernel image (initmem) memory: 43484K May 8 00:23:07.864334 kernel: Write protecting the kernel read-only data: 38912k May 8 00:23:07.864341 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K May 8 00:23:07.864347 kernel: Run /init as init process May 8 00:23:07.864364 kernel: with arguments: May 8 00:23:07.864373 kernel: /init May 8 00:23:07.864379 kernel: with environment: May 8 00:23:07.864385 kernel: HOME=/ May 8 00:23:07.864392 kernel: TERM=linux May 8 00:23:07.864398 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:23:07.864406 systemd[1]: Successfully made /usr/ read-only. May 8 00:23:07.864415 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:23:07.864423 systemd[1]: Detected virtualization kvm. May 8 00:23:07.864431 systemd[1]: Detected architecture x86-64. May 8 00:23:07.864437 systemd[1]: Running in initrd. May 8 00:23:07.864444 systemd[1]: No hostname configured, using default hostname. May 8 00:23:07.864450 systemd[1]: Hostname set to <localhost>. May 8 00:23:07.864457 systemd[1]: Initializing machine ID from VM UUID. May 8 00:23:07.864464 systemd[1]: Queued start job for default target initrd.target. May 8 00:23:07.864470 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:23:07.864477 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 8 00:23:07.864487 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:23:07.864494 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:23:07.864501 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:23:07.864509 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:23:07.864516 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:23:07.864523 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:23:07.864530 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:23:07.864538 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:23:07.864545 systemd[1]: Reached target paths.target - Path Units. May 8 00:23:07.864552 systemd[1]: Reached target slices.target - Slice Units. May 8 00:23:07.864559 systemd[1]: Reached target swap.target - Swaps. May 8 00:23:07.864565 systemd[1]: Reached target timers.target - Timer Units. May 8 00:23:07.864572 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:23:07.864579 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:23:07.864586 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:23:07.864593 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 8 00:23:07.864601 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:23:07.864608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:23:07.864614 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 8 00:23:07.864621 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:23:07.864628 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:23:07.864635 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:23:07.864642 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:23:07.864648 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:23:07.864657 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:23:07.864664 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:23:07.864689 systemd-journald[188]: Collecting audit messages is disabled. May 8 00:23:07.864708 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:23:07.864717 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:23:07.864724 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:23:07.864731 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:23:07.864738 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:23:07.864745 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:23:07.864754 systemd-journald[188]: Journal started May 8 00:23:07.864773 systemd-journald[188]: Runtime Journal (/run/log/journal/6b05becd1a8c4ca5ac30a81ef762042d) is 4.8M, max 38.3M, 33.5M free. May 8 00:23:07.852414 systemd-modules-load[189]: Inserted module 'overlay' May 8 00:23:07.891748 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:23:07.891769 kernel: Bridge firewalling registered May 8 00:23:07.891784 systemd[1]: Started systemd-journald.service - Journal Service. 
May 8 00:23:07.878774 systemd-modules-load[189]: Inserted module 'br_netfilter' May 8 00:23:07.892901 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:23:07.893546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:23:07.899256 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:23:07.901210 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:23:07.904268 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:23:07.907866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:23:07.912729 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:23:07.916492 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:23:07.919844 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:23:07.920548 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:23:07.925270 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:23:07.929206 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:23:07.931903 dracut-cmdline[224]: dracut-dracut-053 May 8 00:23:07.934595 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:23:07.955329 systemd-resolved[228]: Positive Trust Anchors: May 8 00:23:07.955341 systemd-resolved[228]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:23:07.955385 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:23:07.963620 systemd-resolved[228]: Defaulting to hostname 'linux'. May 8 00:23:07.964421 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:23:07.965089 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:23:08.005160 kernel: SCSI subsystem initialized May 8 00:23:08.012147 kernel: Loading iSCSI transport class v2.0-870. May 8 00:23:08.021130 kernel: iscsi: registered transport (tcp) May 8 00:23:08.037541 kernel: iscsi: registered transport (qla4xxx) May 8 00:23:08.037612 kernel: QLogic iSCSI HBA Driver May 8 00:23:08.075526 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:23:08.079301 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:23:08.106185 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 8 00:23:08.106246 kernel: device-mapper: uevent: version 1.0.3 May 8 00:23:08.110126 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:23:08.149128 kernel: raid6: avx2x4 gen() 34959 MB/s May 8 00:23:08.166118 kernel: raid6: avx2x2 gen() 33389 MB/s May 8 00:23:08.183234 kernel: raid6: avx2x1 gen() 23846 MB/s May 8 00:23:08.183257 kernel: raid6: using algorithm avx2x4 gen() 34959 MB/s May 8 00:23:08.202120 kernel: raid6: .... xor() 4705 MB/s, rmw enabled May 8 00:23:08.202177 kernel: raid6: using avx2x2 recovery algorithm May 8 00:23:08.219139 kernel: xor: automatically using best checksumming function avx May 8 00:23:08.334130 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:23:08.343570 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:23:08.350216 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:23:08.366070 systemd-udevd[410]: Using default interface naming scheme 'v255'. May 8 00:23:08.370217 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:23:08.376488 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 00:23:08.388089 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation May 8 00:23:08.411688 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:23:08.416223 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:23:08.451827 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:23:08.458313 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:23:08.473810 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:23:08.475398 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 8 00:23:08.476373 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:23:08.477012 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:23:08.482222 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:23:08.490268 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:23:08.525129 kernel: scsi host0: Virtio SCSI HBA May 8 00:23:08.530607 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:23:08.530639 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 8 00:23:08.542366 kernel: ACPI: bus type USB registered May 8 00:23:08.542393 kernel: usbcore: registered new interface driver usbfs May 8 00:23:08.543580 kernel: usbcore: registered new interface driver hub May 8 00:23:08.544786 kernel: usbcore: registered new device driver usb May 8 00:23:08.589411 kernel: AVX2 version of gcm_enc/dec engaged. May 8 00:23:08.589448 kernel: AES CTR mode by8 optimization enabled May 8 00:23:08.588585 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:23:08.588686 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:23:08.590905 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:23:08.591380 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:23:08.591517 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:23:08.594825 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:23:08.602405 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:23:08.631115 kernel: libata version 3.00 loaded. 
May 8 00:23:08.635124 kernel: sd 0:0:0:0: Power-on or device reset occurred May 8 00:23:08.636904 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) May 8 00:23:08.637018 kernel: sd 0:0:0:0: [sda] Write Protect is off May 8 00:23:08.639124 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 8 00:23:08.639223 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 8 00:23:08.639306 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 00:23:08.639315 kernel: GPT:17805311 != 80003071 May 8 00:23:08.639323 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 00:23:08.639334 kernel: GPT:17805311 != 80003071 May 8 00:23:08.639341 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:23:08.639349 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:23:08.639368 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 8 00:23:08.641129 kernel: ahci 0000:00:1f.2: version 3.0 May 8 00:23:08.655757 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 8 00:23:08.656024 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 May 8 00:23:08.656329 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 8 00:23:08.656612 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 8 00:23:08.656629 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 8 00:23:08.656719 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 8 00:23:08.656798 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 8 00:23:08.656876 kernel: scsi host1: ahci May 8 00:23:08.656962 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 May 8 00:23:08.657043 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed May 8 00:23:08.657246 kernel: scsi host2: ahci May 8 00:23:08.657633 kernel: scsi host3: ahci May 8 00:23:08.657789 kernel: hub 1-0:1.0: USB hub found May 
8 00:23:08.657941 kernel: hub 1-0:1.0: 4 ports detected May 8 00:23:08.658044 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. May 8 00:23:08.658438 kernel: hub 2-0:1.0: USB hub found May 8 00:23:08.658555 kernel: hub 2-0:1.0: 4 ports detected May 8 00:23:08.658725 kernel: scsi host4: ahci May 8 00:23:08.658861 kernel: scsi host5: ahci May 8 00:23:08.658941 kernel: scsi host6: ahci May 8 00:23:08.659034 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51 May 8 00:23:08.659043 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51 May 8 00:23:08.659051 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51 May 8 00:23:08.659061 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51 May 8 00:23:08.659069 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51 May 8 00:23:08.659076 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51 May 8 00:23:08.693909 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 8 00:23:08.712347 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (471) May 8 00:23:08.710452 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:23:08.718132 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/sda3 scanned by (udev-worker) (458) May 8 00:23:08.724371 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 8 00:23:08.741897 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 8 00:23:08.748030 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 8 00:23:08.748558 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
May 8 00:23:08.758208 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:23:08.760155 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:23:08.763927 disk-uuid[572]: Primary Header is updated. May 8 00:23:08.763927 disk-uuid[572]: Secondary Entries is updated. May 8 00:23:08.763927 disk-uuid[572]: Secondary Header is updated. May 8 00:23:08.771161 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:23:08.777258 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:23:08.887130 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 8 00:23:08.965664 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 8 00:23:08.965764 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 8 00:23:08.969263 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 8 00:23:08.969293 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 8 00:23:08.969303 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 8 00:23:08.971721 kernel: ata1.00: applying bridge limits May 8 00:23:08.973061 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 8 00:23:08.975182 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 8 00:23:08.975214 kernel: ata1.00: configured for UDMA/100 May 8 00:23:08.978142 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 8 00:23:09.012750 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 8 00:23:09.019927 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 00:23:09.019943 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 May 8 00:23:09.020050 kernel: hid: raw HID events driver (C) Jiri Kosina May 8 00:23:09.025568 kernel: usbcore: registered new interface driver usbhid May 8 00:23:09.025653 kernel: usbhid: USB HID core driver May 8 00:23:09.030479 kernel: input: QEMU QEMU USB Tablet as 
/devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 May 8 00:23:09.030511 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 8 00:23:09.782172 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:23:09.783510 disk-uuid[573]: The operation has completed successfully. May 8 00:23:09.832801 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:23:09.832884 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:23:09.864202 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:23:09.866772 sh[601]: Success May 8 00:23:09.878124 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 8 00:23:09.923070 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:23:09.934504 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:23:09.935524 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 00:23:09.951894 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a May 8 00:23:09.951924 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 8 00:23:09.951934 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:23:09.954964 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:23:09.954987 kernel: BTRFS info (device dm-0): using free space tree May 8 00:23:09.964118 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 8 00:23:09.965697 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:23:09.966506 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
May 8 00:23:09.978195 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:23:09.981192 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:23:09.998056 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:23:09.998081 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:23:09.998091 kernel: BTRFS info (device sda6): using free space tree May 8 00:23:10.003887 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:23:10.003910 kernel: BTRFS info (device sda6): auto enabling async discard May 8 00:23:10.008120 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:23:10.009675 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 00:23:10.016208 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:23:10.042139 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:23:10.049383 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:23:10.071217 systemd-networkd[780]: lo: Link UP May 8 00:23:10.071225 systemd-networkd[780]: lo: Gained carrier May 8 00:23:10.076371 systemd-networkd[780]: Enumeration completed May 8 00:23:10.076762 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:23:10.077081 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:23:10.077084 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:23:10.081060 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 8 00:23:10.081066 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:23:10.081567 systemd-networkd[780]: eth0: Link UP May 8 00:23:10.081570 systemd-networkd[780]: eth0: Gained carrier May 8 00:23:10.081575 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:23:10.082603 systemd[1]: Reached target network.target - Network. May 8 00:23:10.084452 systemd-networkd[780]: eth1: Link UP May 8 00:23:10.084455 systemd-networkd[780]: eth1: Gained carrier May 8 00:23:10.084461 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:23:10.090236 ignition[724]: Ignition 2.20.0 May 8 00:23:10.090245 ignition[724]: Stage: fetch-offline May 8 00:23:10.090273 ignition[724]: no configs at "/usr/lib/ignition/base.d" May 8 00:23:10.090280 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 8 00:23:10.090373 ignition[724]: parsed url from cmdline: "" May 8 00:23:10.091772 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:23:10.090376 ignition[724]: no config URL provided May 8 00:23:10.090379 ignition[724]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:23:10.090385 ignition[724]: no config at "/usr/lib/ignition/user.ign" May 8 00:23:10.090388 ignition[724]: failed to fetch config: resource requires networking May 8 00:23:10.090593 ignition[724]: Ignition finished successfully May 8 00:23:10.101217 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 8 00:23:10.109181 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:23:10.109439 ignition[788]: Ignition 2.20.0 May 8 00:23:10.109444 ignition[788]: Stage: fetch May 8 00:23:10.109555 ignition[788]: no configs at "/usr/lib/ignition/base.d" May 8 00:23:10.109562 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 8 00:23:10.109618 ignition[788]: parsed url from cmdline: "" May 8 00:23:10.109621 ignition[788]: no config URL provided May 8 00:23:10.109624 ignition[788]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:23:10.109629 ignition[788]: no config at "/usr/lib/ignition/user.ign" May 8 00:23:10.109645 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 May 8 00:23:10.109736 ignition[788]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable May 8 00:23:10.146131 systemd-networkd[780]: eth0: DHCPv4 address 65.21.107.133/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 8 00:23:10.310545 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 May 8 00:23:10.318065 ignition[788]: GET result: OK May 8 00:23:10.318652 ignition[788]: parsing config with SHA512: 5497a38c280f9685b74b8a9e599c8348cc1ca35c746339796eb2388bf8e13d61a845524af0eefed2514ab3b8caff60ddbc7d9590c93b241d68979802864822c0 May 8 00:23:10.323801 unknown[788]: fetched base config from "system" May 8 00:23:10.324492 unknown[788]: fetched base config from "system" May 8 00:23:10.324809 ignition[788]: fetch: fetch complete May 8 00:23:10.324498 unknown[788]: fetched user config from "hetzner" May 8 00:23:10.324813 ignition[788]: fetch: fetch passed May 8 00:23:10.327260 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 8 00:23:10.324849 ignition[788]: Ignition finished successfully May 8 00:23:10.333212 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 8 00:23:10.345455 ignition[795]: Ignition 2.20.0 May 8 00:23:10.345467 ignition[795]: Stage: kargs May 8 00:23:10.345609 ignition[795]: no configs at "/usr/lib/ignition/base.d" May 8 00:23:10.345618 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 8 00:23:10.347880 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:23:10.346391 ignition[795]: kargs: kargs passed May 8 00:23:10.346428 ignition[795]: Ignition finished successfully May 8 00:23:10.355249 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:23:10.365974 ignition[802]: Ignition 2.20.0 May 8 00:23:10.365990 ignition[802]: Stage: disks May 8 00:23:10.366186 ignition[802]: no configs at "/usr/lib/ignition/base.d" May 8 00:23:10.368320 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:23:10.366196 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 8 00:23:10.369054 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:23:10.366963 ignition[802]: disks: disks passed May 8 00:23:10.369934 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:23:10.366996 ignition[802]: Ignition finished successfully May 8 00:23:10.371123 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:23:10.372280 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:23:10.373165 systemd[1]: Reached target basic.target - Basic System. May 8 00:23:10.383253 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:23:10.394640 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 8 00:23:10.396204 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:23:10.401336 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 8 00:23:10.470128 kernel: EXT4-fs (sda9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none.
May 8 00:23:10.470138 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:23:10.470972 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:23:10.477165 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:23:10.479187 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:23:10.483241 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 8 00:23:10.484494 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:23:10.485192 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:23:10.487763 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:23:10.493230 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:23:10.496690 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (818)
May 8 00:23:10.499194 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:23:10.499223 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:23:10.501623 kernel: BTRFS info (device sda6): using free space tree
May 8 00:23:10.507678 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 00:23:10.507700 kernel: BTRFS info (device sda6): auto enabling async discard
May 8 00:23:10.509544 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:23:10.540021 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:23:10.542139 coreos-metadata[820]: May 08 00:23:10.542 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
May 8 00:23:10.543481 coreos-metadata[820]: May 08 00:23:10.542 INFO Fetch successful
May 8 00:23:10.543481 coreos-metadata[820]: May 08 00:23:10.543 INFO wrote hostname ci-4230-1-1-n-afb3d0be43 to /sysroot/etc/hostname
May 8 00:23:10.545580 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
May 8 00:23:10.545470 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 8 00:23:10.548817 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:23:10.551549 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:23:10.618431 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:23:10.623186 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:23:10.626478 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:23:10.631117 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:23:10.646877 ignition[934]: INFO : Ignition 2.20.0
May 8 00:23:10.648127 ignition[934]: INFO : Stage: mount
May 8 00:23:10.648127 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:23:10.648127 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 8 00:23:10.651621 ignition[934]: INFO : mount: mount passed
May 8 00:23:10.651621 ignition[934]: INFO : Ignition finished successfully
May 8 00:23:10.649322 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:23:10.655253 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:23:10.655862 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:23:10.949220 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:23:10.954218 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:23:10.965135 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (947)
May 8 00:23:10.969990 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:23:10.970032 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:23:10.972735 kernel: BTRFS info (device sda6): using free space tree
May 8 00:23:10.981821 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 00:23:10.981868 kernel: BTRFS info (device sda6): auto enabling async discard
May 8 00:23:10.988069 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:23:11.017438 ignition[963]: INFO : Ignition 2.20.0
May 8 00:23:11.018666 ignition[963]: INFO : Stage: files
May 8 00:23:11.018666 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:23:11.018666 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 8 00:23:11.022205 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:23:11.022205 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:23:11.022205 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:23:11.027030 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:23:11.027030 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:23:11.027030 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:23:11.025587 unknown[963]: wrote ssh authorized keys file for user: core
May 8 00:23:11.032355 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 8 00:23:11.032355 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 8 00:23:11.352848 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:23:11.586284 systemd-networkd[780]: eth1: Gained IPv6LL
May 8 00:23:12.098269 systemd-networkd[780]: eth0: Gained IPv6LL
May 8 00:23:14.596994 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 8 00:23:14.596994 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:23:14.601173 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 8 00:23:15.296669 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:23:15.350489 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:23:15.350489 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:23:15.353833 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:23:15.353833 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:23:15.353833 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:23:15.353833 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:23:15.353833 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:23:15.353833 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:23:15.353833 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:23:15.353833 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:23:15.353833 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:23:15.353833 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 8 00:23:15.353833 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 8 00:23:15.353833 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 8 00:23:15.353833 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 8 00:23:15.938332 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 00:23:16.114205 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 8 00:23:16.114205 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 8 00:23:16.117482 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:23:16.117482 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:23:16.117482 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 8 00:23:16.117482 ignition[963]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 8 00:23:16.117482 ignition[963]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 8 00:23:16.117482 ignition[963]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 8 00:23:16.117482 ignition[963]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 8 00:23:16.117482 ignition[963]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:23:16.117482 ignition[963]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:23:16.117482 ignition[963]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:23:16.117482 ignition[963]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:23:16.117482 ignition[963]: INFO : files: files passed
May 8 00:23:16.117482 ignition[963]: INFO : Ignition finished successfully
May 8 00:23:16.117351 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:23:16.126282 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:23:16.130243 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:23:16.131264 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:23:16.131358 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:23:16.148865 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:23:16.148865 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:23:16.151011 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:23:16.151678 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:23:16.152624 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:23:16.158242 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:23:16.179659 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:23:16.179738 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:23:16.180586 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:23:16.181206 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:23:16.182292 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:23:16.184000 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:23:16.194323 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:23:16.200217 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:23:16.208037 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:23:16.208611 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:23:16.209650 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:23:16.210655 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:23:16.210743 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:23:16.212751 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:23:16.213750 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:23:16.214252 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:23:16.215189 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:23:16.216177 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:23:16.217228 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:23:16.218284 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:23:16.219389 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:23:16.220502 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:23:16.221465 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:23:16.222300 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:23:16.222398 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:23:16.223521 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:23:16.224183 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:23:16.225164 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:23:16.225252 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:23:16.226229 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:23:16.226309 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:23:16.227592 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:23:16.227682 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:23:16.228871 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:23:16.228982 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:23:16.229731 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 8 00:23:16.229841 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 8 00:23:16.248277 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:23:16.248739 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:23:16.248866 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:23:16.252243 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:23:16.255904 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:23:16.256042 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:23:16.258026 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:23:16.258189 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:23:16.264129 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:23:16.268778 ignition[1016]: INFO : Ignition 2.20.0
May 8 00:23:16.268778 ignition[1016]: INFO : Stage: umount
May 8 00:23:16.268778 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:23:16.268778 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 8 00:23:16.268778 ignition[1016]: INFO : umount: umount passed
May 8 00:23:16.268778 ignition[1016]: INFO : Ignition finished successfully
May 8 00:23:16.264191 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:23:16.270780 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:23:16.271180 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:23:16.274082 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:23:16.274485 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:23:16.274522 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:23:16.275672 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:23:16.275710 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:23:16.276172 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 8 00:23:16.276206 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 8 00:23:16.277036 systemd[1]: Stopped target network.target - Network.
May 8 00:23:16.278558 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:23:16.278596 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:23:16.279536 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:23:16.280476 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:23:16.282278 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:23:16.282974 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:23:16.283850 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:23:16.284740 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:23:16.284767 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:23:16.285640 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:23:16.285662 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:23:16.286503 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:23:16.286537 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:23:16.287636 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:23:16.287668 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:23:16.288756 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:23:16.289613 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:23:16.292253 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:23:16.292319 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:23:16.293157 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:23:16.293239 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:23:16.295871 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 8 00:23:16.296518 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:23:16.296574 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:23:16.297398 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:23:16.297433 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:23:16.299953 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 8 00:23:16.300443 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:23:16.300515 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:23:16.302269 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 8 00:23:16.302595 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:23:16.302636 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:23:16.306311 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:23:16.307706 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:23:16.307745 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:23:16.309366 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:23:16.309415 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:23:16.310713 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:23:16.310746 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:23:16.311464 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:23:16.313295 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 8 00:23:16.321415 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:23:16.321512 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:23:16.322651 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:23:16.322725 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:23:16.324537 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:23:16.324573 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:23:16.325471 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:23:16.325513 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:23:16.326920 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:23:16.326958 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:23:16.327971 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:23:16.328008 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:23:16.338260 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:23:16.339480 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:23:16.339529 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:23:16.342145 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:23:16.342186 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:23:16.343054 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:23:16.343147 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:23:16.343831 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:23:16.343898 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:23:16.345429 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:23:16.354298 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 00:23:16.359712 systemd[1]: Switching root.
May 8 00:23:16.408645 systemd-journald[188]: Journal stopped
May 8 00:23:17.235113 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
May 8 00:23:17.235170 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:23:17.235181 kernel: SELinux: policy capability open_perms=1
May 8 00:23:17.235190 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:23:17.235197 kernel: SELinux: policy capability always_check_network=0
May 8 00:23:17.235205 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:23:17.235216 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:23:17.235224 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:23:17.235235 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:23:17.235243 kernel: audit: type=1403 audit(1746663796.515:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 00:23:17.235251 systemd[1]: Successfully loaded SELinux policy in 35.686ms.
May 8 00:23:17.235265 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.080ms.
May 8 00:23:17.235274 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:23:17.235287 systemd[1]: Detected virtualization kvm.
May 8 00:23:17.235295 systemd[1]: Detected architecture x86-64.
May 8 00:23:17.235305 systemd[1]: Detected first boot.
May 8 00:23:17.235313 systemd[1]: Hostname set to .
May 8 00:23:17.235322 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:23:17.235330 zram_generator::config[1062]: No configuration found.
May 8 00:23:17.235340 kernel: Guest personality initialized and is inactive
May 8 00:23:17.235351 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 8 00:23:17.235359 kernel: Initialized host personality
May 8 00:23:17.235366 kernel: NET: Registered PF_VSOCK protocol family
May 8 00:23:17.235376 systemd[1]: Populated /etc with preset unit settings.
May 8 00:23:17.235395 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 8 00:23:17.235404 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 00:23:17.235413 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 00:23:17.235422 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 00:23:17.235430 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 00:23:17.235439 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 00:23:17.235447 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 00:23:17.235455 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 00:23:17.235465 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 00:23:17.235474 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 00:23:17.235482 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 00:23:17.235491 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 00:23:17.235499 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:23:17.235508 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:23:17.235516 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 00:23:17.235525 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 00:23:17.235535 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 00:23:17.235544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:23:17.235552 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 8 00:23:17.235560 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:23:17.235569 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 00:23:17.235578 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 00:23:17.235586 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 00:23:17.235596 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 00:23:17.235606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:23:17.235615 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:23:17.235623 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:23:17.235631 systemd[1]: Reached target swap.target - Swaps.
May 8 00:23:17.235639 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 00:23:17.235648 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 00:23:17.235660 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 8 00:23:17.235669 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:23:17.235678 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:23:17.235687 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:23:17.235695 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 00:23:17.235705 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 00:23:17.235714 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 00:23:17.235722 systemd[1]: Mounting media.mount - External Media Directory...
May 8 00:23:17.235732 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:23:17.235755 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 00:23:17.235764 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 00:23:17.235772 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 00:23:17.235781 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:23:17.235793 systemd[1]: Reached target machines.target - Containers.
May 8 00:23:17.235802 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 00:23:17.235811 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:23:17.235822 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:23:17.235830 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 00:23:17.235839 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:23:17.235847 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:23:17.235855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:23:17.235864 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 00:23:17.235872 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:23:17.235881 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:23:17.235891 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 00:23:17.235900 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 00:23:17.235909 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 00:23:17.235917 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 00:23:17.235926 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:23:17.235934 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:23:17.235943 kernel: loop: module loaded
May 8 00:23:17.235951 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:23:17.235960 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 00:23:17.235970 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 00:23:17.235979 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 8 00:23:17.235987 kernel: fuse: init (API version 7.39)
May 8 00:23:17.235996 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:23:17.236004 kernel: ACPI: bus type drm_connector registered
May 8 00:23:17.236014 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 00:23:17.236022 systemd[1]: Stopped verity-setup.service.
May 8 00:23:17.236031 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:23:17.236039 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 00:23:17.236047 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 00:23:17.236055 systemd[1]: Mounted media.mount - External Media Directory.
May 8 00:23:17.236063 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 00:23:17.236085 systemd-journald[1143]: Collecting audit messages is disabled.
May 8 00:23:17.238292 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 00:23:17.238316 systemd-journald[1143]: Journal started
May 8 00:23:17.238340 systemd-journald[1143]: Runtime Journal (/run/log/journal/6b05becd1a8c4ca5ac30a81ef762042d) is 4.8M, max 38.3M, 33.5M free.
May 8 00:23:16.960699 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:23:16.978181 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 8 00:23:16.978653 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 00:23:17.242946 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:23:17.241578 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 00:23:17.242206 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:23:17.242829 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:23:17.242943 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 00:23:17.243572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:23:17.243687 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:23:17.244302 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:23:17.244450 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:23:17.245033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:23:17.245348 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:23:17.246317 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:23:17.246452 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 00:23:17.247041 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:23:17.247177 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:23:17.248410 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:23:17.249019 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 00:23:17.249676 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 00:23:17.250332 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 8 00:23:17.256461 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 00:23:17.260035 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 00:23:17.267782 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 00:23:17.271154 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 00:23:17.271624 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:23:17.271649 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:23:17.272788 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 8 00:23:17.277342 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 00:23:17.281608 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 00:23:17.282229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:23:17.289204 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 00:23:17.290833 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 00:23:17.293461 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:23:17.294617 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 00:23:17.295640 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:23:17.307190 systemd-journald[1143]: Time spent on flushing to /var/log/journal/6b05becd1a8c4ca5ac30a81ef762042d is 22.153ms for 1140 entries.
May 8 00:23:17.307190 systemd-journald[1143]: System Journal (/var/log/journal/6b05becd1a8c4ca5ac30a81ef762042d) is 8M, max 584.8M, 576.8M free.
May 8 00:23:17.346458 systemd-journald[1143]: Received client request to flush runtime journal.
May 8 00:23:17.346492 kernel: loop0: detected capacity change from 0 to 8
May 8 00:23:17.298180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:23:17.309580 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 00:23:17.321230 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 00:23:17.326175 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:23:17.329173 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 00:23:17.329707 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 00:23:17.330369 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 00:23:17.338378 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 00:23:17.342276 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:23:17.349739 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 00:23:17.350643 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 00:23:17.352652 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 00:23:17.356278 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:23:17.360236 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 8 00:23:17.371048 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 8 00:23:17.376131 kernel: loop1: detected capacity change from 0 to 147912
May 8 00:23:17.378323 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 8 00:23:17.391000 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 00:23:17.396976 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:23:17.419140 kernel: loop2: detected capacity change from 0 to 138176
May 8 00:23:17.423271 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
May 8 00:23:17.423286 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
May 8 00:23:17.427940 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:23:17.459120 kernel: loop3: detected capacity change from 0 to 218376
May 8 00:23:17.502130 kernel: loop4: detected capacity change from 0 to 8
May 8 00:23:17.504760 kernel: loop5: detected capacity change from 0 to 147912
May 8 00:23:17.525125 kernel: loop6: detected capacity change from 0 to 138176
May 8 00:23:17.540129 kernel: loop7: detected capacity change from 0 to 218376
May 8 00:23:17.560391 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
May 8 00:23:17.563320 (sd-merge)[1213]: Merged extensions into '/usr'.
May 8 00:23:17.567469 systemd[1]: Reload requested from client PID 1188 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 00:23:17.567487 systemd[1]: Reloading...
May 8 00:23:17.642124 zram_generator::config[1241]: No configuration found.
May 8 00:23:17.749120 ldconfig[1183]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:23:17.767656 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:23:17.820769 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:23:17.821297 systemd[1]: Reloading finished in 252 ms.
May 8 00:23:17.842673 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 00:23:17.843411 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 00:23:17.844239 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 00:23:17.855261 systemd[1]: Starting ensure-sysext.service...
May 8 00:23:17.858220 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:23:17.861210 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:23:17.873997 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:23:17.874308 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 00:23:17.874848 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:23:17.875029 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
May 8 00:23:17.875067 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
May 8 00:23:17.875199 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
May 8 00:23:17.875213 systemd[1]: Reloading...
May 8 00:23:17.879066 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:23:17.879080 systemd-tmpfiles[1286]: Skipping /boot
May 8 00:23:17.893761 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:23:17.893775 systemd-tmpfiles[1286]: Skipping /boot
May 8 00:23:17.902625 systemd-udevd[1287]: Using default interface naming scheme 'v255'.
May 8 00:23:17.958125 zram_generator::config[1314]: No configuration found.
May 8 00:23:18.032354 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1326)
May 8 00:23:18.092123 kernel: mousedev: PS/2 mouse device common for all mice
May 8 00:23:18.103140 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 8 00:23:18.108125 kernel: ACPI: button: Power Button [PWRF]
May 8 00:23:18.124666 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:23:18.168807 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 8 00:23:18.171197 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 8 00:23:18.171588 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 8 00:23:18.175212 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
May 8 00:23:18.197122 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
May 8 00:23:18.208547 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
May 8 00:23:18.215399 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 8 00:23:18.215638 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 8 00:23:18.217815 kernel: Console: switching to colour dummy device 80x25
May 8 00:23:18.219364 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 8 00:23:18.220639 kernel: [drm] features: -context_init
May 8 00:23:18.220655 kernel: [drm] number of scanouts: 1
May 8 00:23:18.220720 kernel: [drm] number of cap sets: 0
May 8 00:23:18.219886 systemd[1]: Reloading finished in 344 ms.
May 8 00:23:18.223191 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
May 8 00:23:18.230700 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:23:18.233357 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 8 00:23:18.233410 kernel: Console: switching to colour frame buffer device 160x50
May 8 00:23:18.244039 kernel: EDAC MC: Ver: 3.0.0
May 8 00:23:18.244073 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 8 00:23:18.268573 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:23:18.297797 systemd[1]: Finished ensure-sysext.service.
May 8 00:23:18.317339 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:23:18.322242 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:23:18.331235 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 00:23:18.332371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:23:18.334264 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:23:18.339840 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:23:18.342170 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:23:18.344082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:23:18.344265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:23:18.346345 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 00:23:18.348160 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:23:18.352346 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 00:23:18.356241 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:23:18.358601 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:23:18.369227 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 00:23:18.370632 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 00:23:18.372301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:23:18.373184 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:23:18.373810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:23:18.373930 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:23:18.374413 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:23:18.374992 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:23:18.379429 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:23:18.379555 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:23:18.379950 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:23:18.380072 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:23:18.387064 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:23:18.387148 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:23:18.393267 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 00:23:18.394846 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 00:23:18.398525 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 00:23:18.413408 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 00:23:18.422470 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 00:23:18.441663 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 00:23:18.444171 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:23:18.452272 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 00:23:18.452704 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 00:23:18.455765 augenrules[1445]: No rules
May 8 00:23:18.456881 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:23:18.457884 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:23:18.487817 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 00:23:18.497212 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 00:23:18.497748 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:23:18.505727 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 00:23:18.521119 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:23:18.529177 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 00:23:18.530448 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:23:18.534241 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:23:18.558147 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 00:23:18.567126 systemd-networkd[1410]: lo: Link UP
May 8 00:23:18.567133 systemd-networkd[1410]: lo: Gained carrier
May 8 00:23:18.568821 systemd-networkd[1410]: Enumeration completed
May 8 00:23:18.568887 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:23:18.573245 systemd-resolved[1411]: Positive Trust Anchors:
May 8 00:23:18.573255 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:23:18.573278 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:23:18.576307 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 8 00:23:18.578522 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:23:18.578527 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:23:18.582020 systemd-resolved[1411]: Using system hostname 'ci-4230-1-1-n-afb3d0be43'.
May 8 00:23:18.583504 systemd-networkd[1410]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:23:18.583512 systemd-networkd[1410]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:23:18.584917 systemd-networkd[1410]: eth0: Link UP
May 8 00:23:18.584920 systemd-networkd[1410]: eth0: Gained carrier
May 8 00:23:18.584932 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:23:18.590499 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 00:23:18.591812 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 00:23:18.592309 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:23:18.594082 systemd[1]: Reached target network.target - Network.
May 8 00:23:18.594768 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:23:18.595157 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:23:18.595560 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 00:23:18.595920 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 00:23:18.596193 systemd-networkd[1410]: eth1: Link UP
May 8 00:23:18.596196 systemd-networkd[1410]: eth1: Gained carrier
May 8 00:23:18.596207 systemd-networkd[1410]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:23:18.601131 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 00:23:18.601595 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:23:18.601617 systemd[1]: Reached target paths.target - Path Units.
May 8 00:23:18.601965 systemd[1]: Reached target time-set.target - System Time Set.
May 8 00:23:18.604646 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 00:23:18.605037 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 00:23:18.606241 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:23:18.609519 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:23:18.613264 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 00:23:18.616168 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 8 00:23:18.617912 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 8 00:23:18.618645 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 8 00:23:18.622351 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 00:23:18.623464 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 8 00:23:18.625501 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 8 00:23:18.626787 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 00:23:18.627632 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:23:18.628026 systemd[1]: Reached target basic.target - Basic System.
May 8 00:23:18.629143 systemd-networkd[1410]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:23:18.630083 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
May 8 00:23:18.630759 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 00:23:18.630791 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 00:23:18.637246 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 00:23:18.640120 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 8 00:23:18.646422 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 00:23:18.649337 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 00:23:18.654046 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 00:23:18.654772 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 00:23:18.657292 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 00:23:18.667291 jq[1476]: false
May 8 00:23:18.661169 systemd-networkd[1410]: eth0: DHCPv4 address 65.21.107.133/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 8 00:23:18.662083 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
May 8 00:23:18.662566 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 00:23:18.667325 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
May 8 00:23:18.671239 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 00:23:18.673881 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 00:23:18.683230 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 00:23:18.686308 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 00:23:18.686741 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 00:23:18.688765 systemd[1]: Starting update-engine.service - Update Engine...
May 8 00:23:18.694214 extend-filesystems[1477]: Found loop4
May 8 00:23:18.694214 extend-filesystems[1477]: Found loop5
May 8 00:23:18.694214 extend-filesystems[1477]: Found loop6
May 8 00:23:18.694214 extend-filesystems[1477]: Found loop7
May 8 00:23:18.694214 extend-filesystems[1477]: Found sda
May 8 00:23:18.694214 extend-filesystems[1477]: Found sda1
May 8 00:23:18.694214 extend-filesystems[1477]: Found sda2
May 8 00:23:18.694214 extend-filesystems[1477]: Found sda3
May 8 00:23:18.694214 extend-filesystems[1477]: Found usr
May 8 00:23:18.694214 extend-filesystems[1477]: Found sda4
May 8 00:23:18.694214 extend-filesystems[1477]: Found sda6
May 8 00:23:18.694214 extend-filesystems[1477]: Found sda7
May 8 00:23:18.694214 extend-filesystems[1477]: Found sda9
May 8 00:23:18.694214 extend-filesystems[1477]: Checking size of /dev/sda9
May 8 00:23:18.778118 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
May 8 00:23:18.742496 dbus-daemon[1475]: [system] SELinux support is enabled
May 8 00:23:18.700189 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 00:23:18.778415 coreos-metadata[1472]: May 08 00:23:18.728 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
May 8 00:23:18.778415 coreos-metadata[1472]: May 08 00:23:18.745 INFO Fetch successful
May 8 00:23:18.778415 coreos-metadata[1472]: May 08 00:23:18.745 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
May 8 00:23:18.778415 coreos-metadata[1472]: May 08 00:23:18.747 INFO Fetch successful
May 8 00:23:18.778577 extend-filesystems[1477]: Resized partition /dev/sda9
May 8 00:23:18.713366 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 00:23:18.778978 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024)
May 8 00:23:18.789567 jq[1490]: true
May 8 00:23:18.713714 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 00:23:18.717374 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 00:23:18.789927 update_engine[1487]: I20250508 00:23:18.740777 1487 main.cc:92] Flatcar Update Engine starting
May 8 00:23:18.789927 update_engine[1487]: I20250508 00:23:18.748597 1487 update_check_scheduler.cc:74] Next update check in 9m11s
May 8 00:23:18.717526 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 00:23:18.732422 systemd[1]: motdgen.service: Deactivated successfully.
May 8 00:23:18.792287 jq[1503]: true
May 8 00:23:18.732567 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 00:23:18.742947 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 00:23:18.760486 (ntainerd)[1506]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 00:23:18.765062 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 00:23:18.765092 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 00:23:18.770260 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 00:23:18.770277 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 00:23:18.770737 systemd[1]: Started update-engine.service - Update Engine.
May 8 00:23:18.787475 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 00:23:18.799447 tar[1501]: linux-amd64/LICENSE
May 8 00:23:18.799447 tar[1501]: linux-amd64/helm
May 8 00:23:18.842631 systemd-logind[1485]: New seat seat0.
May 8 00:23:18.847318 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button)
May 8 00:23:18.847333 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 8 00:23:18.853289 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 00:23:18.868653 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1325)
May 8 00:23:18.876488 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 8 00:23:18.877311 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 8 00:23:18.929589 bash[1540]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:23:18.932540 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 00:23:18.948773 kernel: EXT4-fs (sda9): resized filesystem to 9393147
May 8 00:23:18.956240 systemd[1]: Starting sshkeys.service...
May 8 00:23:18.974614 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:23:18.981404 sshd_keygen[1505]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 00:23:18.977655 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 8 00:23:18.981597 extend-filesystems[1499]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
May 8 00:23:18.981597 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 5
May 8 00:23:18.981597 extend-filesystems[1499]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
May 8 00:23:18.987686 extend-filesystems[1477]: Resized filesystem in /dev/sda9
May 8 00:23:18.987686 extend-filesystems[1477]: Found sr0
May 8 00:23:18.991313 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 8 00:23:18.992090 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:23:18.992257 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 8 00:23:19.019793 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 8 00:23:19.029164 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 8 00:23:19.041135 coreos-metadata[1558]: May 08 00:23:19.040 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
May 8 00:23:19.042282 coreos-metadata[1558]: May 08 00:23:19.042 INFO Fetch successful
May 8 00:23:19.046157 unknown[1558]: wrote ssh authorized keys file for user: core
May 8 00:23:19.046259 systemd[1]: issuegen.service: Deactivated successfully.
May 8 00:23:19.046447 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 8 00:23:19.060731 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 00:23:19.091160 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 00:23:19.101360 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 8 00:23:19.107338 update-ssh-keys[1573]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:23:19.109694 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 8 00:23:19.111880 systemd[1]: Reached target getty.target - Login Prompts.
May 8 00:23:19.114774 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 8 00:23:19.120677 systemd[1]: Finished sshkeys.service.
May 8 00:23:19.121065 containerd[1506]: time="2025-05-08T00:23:19.121001451Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 8 00:23:19.145717 containerd[1506]: time="2025-05-08T00:23:19.145483337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:23:19.147854 containerd[1506]: time="2025-05-08T00:23:19.146871199Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:23:19.147854 containerd[1506]: time="2025-05-08T00:23:19.146893120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:23:19.147854 containerd[1506]: time="2025-05-08T00:23:19.146907627Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:23:19.147854 containerd[1506]: time="2025-05-08T00:23:19.147025779Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 00:23:19.147854 containerd[1506]: time="2025-05-08T00:23:19.147040597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 00:23:19.147854 containerd[1506]: time="2025-05-08T00:23:19.147089178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:23:19.147854 containerd[1506]: time="2025-05-08T00:23:19.147117100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:23:19.147854 containerd[1506]: time="2025-05-08T00:23:19.147284294Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:23:19.147854 containerd[1506]: time="2025-05-08T00:23:19.147296446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:23:19.147854 containerd[1506]: time="2025-05-08T00:23:19.147307547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:23:19.147854 containerd[1506]: time="2025-05-08T00:23:19.147314470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:23:19.148041 containerd[1506]: time="2025-05-08T00:23:19.147372148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:23:19.148041 containerd[1506]: time="2025-05-08T00:23:19.147548249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:23:19.148041 containerd[1506]: time="2025-05-08T00:23:19.147638718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:23:19.148041 containerd[1506]: time="2025-05-08T00:23:19.147649649Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:23:19.148041 containerd[1506]: time="2025-05-08T00:23:19.147706906Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:23:19.148041 containerd[1506]: time="2025-05-08T00:23:19.147742402Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:23:19.152083 containerd[1506]: time="2025-05-08T00:23:19.152065760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:23:19.152178 containerd[1506]: time="2025-05-08T00:23:19.152165737Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:23:19.152228 containerd[1506]: time="2025-05-08T00:23:19.152218786Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 00:23:19.152272 containerd[1506]: time="2025-05-08T00:23:19.152263470Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 00:23:19.152334 containerd[1506]: time="2025-05-08T00:23:19.152322751Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:23:19.152478 containerd[1506]: time="2025-05-08T00:23:19.152464036Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:23:19.152707 containerd[1506]: time="2025-05-08T00:23:19.152694077Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:23:19.152821 containerd[1506]: time="2025-05-08T00:23:19.152807700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 00:23:19.152869 containerd[1506]: time="2025-05-08T00:23:19.152859558Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 00:23:19.152911 containerd[1506]: time="2025-05-08T00:23:19.152902368Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 00:23:19.152950 containerd[1506]: time="2025-05-08T00:23:19.152942213Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:23:19.153002 containerd[1506]: time="2025-05-08T00:23:19.152991916Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:23:19.153043 containerd[1506]: time="2025-05-08T00:23:19.153034486Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:23:19.153089 containerd[1506]: time="2025-05-08T00:23:19.153080302Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:23:19.153151 containerd[1506]: time="2025-05-08T00:23:19.153141126Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:23:19.153190 containerd[1506]: time="2025-05-08T00:23:19.153182293Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:23:19.153236 containerd[1506]: time="2025-05-08T00:23:19.153227658Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:23:19.153282 containerd[1506]: time="2025-05-08T00:23:19.153273284Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:23:19.153328 containerd[1506]: time="2025-05-08T00:23:19.153320131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 00:23:19.153372 containerd[1506]: time="2025-05-08T00:23:19.153363964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 00:23:19.153437 containerd[1506]: time="2025-05-08T00:23:19.153426250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 00:23:19.153480 containerd[1506]: time="2025-05-08T00:23:19.153472056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 00:23:19.153733 containerd[1506]: time="2025-05-08T00:23:19.153721374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 00:23:19.153775 containerd[1506]: time="2025-05-08T00:23:19.153767270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 00:23:19.153812 containerd[1506]: time="2025-05-08T00:23:19.153804359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 00:23:19.153848 containerd[1506]: time="2025-05-08T00:23:19.153840858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 00:23:19.153889 containerd[1506]: time="2025-05-08T00:23:19.153881374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 00:23:19.153937 containerd[1506]: time="2025-05-08T00:23:19.153928001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 00:23:19.153976 containerd[1506]: time="2025-05-08T00:23:19.153968568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 00:23:19.154012 containerd[1506]: time="2025-05-08T00:23:19.154004455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 00:23:19.154048 containerd[1506]: time="2025-05-08T00:23:19.154040602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:23:19.154090 containerd[1506]: time="2025-05-08T00:23:19.154081760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 00:23:19.154157 containerd[1506]: time="2025-05-08T00:23:19.154147583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 00:23:19.154199 containerd[1506]: time="2025-05-08T00:23:19.154190694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 00:23:19.154249 containerd[1506]: time="2025-05-08T00:23:19.154240047Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:23:19.154321 containerd[1506]: time="2025-05-08T00:23:19.154310449Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 00:23:19.154367 containerd[1506]: time="2025-05-08T00:23:19.154356896Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 00:23:19.154422 containerd[1506]: time="2025-05-08T00:23:19.154412650Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 00:23:19.154466 containerd[1506]: time="2025-05-08T00:23:19.154455630Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 00:23:19.154501 containerd[1506]: time="2025-05-08T00:23:19.154493842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 00:23:19.154545 containerd[1506]: time="2025-05-08T00:23:19.154536432Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 00:23:19.154582 containerd[1506]: time="2025-05-08T00:23:19.154574393Z" level=info msg="NRI interface is disabled by configuration."
May 8 00:23:19.154617 containerd[1506]: time="2025-05-08T00:23:19.154609399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:23:19.154874 containerd[1506]: time="2025-05-08T00:23:19.154839761Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:23:19.154999 containerd[1506]: time="2025-05-08T00:23:19.154988400Z" level=info msg="Connect containerd service"
May 8 00:23:19.155066 containerd[1506]: time="2025-05-08T00:23:19.155056017Z" level=info msg="using legacy CRI server"
May 8 00:23:19.155129 containerd[1506]: time="2025-05-08T00:23:19.155118644Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 8 00:23:19.155249 containerd[1506]: time="2025-05-08T00:23:19.155238439Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 8 00:23:19.155711 containerd[1506]: time="2025-05-08T00:23:19.155693141Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:23:19.155975 containerd[1506]: time="2025-05-08T00:23:19.155922401Z" level=info msg="Start subscribing containerd event"
May 8 00:23:19.156006 containerd[1506]: time="2025-05-08T00:23:19.155987834Z" level=info msg="Start recovering state"
May 8 00:23:19.156056 containerd[1506]: time="2025-05-08T00:23:19.156042075Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 8 00:23:19.156153 containerd[1506]: time="2025-05-08T00:23:19.156140189Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 8 00:23:19.156204 containerd[1506]: time="2025-05-08T00:23:19.156044630Z" level=info msg="Start event monitor"
May 8 00:23:19.156253 containerd[1506]: time="2025-05-08T00:23:19.156244445Z" level=info msg="Start snapshots syncer"
May 8 00:23:19.156297 containerd[1506]: time="2025-05-08T00:23:19.156288167Z" level=info msg="Start cni network conf syncer for default"
May 8 00:23:19.156339 containerd[1506]: time="2025-05-08T00:23:19.156329765Z" level=info msg="Start streaming server"
May 8 00:23:19.156437 containerd[1506]: time="2025-05-08T00:23:19.156425885Z" level=info msg="containerd successfully booted in 0.036669s"
May 8 00:23:19.156491 systemd[1]: Started containerd.service - containerd container runtime.
May 8 00:23:19.389155 tar[1501]: linux-amd64/README.md
May 8 00:23:19.396364 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 8 00:23:19.906248 systemd-networkd[1410]: eth0: Gained IPv6LL
May 8 00:23:19.907144 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
May 8 00:23:19.909417 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 8 00:23:19.911055 systemd[1]: Reached target network-online.target - Network is Online.
May 8 00:23:19.923258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:23:19.927651 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 8 00:23:19.948702 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 8 00:23:20.034276 systemd-networkd[1410]: eth1: Gained IPv6LL
May 8 00:23:20.034676 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
May 8 00:23:20.685808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:23:20.690212 systemd[1]: Reached target multi-user.target - Multi-User System.
May 8 00:23:20.691151 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:23:20.691509 systemd[1]: Startup finished in 1.154s (kernel) + 8.851s (initrd) + 4.211s (userspace) = 14.217s.
May 8 00:23:21.160276 kubelet[1602]: E0508 00:23:21.160159 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:23:21.162336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:23:21.162461 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:23:21.162691 systemd[1]: kubelet.service: Consumed 787ms CPU time, 250M memory peak.
May 8 00:23:31.301154 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:23:31.306268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:23:31.373486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:23:31.375781 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:23:31.409853 kubelet[1620]: E0508 00:23:31.409752 1620 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:23:31.412043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:23:31.412198 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:23:31.412531 systemd[1]: kubelet.service: Consumed 101ms CPU time, 106.9M memory peak.
May 8 00:23:41.551231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 8 00:23:41.556492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:23:41.628775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:23:41.638341 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:23:41.671874 kubelet[1636]: E0508 00:23:41.671815 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:23:41.674322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:23:41.674448 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:23:41.674747 systemd[1]: kubelet.service: Consumed 103ms CPU time, 102.3M memory peak.
May 8 00:23:51.225037 systemd-resolved[1411]: Clock change detected. Flushing caches.
May 8 00:23:51.225161 systemd-timesyncd[1412]: Contacted time server 5.45.97.204:123 (2.flatcar.pool.ntp.org).
May 8 00:23:51.225248 systemd-timesyncd[1412]: Initial clock synchronization to Thu 2025-05-08 00:23:51.224929 UTC.
May 8 00:23:52.754725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 8 00:23:52.759616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:23:52.844357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:23:52.847036 (kubelet)[1650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:23:52.890438 kubelet[1650]: E0508 00:23:52.890331 1650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:23:52.892230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:23:52.892486 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:23:52.892928 systemd[1]: kubelet.service: Consumed 120ms CPU time, 106.1M memory peak.
May 8 00:24:03.004719 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 8 00:24:03.009690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:24:03.088868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:24:03.091548 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:24:03.118080 kubelet[1666]: E0508 00:24:03.118024 1666 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:24:03.119642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:24:03.119778 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:24:03.120127 systemd[1]: kubelet.service: Consumed 102ms CPU time, 105.6M memory peak.
May 8 00:24:05.473579 update_engine[1487]: I20250508 00:24:05.473440 1487 update_attempter.cc:509] Updating boot flags...
May 8 00:24:05.509500 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1682)
May 8 00:24:05.556587 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1684)
May 8 00:24:05.599533 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1684)
May 8 00:24:13.254607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 8 00:24:13.259885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:24:13.345410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:24:13.348390 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:24:13.376977 kubelet[1702]: E0508 00:24:13.376936 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:24:13.380597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:24:13.380726 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:24:13.381004 systemd[1]: kubelet.service: Consumed 108ms CPU time, 103M memory peak.
May 8 00:24:23.504583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 8 00:24:23.509936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:24:23.625308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:24:23.628365 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:24:23.665972 kubelet[1717]: E0508 00:24:23.665914 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:24:23.668481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:24:23.668619 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:24:23.668886 systemd[1]: kubelet.service: Consumed 144ms CPU time, 104M memory peak.
May 8 00:24:33.754688 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
May 8 00:24:33.760619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:24:33.838952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:24:33.841336 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:24:33.871778 kubelet[1732]: E0508 00:24:33.871744 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:24:33.873562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:24:33.873723 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:24:33.874131 systemd[1]: kubelet.service: Consumed 101ms CPU time, 101M memory peak.
May 8 00:24:44.004764 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
May 8 00:24:44.009621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:24:44.094038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:24:44.098738 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:24:44.133587 kubelet[1747]: E0508 00:24:44.133535 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:24:44.134875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:24:44.134987 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:24:44.135223 systemd[1]: kubelet.service: Consumed 108ms CPU time, 103.7M memory peak.
May 8 00:24:54.254824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
May 8 00:24:54.261751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:24:54.384607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:24:54.387946 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:24:54.420032 kubelet[1762]: E0508 00:24:54.419975 1762 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:24:54.422790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:24:54.422917 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:24:54.423170 systemd[1]: kubelet.service: Consumed 135ms CPU time, 101.2M memory peak.
May 8 00:25:04.504798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
May 8 00:25:04.509857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:25:04.586751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:25:04.589134 (kubelet)[1777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:25:04.615650 kubelet[1777]: E0508 00:25:04.615612 1777 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:25:04.617748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:25:04.617893 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:25:04.618346 systemd[1]: kubelet.service: Consumed 101ms CPU time, 105.6M memory peak.
May 8 00:25:08.196639 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 8 00:25:08.202662 systemd[1]: Started sshd@0-65.21.107.133:22-139.178.89.65:59646.service - OpenSSH per-connection server daemon (139.178.89.65:59646).
May 8 00:25:09.177553 sshd[1785]: Accepted publickey for core from 139.178.89.65 port 59646 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:25:09.179286 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:25:09.188668 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 8 00:25:09.193943 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 8 00:25:09.196516 systemd-logind[1485]: New session 1 of user core.
May 8 00:25:09.203902 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 8 00:25:09.209736 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 8 00:25:09.214719 (systemd)[1789]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 00:25:09.217286 systemd-logind[1485]: New session c1 of user core.
May 8 00:25:09.338743 systemd[1789]: Queued start job for default target default.target.
May 8 00:25:09.348355 systemd[1789]: Created slice app.slice - User Application Slice.
May 8 00:25:09.348385 systemd[1789]: Reached target paths.target - Paths.
May 8 00:25:09.348530 systemd[1789]: Reached target timers.target - Timers.
May 8 00:25:09.349630 systemd[1789]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 8 00:25:09.359819 systemd[1789]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 8 00:25:09.359885 systemd[1789]: Reached target sockets.target - Sockets.
May 8 00:25:09.359926 systemd[1789]: Reached target basic.target - Basic System.
May 8 00:25:09.359960 systemd[1789]: Reached target default.target - Main User Target.
May 8 00:25:09.359986 systemd[1789]: Startup finished in 136ms.
May 8 00:25:09.360029 systemd[1]: Started user@500.service - User Manager for UID 500.
May 8 00:25:09.370563 systemd[1]: Started session-1.scope - Session 1 of User core.
May 8 00:25:10.054693 systemd[1]: Started sshd@1-65.21.107.133:22-139.178.89.65:59652.service - OpenSSH per-connection server daemon (139.178.89.65:59652).
May 8 00:25:11.017925 sshd[1800]: Accepted publickey for core from 139.178.89.65 port 59652 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:25:11.019224 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:25:11.023675 systemd-logind[1485]: New session 2 of user core.
May 8 00:25:11.031607 systemd[1]: Started session-2.scope - Session 2 of User core.
May 8 00:25:11.690117 sshd[1802]: Connection closed by 139.178.89.65 port 59652
May 8 00:25:11.690735 sshd-session[1800]: pam_unix(sshd:session): session closed for user core
May 8 00:25:11.693687 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit.
May 8 00:25:11.694211 systemd[1]: sshd@1-65.21.107.133:22-139.178.89.65:59652.service: Deactivated successfully.
May 8 00:25:11.695787 systemd[1]: session-2.scope: Deactivated successfully.
May 8 00:25:11.696552 systemd-logind[1485]: Removed session 2.
May 8 00:25:11.860734 systemd[1]: Started sshd@2-65.21.107.133:22-139.178.89.65:59662.service - OpenSSH per-connection server daemon (139.178.89.65:59662).
May 8 00:25:12.825144 sshd[1808]: Accepted publickey for core from 139.178.89.65 port 59662 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:25:12.826353 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:25:12.831647 systemd-logind[1485]: New session 3 of user core.
May 8 00:25:12.836644 systemd[1]: Started session-3.scope - Session 3 of User core.
May 8 00:25:13.493437 sshd[1810]: Connection closed by 139.178.89.65 port 59662
May 8 00:25:13.493959 sshd-session[1808]: pam_unix(sshd:session): session closed for user core
May 8 00:25:13.496253 systemd[1]: sshd@2-65.21.107.133:22-139.178.89.65:59662.service: Deactivated successfully.
May 8 00:25:13.498293 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit.
May 8 00:25:13.498332 systemd[1]: session-3.scope: Deactivated successfully.
May 8 00:25:13.499378 systemd-logind[1485]: Removed session 3.
May 8 00:25:13.664765 systemd[1]: Started sshd@3-65.21.107.133:22-139.178.89.65:59670.service - OpenSSH per-connection server daemon (139.178.89.65:59670).
May 8 00:25:14.631129 sshd[1816]: Accepted publickey for core from 139.178.89.65 port 59670 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:25:14.632381 sshd-session[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:25:14.633441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
May 8 00:25:14.640694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:25:14.646623 systemd-logind[1485]: New session 4 of user core.
May 8 00:25:14.647883 systemd[1]: Started session-4.scope - Session 4 of User core.
May 8 00:25:14.723112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:25:14.725836 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:25:14.756492 kubelet[1827]: E0508 00:25:14.755337 1827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:25:14.757382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:25:14.757582 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:25:14.757882 systemd[1]: kubelet.service: Consumed 107ms CPU time, 105.7M memory peak.
May 8 00:25:15.302232 sshd[1821]: Connection closed by 139.178.89.65 port 59670
May 8 00:25:15.302796 sshd-session[1816]: pam_unix(sshd:session): session closed for user core
May 8 00:25:15.305983 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit.
May 8 00:25:15.306613 systemd[1]: sshd@3-65.21.107.133:22-139.178.89.65:59670.service: Deactivated successfully.
May 8 00:25:15.308121 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:25:15.308957 systemd-logind[1485]: Removed session 4.
May 8 00:25:15.472681 systemd[1]: Started sshd@4-65.21.107.133:22-139.178.89.65:59686.service - OpenSSH per-connection server daemon (139.178.89.65:59686).
May 8 00:25:16.434235 sshd[1839]: Accepted publickey for core from 139.178.89.65 port 59686 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:25:16.435646 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:25:16.440042 systemd-logind[1485]: New session 5 of user core.
May 8 00:25:16.455650 systemd[1]: Started session-5.scope - Session 5 of User core.
May 8 00:25:16.955316 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 8 00:25:16.955627 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:25:16.974317 sudo[1842]: pam_unix(sudo:session): session closed for user root
May 8 00:25:17.130722 sshd[1841]: Connection closed by 139.178.89.65 port 59686
May 8 00:25:17.131442 sshd-session[1839]: pam_unix(sshd:session): session closed for user core
May 8 00:25:17.134957 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit.
May 8 00:25:17.135268 systemd[1]: sshd@4-65.21.107.133:22-139.178.89.65:59686.service: Deactivated successfully.
May 8 00:25:17.136875 systemd[1]: session-5.scope: Deactivated successfully.
May 8 00:25:17.137694 systemd-logind[1485]: Removed session 5.
May 8 00:25:17.302938 systemd[1]: Started sshd@5-65.21.107.133:22-139.178.89.65:58862.service - OpenSSH per-connection server daemon (139.178.89.65:58862).
May 8 00:25:18.271265 sshd[1848]: Accepted publickey for core from 139.178.89.65 port 58862 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:25:18.272749 sshd-session[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:25:18.276934 systemd-logind[1485]: New session 6 of user core.
May 8 00:25:18.283591 systemd[1]: Started session-6.scope - Session 6 of User core.
May 8 00:25:18.786269 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 8 00:25:18.786697 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:25:18.790155 sudo[1852]: pam_unix(sudo:session): session closed for user root
May 8 00:25:18.794792 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 8 00:25:18.795146 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:25:18.807845 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:25:18.831393 augenrules[1874]: No rules
May 8 00:25:18.832911 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:25:18.833155 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:25:18.834320 sudo[1851]: pam_unix(sudo:session): session closed for user root
May 8 00:25:18.991674 sshd[1850]: Connection closed by 139.178.89.65 port 58862
May 8 00:25:18.992219 sshd-session[1848]: pam_unix(sshd:session): session closed for user core
May 8 00:25:18.996035 systemd[1]: sshd@5-65.21.107.133:22-139.178.89.65:58862.service: Deactivated successfully.
May 8 00:25:18.998573 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:25:18.999411 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit.
May 8 00:25:19.000603 systemd-logind[1485]: Removed session 6.
May 8 00:25:19.161696 systemd[1]: Started sshd@6-65.21.107.133:22-139.178.89.65:58878.service - OpenSSH per-connection server daemon (139.178.89.65:58878).
May 8 00:25:20.125699 sshd[1883]: Accepted publickey for core from 139.178.89.65 port 58878 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:25:20.126858 sshd-session[1883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:25:20.131812 systemd-logind[1485]: New session 7 of user core.
May 8 00:25:20.142668 systemd[1]: Started session-7.scope - Session 7 of User core.
May 8 00:25:20.641527 sudo[1886]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:25:20.641830 sudo[1886]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:25:20.877646 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 8 00:25:20.878744 (dockerd)[1904]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 8 00:25:21.110252 dockerd[1904]: time="2025-05-08T00:25:21.110081396Z" level=info msg="Starting up"
May 8 00:25:21.160994 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2858188734-merged.mount: Deactivated successfully.
May 8 00:25:21.188854 dockerd[1904]: time="2025-05-08T00:25:21.188774847Z" level=info msg="Loading containers: start."
May 8 00:25:21.307478 kernel: Initializing XFRM netlink socket
May 8 00:25:21.376263 systemd-networkd[1410]: docker0: Link UP
May 8 00:25:21.404601 dockerd[1904]: time="2025-05-08T00:25:21.404557471Z" level=info msg="Loading containers: done."
May 8 00:25:21.416964 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4058458414-merged.mount: Deactivated successfully.
May 8 00:25:21.420227 dockerd[1904]: time="2025-05-08T00:25:21.420195644Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:25:21.420284 dockerd[1904]: time="2025-05-08T00:25:21.420254053Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 8 00:25:21.420345 dockerd[1904]: time="2025-05-08T00:25:21.420325117Z" level=info msg="Daemon has completed initialization"
May 8 00:25:21.445092 dockerd[1904]: time="2025-05-08T00:25:21.444434938Z" level=info msg="API listen on /run/docker.sock"
May 8 00:25:21.444787 systemd[1]: Started docker.service - Docker Application Container Engine.
May 8 00:25:22.457377 containerd[1506]: time="2025-05-08T00:25:22.457307222Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 8 00:25:22.963375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055523069.mount: Deactivated successfully.
May 8 00:25:24.848559 containerd[1506]: time="2025-05-08T00:25:24.848496954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:24.849422 containerd[1506]: time="2025-05-08T00:25:24.849385449Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682973"
May 8 00:25:24.850069 containerd[1506]: time="2025-05-08T00:25:24.850032714Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:24.851986 containerd[1506]: time="2025-05-08T00:25:24.851954748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:24.852974 containerd[1506]: time="2025-05-08T00:25:24.852785357Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.395435755s"
May 8 00:25:24.852974 containerd[1506]: time="2025-05-08T00:25:24.852809752Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
May 8 00:25:24.853532 containerd[1506]: time="2025-05-08T00:25:24.853494346Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 8 00:25:25.004638 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
May 8 00:25:25.009662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:25:25.080736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:25:25.083292 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:25:25.113480 kubelet[2153]: E0508 00:25:25.113319 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:25:25.115475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:25:25.115608 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:25:25.116024 systemd[1]: kubelet.service: Consumed 98ms CPU time, 103.8M memory peak.
May 8 00:25:26.757141 containerd[1506]: time="2025-05-08T00:25:26.757092054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:26.758134 containerd[1506]: time="2025-05-08T00:25:26.758111466Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779611"
May 8 00:25:26.759093 containerd[1506]: time="2025-05-08T00:25:26.759070094Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:26.761593 containerd[1506]: time="2025-05-08T00:25:26.761568730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:26.762334 containerd[1506]: time="2025-05-08T00:25:26.762309219Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.908781049s"
May 8 00:25:26.762392 containerd[1506]: time="2025-05-08T00:25:26.762335038Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
May 8 00:25:26.763489 containerd[1506]: time="2025-05-08T00:25:26.763451191Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 8 00:25:28.157401 containerd[1506]: time="2025-05-08T00:25:28.157335219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:28.158387 containerd[1506]: time="2025-05-08T00:25:28.158264482Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169960"
May 8 00:25:28.159096 containerd[1506]: time="2025-05-08T00:25:28.159058576Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:28.161127 containerd[1506]: time="2025-05-08T00:25:28.161081245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:28.162139 containerd[1506]: time="2025-05-08T00:25:28.161797866Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.398231337s"
May 8 00:25:28.162139 containerd[1506]: time="2025-05-08T00:25:28.161821839Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
May 8 00:25:28.162326 containerd[1506]: time="2025-05-08T00:25:28.162299921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 8 00:25:29.044123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3890608435.mount: Deactivated successfully.
May 8 00:25:29.310309 containerd[1506]: time="2025-05-08T00:25:29.310210260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:29.311084 containerd[1506]: time="2025-05-08T00:25:29.311007519Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917884"
May 8 00:25:29.311806 containerd[1506]: time="2025-05-08T00:25:29.311750470Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:29.313151 containerd[1506]: time="2025-05-08T00:25:29.313117331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:29.313549 containerd[1506]: time="2025-05-08T00:25:29.313523860Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.151199836s"
May 8 00:25:29.313591 containerd[1506]: time="2025-05-08T00:25:29.313550569Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
May 8 00:25:29.314258 containerd[1506]: time="2025-05-08T00:25:29.314243307Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 8 00:25:29.776307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount130207170.mount: Deactivated successfully.
May 8 00:25:30.522166 containerd[1506]: time="2025-05-08T00:25:30.522108952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:30.523393 containerd[1506]: time="2025-05-08T00:25:30.523359058Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335"
May 8 00:25:30.524510 containerd[1506]: time="2025-05-08T00:25:30.524476630Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:30.526709 containerd[1506]: time="2025-05-08T00:25:30.526677540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:30.527762 containerd[1506]: time="2025-05-08T00:25:30.527650346Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.213323714s"
May 8 00:25:30.527762 containerd[1506]: time="2025-05-08T00:25:30.527675803Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 8 00:25:30.528587 containerd[1506]: time="2025-05-08T00:25:30.528543383Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 8 00:25:30.961681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3413912116.mount: Deactivated successfully.
May 8 00:25:30.965600 containerd[1506]: time="2025-05-08T00:25:30.965547182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:30.966326 containerd[1506]: time="2025-05-08T00:25:30.966283291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
May 8 00:25:30.966952 containerd[1506]: time="2025-05-08T00:25:30.966906320Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:30.971487 containerd[1506]: time="2025-05-08T00:25:30.970840779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:30.972842 containerd[1506]: time="2025-05-08T00:25:30.972795255Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 444.226516ms"
May 8 00:25:30.972842 containerd[1506]: time="2025-05-08T00:25:30.972823858Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 8 00:25:30.973279 containerd[1506]: time="2025-05-08T00:25:30.973256325Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 8 00:25:31.492238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2759752523.mount: Deactivated successfully.
May 8 00:25:33.575295 containerd[1506]: time="2025-05-08T00:25:33.575235049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:33.576263 containerd[1506]: time="2025-05-08T00:25:33.576228846Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551430"
May 8 00:25:33.576977 containerd[1506]: time="2025-05-08T00:25:33.576943015Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:33.579119 containerd[1506]: time="2025-05-08T00:25:33.579085242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:25:33.580106 containerd[1506]: time="2025-05-08T00:25:33.580003359Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.606662477s"
May 8 00:25:33.580106 containerd[1506]: time="2025-05-08T00:25:33.580025329Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 8 00:25:35.254604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
May 8 00:25:35.259682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:25:35.342565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
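Each containerd "Pulled image" record above reports a size in bytes and a wall-clock duration, so the effective pull throughput is a single division. A sketch using two of the size/duration pairs quoted above (numbers copied verbatim from the log):

```python
# Size (bytes) and duration (seconds) pairs taken verbatim from the
# "Pulled image" records above; throughput = size / duration.
pulls = {
    "kube-apiserver:v1.32.4": (28679679, 2.395435755),
    "etcd:3.5.16-0": (57680541, 2.606662477),
}
for image, (size, secs) in pulls.items():
    mib_per_s = size / secs / (1024 * 1024)
    print(f"{image}: {mib_per_s:.1f} MiB/s")
```

For these two images the division works out to roughly 11.4 MiB/s and 21.1 MiB/s respectively, so the registry transfers themselves were healthy while the kubelet kept failing for an unrelated reason (the missing config file).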
May 8 00:25:35.344912 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:25:35.386487 kubelet[2316]: E0508 00:25:35.385912 2316 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:25:35.387897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:25:35.388018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:25:35.388247 systemd[1]: kubelet.service: Consumed 96ms CPU time, 101.4M memory peak.
May 8 00:25:35.967016 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:25:35.967193 systemd[1]: kubelet.service: Consumed 96ms CPU time, 101.4M memory peak.
May 8 00:25:35.983829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:25:36.007044 systemd[1]: Reload requested from client PID 2330 ('systemctl') (unit session-7.scope)...
May 8 00:25:36.007145 systemd[1]: Reloading...
May 8 00:25:36.090593 zram_generator::config[2375]: No configuration found.
May 8 00:25:36.176735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:25:36.260808 systemd[1]: Reloading finished in 253 ms.
May 8 00:25:36.298168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:25:36.302278 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:25:36.307589 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:25:36.308722 systemd[1]: kubelet.service: Deactivated successfully.
May 8 00:25:36.308923 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:25:36.308972 systemd[1]: kubelet.service: Consumed 68ms CPU time, 97.2M memory peak.
May 8 00:25:36.311715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:25:36.384098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:25:36.387265 (kubelet)[2436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:25:36.421085 kubelet[2436]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:25:36.421085 kubelet[2436]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 8 00:25:36.421085 kubelet[2436]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
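The deprecation warnings above say these flags belong in the file passed via `--config`. A hypothetical KubeletConfiguration sketch of the config-file equivalents, assuming field names from the upstream kubelet-config-file documentation; the runtime endpoint value is illustrative and not read from this host, while the volume-plugin and static-pod paths are the ones that appear elsewhere in this log:

```yaml
# Hypothetical sketch -- not this host's actual config file.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# config-file equivalent of --container-runtime-endpoint (value is illustrative)
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# config-file equivalent of --volume-plugin-dir (path as probed later in this log)
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
# static pod manifests directory, matching the "Adding static pod path" record
staticPodPath: /etc/kubernetes/manifests
```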
May 8 00:25:36.421543 kubelet[2436]: I0508 00:25:36.421091 2436 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:25:36.791481 kubelet[2436]: I0508 00:25:36.790546 2436 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 8 00:25:36.791481 kubelet[2436]: I0508 00:25:36.790571 2436 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:25:36.791481 kubelet[2436]: I0508 00:25:36.790943 2436 server.go:954] "Client rotation is on, will bootstrap in background"
May 8 00:25:36.815725 kubelet[2436]: I0508 00:25:36.815706 2436 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:25:36.820440 kubelet[2436]: E0508 00:25:36.820276 2436 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://65.21.107.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 65.21.107.133:6443: connect: connection refused" logger="UnhandledError"
May 8 00:25:36.831377 kubelet[2436]: E0508 00:25:36.831333 2436 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 8 00:25:36.831377 kubelet[2436]: I0508 00:25:36.831367 2436 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 8 00:25:36.835713 kubelet[2436]: I0508 00:25:36.835691 2436 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:25:36.838894 kubelet[2436]: I0508 00:25:36.838849 2436 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:25:36.839036 kubelet[2436]: I0508 00:25:36.838886 2436 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-n-afb3d0be43","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 8 00:25:36.840389 kubelet[2436]: I0508 00:25:36.840362 2436 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:25:36.840389 kubelet[2436]: I0508 00:25:36.840380 2436 container_manager_linux.go:304] "Creating device plugin manager"
May 8 00:25:36.840509 kubelet[2436]: I0508 00:25:36.840484 2436 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:25:36.843345 kubelet[2436]: I0508 00:25:36.843250 2436 kubelet.go:446] "Attempting to sync node with API server"
May 8 00:25:36.843345 kubelet[2436]: I0508 00:25:36.843271 2436 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:25:36.843345 kubelet[2436]: I0508 00:25:36.843290 2436 kubelet.go:352] "Adding apiserver pod source"
May 8 00:25:36.843345 kubelet[2436]: I0508 00:25:36.843307 2436 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:25:36.848746 kubelet[2436]: I0508 00:25:36.848723 2436 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 8 00:25:36.852475 kubelet[2436]: W0508 00:25:36.852199 2436 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.21.107.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-afb3d0be43&limit=500&resourceVersion=0": dial tcp 65.21.107.133:6443: connect: connection refused
May 8 00:25:36.852475 kubelet[2436]: E0508 00:25:36.852254 2436 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.21.107.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-afb3d0be43&limit=500&resourceVersion=0\": dial tcp 65.21.107.133:6443: connect: connection refused" logger="UnhandledError"
May 8 00:25:36.852475 kubelet[2436]: I0508 00:25:36.852360 2436 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:25:36.853210 kubelet[2436]: W0508 00:25:36.852883 2436 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 8 00:25:36.855321 kubelet[2436]: I0508 00:25:36.855132 2436 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 8 00:25:36.855321 kubelet[2436]: I0508 00:25:36.855158 2436 server.go:1287] "Started kubelet"
May 8 00:25:36.857071 kubelet[2436]: W0508 00:25:36.856781 2436 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.21.107.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.21.107.133:6443: connect: connection refused
May 8 00:25:36.857071 kubelet[2436]: E0508 00:25:36.856829 2436 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.21.107.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.21.107.133:6443: connect: connection refused" logger="UnhandledError"
May 8 00:25:36.857071 kubelet[2436]: I0508 00:25:36.856856 2436 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:25:36.862008 kubelet[2436]: I0508 00:25:36.861716 2436 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:25:36.862008 kubelet[2436]: I0508 00:25:36.861848 2436 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:25:36.862008 kubelet[2436]: I0508 00:25:36.861950 2436 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:25:36.866859 kubelet[2436]: E0508 00:25:36.864420 2436 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://65.21.107.133:6443/api/v1/namespaces/default/events\": dial tcp 65.21.107.133:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-1-n-afb3d0be43.183d6598aa3c6fe7 default 0 0001-01-01 00:00:00 +0000 UTC
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-n-afb3d0be43,UID:ci-4230-1-1-n-afb3d0be43,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-n-afb3d0be43,},FirstTimestamp:2025-05-08 00:25:36.855142375 +0000 UTC m=+0.464885292,LastTimestamp:2025-05-08 00:25:36.855142375 +0000 UTC m=+0.464885292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-n-afb3d0be43,}" May 8 00:25:36.867894 kubelet[2436]: I0508 00:25:36.867882 2436 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:25:36.868149 kubelet[2436]: E0508 00:25:36.868133 2436 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-afb3d0be43\" not found" May 8 00:25:36.868279 kubelet[2436]: I0508 00:25:36.868258 2436 server.go:490] "Adding debug handlers to kubelet server" May 8 00:25:36.869006 kubelet[2436]: I0508 00:25:36.868980 2436 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:25:36.871263 kubelet[2436]: E0508 00:25:36.870779 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.107.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-afb3d0be43?timeout=10s\": dial tcp 65.21.107.133:6443: connect: connection refused" interval="200ms" May 8 00:25:36.871263 kubelet[2436]: I0508 00:25:36.870965 2436 factory.go:221] Registration of the systemd container factory successfully May 8 00:25:36.871263 kubelet[2436]: I0508 00:25:36.871035 2436 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:25:36.871263 
kubelet[2436]: I0508 00:25:36.871058 2436 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:25:36.871263 kubelet[2436]: I0508 00:25:36.871119 2436 reconciler.go:26] "Reconciler: start to sync state" May 8 00:25:36.874120 kubelet[2436]: W0508 00:25:36.873282 2436 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.21.107.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.21.107.133:6443: connect: connection refused May 8 00:25:36.874120 kubelet[2436]: E0508 00:25:36.873331 2436 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.21.107.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.21.107.133:6443: connect: connection refused" logger="UnhandledError" May 8 00:25:36.874120 kubelet[2436]: I0508 00:25:36.873407 2436 factory.go:221] Registration of the containerd container factory successfully May 8 00:25:36.879983 kubelet[2436]: I0508 00:25:36.879646 2436 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:25:36.880617 kubelet[2436]: I0508 00:25:36.880568 2436 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:25:36.880617 kubelet[2436]: I0508 00:25:36.880586 2436 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:25:36.880675 kubelet[2436]: I0508 00:25:36.880622 2436 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 8 00:25:36.880675 kubelet[2436]: I0508 00:25:36.880630 2436 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:25:36.880675 kubelet[2436]: E0508 00:25:36.880662 2436 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:25:36.885776 kubelet[2436]: W0508 00:25:36.885739 2436 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://65.21.107.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 65.21.107.133:6443: connect: connection refused May 8 00:25:36.885821 kubelet[2436]: E0508 00:25:36.885776 2436 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://65.21.107.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.21.107.133:6443: connect: connection refused" logger="UnhandledError" May 8 00:25:36.903629 kubelet[2436]: I0508 00:25:36.903605 2436 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:25:36.903675 kubelet[2436]: I0508 00:25:36.903634 2436 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:25:36.903675 kubelet[2436]: I0508 00:25:36.903647 2436 state_mem.go:36] "Initialized new in-memory state store" May 8 00:25:36.905407 kubelet[2436]: I0508 00:25:36.905375 2436 policy_none.go:49] "None policy: Start" May 8 00:25:36.905407 kubelet[2436]: I0508 00:25:36.905394 2436 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:25:36.905407 kubelet[2436]: I0508 00:25:36.905402 2436 state_mem.go:35] "Initializing new in-memory state store" May 8 00:25:36.910143 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:25:36.918176 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 8 00:25:36.920886 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:25:36.930433 kubelet[2436]: I0508 00:25:36.930034 2436 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:25:36.930433 kubelet[2436]: I0508 00:25:36.930178 2436 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:25:36.930433 kubelet[2436]: I0508 00:25:36.930190 2436 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:25:36.930433 kubelet[2436]: I0508 00:25:36.930388 2436 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:25:36.931637 kubelet[2436]: E0508 00:25:36.931626 2436 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 8 00:25:36.931739 kubelet[2436]: E0508 00:25:36.931724 2436 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-1-n-afb3d0be43\" not found" May 8 00:25:36.989537 systemd[1]: Created slice kubepods-burstable-poddb364e340ed2cf006efe909f2b05da70.slice - libcontainer container kubepods-burstable-poddb364e340ed2cf006efe909f2b05da70.slice. May 8 00:25:37.001165 kubelet[2436]: E0508 00:25:37.001132 2436 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-afb3d0be43\" not found" node="ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.004751 systemd[1]: Created slice kubepods-burstable-podc73c8e5d5bb8629cab27ff61146417a2.slice - libcontainer container kubepods-burstable-podc73c8e5d5bb8629cab27ff61146417a2.slice. 
May 8 00:25:37.006356 kubelet[2436]: E0508 00:25:37.006334 2436 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-afb3d0be43\" not found" node="ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.016563 systemd[1]: Created slice kubepods-burstable-pod94e5cfb59787501b60dd003230869271.slice - libcontainer container kubepods-burstable-pod94e5cfb59787501b60dd003230869271.slice. May 8 00:25:37.018284 kubelet[2436]: E0508 00:25:37.018232 2436 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-afb3d0be43\" not found" node="ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.031711 kubelet[2436]: I0508 00:25:37.031684 2436 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.032120 kubelet[2436]: E0508 00:25:37.032083 2436 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://65.21.107.133:6443/api/v1/nodes\": dial tcp 65.21.107.133:6443: connect: connection refused" node="ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.071769 kubelet[2436]: E0508 00:25:37.071627 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.107.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-afb3d0be43?timeout=10s\": dial tcp 65.21.107.133:6443: connect: connection refused" interval="400ms" May 8 00:25:37.073203 kubelet[2436]: I0508 00:25:37.072857 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c73c8e5d5bb8629cab27ff61146417a2-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-n-afb3d0be43\" (UID: \"c73c8e5d5bb8629cab27ff61146417a2\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.073203 kubelet[2436]: I0508 00:25:37.072912 2436 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c73c8e5d5bb8629cab27ff61146417a2-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-n-afb3d0be43\" (UID: \"c73c8e5d5bb8629cab27ff61146417a2\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.073203 kubelet[2436]: I0508 00:25:37.072945 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c73c8e5d5bb8629cab27ff61146417a2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-n-afb3d0be43\" (UID: \"c73c8e5d5bb8629cab27ff61146417a2\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.073203 kubelet[2436]: I0508 00:25:37.072980 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94e5cfb59787501b60dd003230869271-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-n-afb3d0be43\" (UID: \"94e5cfb59787501b60dd003230869271\") " pod="kube-system/kube-scheduler-ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.073203 kubelet[2436]: I0508 00:25:37.073006 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db364e340ed2cf006efe909f2b05da70-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-n-afb3d0be43\" (UID: \"db364e340ed2cf006efe909f2b05da70\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.073530 kubelet[2436]: I0508 00:25:37.073027 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db364e340ed2cf006efe909f2b05da70-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-n-afb3d0be43\" (UID: \"db364e340ed2cf006efe909f2b05da70\") " 
pod="kube-system/kube-apiserver-ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.073530 kubelet[2436]: I0508 00:25:37.073056 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db364e340ed2cf006efe909f2b05da70-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-n-afb3d0be43\" (UID: \"db364e340ed2cf006efe909f2b05da70\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.073530 kubelet[2436]: I0508 00:25:37.073115 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c73c8e5d5bb8629cab27ff61146417a2-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-afb3d0be43\" (UID: \"c73c8e5d5bb8629cab27ff61146417a2\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.073530 kubelet[2436]: I0508 00:25:37.073147 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c73c8e5d5bb8629cab27ff61146417a2-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-afb3d0be43\" (UID: \"c73c8e5d5bb8629cab27ff61146417a2\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.234491 kubelet[2436]: I0508 00:25:37.234431 2436 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.234865 kubelet[2436]: E0508 00:25:37.234816 2436 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://65.21.107.133:6443/api/v1/nodes\": dial tcp 65.21.107.133:6443: connect: connection refused" node="ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.302645 containerd[1506]: time="2025-05-08T00:25:37.302577457Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-n-afb3d0be43,Uid:db364e340ed2cf006efe909f2b05da70,Namespace:kube-system,Attempt:0,}" May 8 00:25:37.307144 containerd[1506]: time="2025-05-08T00:25:37.307109132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-n-afb3d0be43,Uid:c73c8e5d5bb8629cab27ff61146417a2,Namespace:kube-system,Attempt:0,}" May 8 00:25:37.322217 containerd[1506]: time="2025-05-08T00:25:37.321895731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-n-afb3d0be43,Uid:94e5cfb59787501b60dd003230869271,Namespace:kube-system,Attempt:0,}" May 8 00:25:37.472756 kubelet[2436]: E0508 00:25:37.472710 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.107.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-afb3d0be43?timeout=10s\": dial tcp 65.21.107.133:6443: connect: connection refused" interval="800ms" May 8 00:25:37.637373 kubelet[2436]: I0508 00:25:37.637244 2436 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.637704 kubelet[2436]: E0508 00:25:37.637614 2436 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://65.21.107.133:6443/api/v1/nodes\": dial tcp 65.21.107.133:6443: connect: connection refused" node="ci-4230-1-1-n-afb3d0be43" May 8 00:25:37.724633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788028687.mount: Deactivated successfully. 
May 8 00:25:37.729948 containerd[1506]: time="2025-05-08T00:25:37.729906437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:25:37.731354 containerd[1506]: time="2025-05-08T00:25:37.731320153Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:25:37.732667 containerd[1506]: time="2025-05-08T00:25:37.732630807Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" May 8 00:25:37.733216 containerd[1506]: time="2025-05-08T00:25:37.733177440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:25:37.734814 containerd[1506]: time="2025-05-08T00:25:37.734735994Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:25:37.736120 containerd[1506]: time="2025-05-08T00:25:37.736000904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:25:37.736120 containerd[1506]: time="2025-05-08T00:25:37.736067667Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:25:37.739170 containerd[1506]: time="2025-05-08T00:25:37.739116558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:25:37.741131 
containerd[1506]: time="2025-05-08T00:25:37.740971551Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 433.798991ms" May 8 00:25:37.742755 containerd[1506]: time="2025-05-08T00:25:37.742711080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 420.718458ms" May 8 00:25:37.745644 containerd[1506]: time="2025-05-08T00:25:37.745607038Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 441.446812ms" May 8 00:25:37.846564 containerd[1506]: time="2025-05-08T00:25:37.846350404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:25:37.846564 containerd[1506]: time="2025-05-08T00:25:37.846407730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:25:37.846564 containerd[1506]: time="2025-05-08T00:25:37.846421426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:25:37.846564 containerd[1506]: time="2025-05-08T00:25:37.846520078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:25:37.849911 containerd[1506]: time="2025-05-08T00:25:37.849749173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:25:37.849911 containerd[1506]: time="2025-05-08T00:25:37.849785280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:25:37.849911 containerd[1506]: time="2025-05-08T00:25:37.849797963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:25:37.849998 containerd[1506]: time="2025-05-08T00:25:37.849881067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:25:37.853919 containerd[1506]: time="2025-05-08T00:25:37.853857635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:25:37.855524 containerd[1506]: time="2025-05-08T00:25:37.854662203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:25:37.855524 containerd[1506]: time="2025-05-08T00:25:37.854678062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:25:37.855524 containerd[1506]: time="2025-05-08T00:25:37.854729928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:25:37.868978 kubelet[2436]: W0508 00:25:37.868812 2436 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.21.107.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-afb3d0be43&limit=500&resourceVersion=0": dial tcp 65.21.107.133:6443: connect: connection refused May 8 00:25:37.868978 kubelet[2436]: E0508 00:25:37.868959 2436 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.21.107.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-afb3d0be43&limit=500&resourceVersion=0\": dial tcp 65.21.107.133:6443: connect: connection refused" logger="UnhandledError" May 8 00:25:37.870571 systemd[1]: Started cri-containerd-bcf8445b6813fcb1b4be7ac1c9521e2f3cbbaf6967d48e1f893de27bc31dcf16.scope - libcontainer container bcf8445b6813fcb1b4be7ac1c9521e2f3cbbaf6967d48e1f893de27bc31dcf16. May 8 00:25:37.874230 systemd[1]: Started cri-containerd-bee248eb87e261058a1a7e9cd8cea79029c79132632e6083c7b0b649e1145f75.scope - libcontainer container bee248eb87e261058a1a7e9cd8cea79029c79132632e6083c7b0b649e1145f75. May 8 00:25:37.878139 systemd[1]: Started cri-containerd-03b7c4eff8fc04cbb1dcf20f1c9fc29bd83d632222404d0cd07b50879e221dbe.scope - libcontainer container 03b7c4eff8fc04cbb1dcf20f1c9fc29bd83d632222404d0cd07b50879e221dbe. 
May 8 00:25:37.915535 containerd[1506]: time="2025-05-08T00:25:37.914925539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-n-afb3d0be43,Uid:db364e340ed2cf006efe909f2b05da70,Namespace:kube-system,Attempt:0,} returns sandbox id \"bee248eb87e261058a1a7e9cd8cea79029c79132632e6083c7b0b649e1145f75\"" May 8 00:25:37.921025 containerd[1506]: time="2025-05-08T00:25:37.919538254Z" level=info msg="CreateContainer within sandbox \"bee248eb87e261058a1a7e9cd8cea79029c79132632e6083c7b0b649e1145f75\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:25:37.923733 containerd[1506]: time="2025-05-08T00:25:37.923714671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-n-afb3d0be43,Uid:94e5cfb59787501b60dd003230869271,Namespace:kube-system,Attempt:0,} returns sandbox id \"03b7c4eff8fc04cbb1dcf20f1c9fc29bd83d632222404d0cd07b50879e221dbe\"" May 8 00:25:37.925878 containerd[1506]: time="2025-05-08T00:25:37.925793909Z" level=info msg="CreateContainer within sandbox \"03b7c4eff8fc04cbb1dcf20f1c9fc29bd83d632222404d0cd07b50879e221dbe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:25:37.939816 containerd[1506]: time="2025-05-08T00:25:37.939773535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-n-afb3d0be43,Uid:c73c8e5d5bb8629cab27ff61146417a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcf8445b6813fcb1b4be7ac1c9521e2f3cbbaf6967d48e1f893de27bc31dcf16\"" May 8 00:25:37.942338 containerd[1506]: time="2025-05-08T00:25:37.942307114Z" level=info msg="CreateContainer within sandbox \"bcf8445b6813fcb1b4be7ac1c9521e2f3cbbaf6967d48e1f893de27bc31dcf16\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:25:37.944704 containerd[1506]: time="2025-05-08T00:25:37.944673662Z" level=info msg="CreateContainer within sandbox 
\"03b7c4eff8fc04cbb1dcf20f1c9fc29bd83d632222404d0cd07b50879e221dbe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0eb26fb2a34efb999ed4c44b74ee8b94e41df25d21b0e82339d0f8df286f1675\"" May 8 00:25:37.946012 containerd[1506]: time="2025-05-08T00:25:37.945062682Z" level=info msg="StartContainer for \"0eb26fb2a34efb999ed4c44b74ee8b94e41df25d21b0e82339d0f8df286f1675\"" May 8 00:25:37.946422 containerd[1506]: time="2025-05-08T00:25:37.946389667Z" level=info msg="CreateContainer within sandbox \"bee248eb87e261058a1a7e9cd8cea79029c79132632e6083c7b0b649e1145f75\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cf5383eaf2f5b26c954bafaaf87ddcbdb27fedee19cf1b1297397fe361412cd7\"" May 8 00:25:37.948156 containerd[1506]: time="2025-05-08T00:25:37.948107846Z" level=info msg="StartContainer for \"cf5383eaf2f5b26c954bafaaf87ddcbdb27fedee19cf1b1297397fe361412cd7\"" May 8 00:25:37.953811 containerd[1506]: time="2025-05-08T00:25:37.953743684Z" level=info msg="CreateContainer within sandbox \"bcf8445b6813fcb1b4be7ac1c9521e2f3cbbaf6967d48e1f893de27bc31dcf16\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a58cb1a206281676e8627edd0fcb5b7d0e604db961f93aafae4d3a43a187c326\"" May 8 00:25:37.954029 containerd[1506]: time="2025-05-08T00:25:37.954008796Z" level=info msg="StartContainer for \"a58cb1a206281676e8627edd0fcb5b7d0e604db961f93aafae4d3a43a187c326\"" May 8 00:25:37.973576 systemd[1]: Started cri-containerd-cf5383eaf2f5b26c954bafaaf87ddcbdb27fedee19cf1b1297397fe361412cd7.scope - libcontainer container cf5383eaf2f5b26c954bafaaf87ddcbdb27fedee19cf1b1297397fe361412cd7. May 8 00:25:37.982578 systemd[1]: Started cri-containerd-a58cb1a206281676e8627edd0fcb5b7d0e604db961f93aafae4d3a43a187c326.scope - libcontainer container a58cb1a206281676e8627edd0fcb5b7d0e604db961f93aafae4d3a43a187c326. 
May 8 00:25:37.986227 systemd[1]: Started cri-containerd-0eb26fb2a34efb999ed4c44b74ee8b94e41df25d21b0e82339d0f8df286f1675.scope - libcontainer container 0eb26fb2a34efb999ed4c44b74ee8b94e41df25d21b0e82339d0f8df286f1675. May 8 00:25:38.030330 containerd[1506]: time="2025-05-08T00:25:38.029817153Z" level=info msg="StartContainer for \"cf5383eaf2f5b26c954bafaaf87ddcbdb27fedee19cf1b1297397fe361412cd7\" returns successfully" May 8 00:25:38.035880 containerd[1506]: time="2025-05-08T00:25:38.035806139Z" level=info msg="StartContainer for \"a58cb1a206281676e8627edd0fcb5b7d0e604db961f93aafae4d3a43a187c326\" returns successfully" May 8 00:25:38.044692 containerd[1506]: time="2025-05-08T00:25:38.044648004Z" level=info msg="StartContainer for \"0eb26fb2a34efb999ed4c44b74ee8b94e41df25d21b0e82339d0f8df286f1675\" returns successfully" May 8 00:25:38.117434 kubelet[2436]: W0508 00:25:38.117380 2436 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.21.107.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.21.107.133:6443: connect: connection refused May 8 00:25:38.117434 kubelet[2436]: E0508 00:25:38.117443 2436 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.21.107.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.21.107.133:6443: connect: connection refused" logger="UnhandledError" May 8 00:25:38.274594 kubelet[2436]: E0508 00:25:38.274032 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.107.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-afb3d0be43?timeout=10s\": dial tcp 65.21.107.133:6443: connect: connection refused" interval="1.6s" May 8 00:25:38.441681 kubelet[2436]: I0508 00:25:38.441651 2436 kubelet_node_status.go:76] "Attempting to register node" 
node="ci-4230-1-1-n-afb3d0be43"
May 8 00:25:38.908040 kubelet[2436]: E0508 00:25:38.907568 2436 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-afb3d0be43\" not found" node="ci-4230-1-1-n-afb3d0be43"
May 8 00:25:38.908040 kubelet[2436]: E0508 00:25:38.907781 2436 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-afb3d0be43\" not found" node="ci-4230-1-1-n-afb3d0be43"
May 8 00:25:38.910623 kubelet[2436]: E0508 00:25:38.910324 2436 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-afb3d0be43\" not found" node="ci-4230-1-1-n-afb3d0be43"
May 8 00:25:39.564680 kubelet[2436]: I0508 00:25:39.564646 2436 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-1-1-n-afb3d0be43"
May 8 00:25:39.564680 kubelet[2436]: E0508 00:25:39.564676 2436 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4230-1-1-n-afb3d0be43\": node \"ci-4230-1-1-n-afb3d0be43\" not found"
May 8 00:25:39.569196 kubelet[2436]: E0508 00:25:39.568529 2436 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-afb3d0be43\" not found"
May 8 00:25:39.669173 kubelet[2436]: E0508 00:25:39.669127 2436 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-afb3d0be43\" not found"
May 8 00:25:39.770043 kubelet[2436]: E0508 00:25:39.769978 2436 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-afb3d0be43\" not found"
May 8 00:25:39.870635 kubelet[2436]: E0508 00:25:39.870482 2436 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-afb3d0be43\" not found"
May 8 00:25:39.913866 kubelet[2436]: E0508 00:25:39.913823 2436 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-afb3d0be43\" not found" node="ci-4230-1-1-n-afb3d0be43"
May 8 00:25:39.915092 kubelet[2436]: E0508 00:25:39.914634 2436 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-afb3d0be43\" not found" node="ci-4230-1-1-n-afb3d0be43"
May 8 00:25:39.971535 kubelet[2436]: E0508 00:25:39.971483 2436 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-afb3d0be43\" not found"
May 8 00:25:40.072326 kubelet[2436]: E0508 00:25:40.072258 2436 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-afb3d0be43\" not found"
May 8 00:25:40.173524 kubelet[2436]: E0508 00:25:40.173333 2436 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-afb3d0be43\" not found"
May 8 00:25:40.269285 kubelet[2436]: I0508 00:25:40.269205 2436 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:40.278536 kubelet[2436]: E0508 00:25:40.278436 2436 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-1-1-n-afb3d0be43\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:40.278536 kubelet[2436]: I0508 00:25:40.278510 2436 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:40.280861 kubelet[2436]: E0508 00:25:40.280801 2436 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-1-1-n-afb3d0be43\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:40.280861 kubelet[2436]: I0508 00:25:40.280841 2436 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:40.282914 kubelet[2436]: E0508 00:25:40.282866 2436 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-1-1-n-afb3d0be43\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:40.859266 kubelet[2436]: I0508 00:25:40.859201 2436 apiserver.go:52] "Watching apiserver"
May 8 00:25:40.872281 kubelet[2436]: I0508 00:25:40.872146 2436 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 8 00:25:41.353340 systemd[1]: Reload requested from client PID 2710 ('systemctl') (unit session-7.scope)...
May 8 00:25:41.353364 systemd[1]: Reloading...
May 8 00:25:41.428537 zram_generator::config[2754]: No configuration found.
May 8 00:25:41.515402 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:25:41.607646 systemd[1]: Reloading finished in 253 ms.
May 8 00:25:41.635787 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:25:41.655663 systemd[1]: kubelet.service: Deactivated successfully.
May 8 00:25:41.655910 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:25:41.655959 systemd[1]: kubelet.service: Consumed 762ms CPU time, 123.8M memory peak.
May 8 00:25:41.661039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:25:41.743597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:25:41.746696 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:25:41.792997 kubelet[2806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:25:41.793314 kubelet[2806]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 8 00:25:41.793356 kubelet[2806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:25:41.793481 kubelet[2806]: I0508 00:25:41.793439 2806 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:25:41.798519 kubelet[2806]: I0508 00:25:41.798495 2806 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 8 00:25:41.798519 kubelet[2806]: I0508 00:25:41.798511 2806 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:25:41.799569 kubelet[2806]: I0508 00:25:41.798962 2806 server.go:954] "Client rotation is on, will bootstrap in background"
May 8 00:25:41.801469 kubelet[2806]: I0508 00:25:41.801438 2806 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 8 00:25:41.807223 kubelet[2806]: I0508 00:25:41.807075 2806 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:25:41.809068 kubelet[2806]: E0508 00:25:41.809036 2806 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 8 00:25:41.809068 kubelet[2806]: I0508 00:25:41.809065 2806 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 8 00:25:41.811387 kubelet[2806]: I0508 00:25:41.811369 2806 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:25:41.811567 kubelet[2806]: I0508 00:25:41.811538 2806 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:25:41.811701 kubelet[2806]: I0508 00:25:41.811560 2806 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-n-afb3d0be43","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 8 00:25:41.811701 kubelet[2806]: I0508 00:25:41.811683 2806 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:25:41.811701 kubelet[2806]: I0508 00:25:41.811691 2806 container_manager_linux.go:304] "Creating device plugin manager"
May 8 00:25:41.811827 kubelet[2806]: I0508 00:25:41.811720 2806 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:25:41.811885 kubelet[2806]: I0508 00:25:41.811856 2806 kubelet.go:446] "Attempting to sync node with API server"
May 8 00:25:41.811885 kubelet[2806]: I0508 00:25:41.811877 2806 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:25:41.811931 kubelet[2806]: I0508 00:25:41.811890 2806 kubelet.go:352] "Adding apiserver pod source"
May 8 00:25:41.814870 kubelet[2806]: I0508 00:25:41.814484 2806 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:25:41.819535 kubelet[2806]: I0508 00:25:41.818615 2806 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 8 00:25:41.819535 kubelet[2806]: I0508 00:25:41.818900 2806 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:25:41.819535 kubelet[2806]: I0508 00:25:41.819260 2806 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 8 00:25:41.819535 kubelet[2806]: I0508 00:25:41.819279 2806 server.go:1287] "Started kubelet"
May 8 00:25:41.820752 kubelet[2806]: I0508 00:25:41.820740 2806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:25:41.827837 kubelet[2806]: I0508 00:25:41.827810 2806 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:25:41.828591 kubelet[2806]: I0508 00:25:41.828575 2806 server.go:490] "Adding debug handlers to kubelet server"
May 8 00:25:41.829498 kubelet[2806]: I0508 00:25:41.829121 2806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:25:41.829498 kubelet[2806]: I0508 00:25:41.829296 2806 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:25:41.829498 kubelet[2806]: I0508 00:25:41.829445 2806 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 8 00:25:41.830726 kubelet[2806]: I0508 00:25:41.830642 2806 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 8 00:25:41.830872 kubelet[2806]: E0508 00:25:41.830774 2806 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-afb3d0be43\" not found"
May 8 00:25:41.832933 kubelet[2806]: I0508 00:25:41.832592 2806 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 00:25:41.832933 kubelet[2806]: I0508 00:25:41.832672 2806 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:25:41.834978 kubelet[2806]: I0508 00:25:41.834937 2806 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:25:41.835753 kubelet[2806]: I0508 00:25:41.835684 2806 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:25:41.836827 kubelet[2806]: I0508 00:25:41.836797 2806 status_manager.go:227] "Starting to sync pod status with apiserver"
May 8 00:25:41.836872 kubelet[2806]: I0508 00:25:41.836830 2806 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 8 00:25:41.836872 kubelet[2806]: I0508 00:25:41.836837 2806 kubelet.go:2388] "Starting kubelet main sync loop"
May 8 00:25:41.836911 kubelet[2806]: E0508 00:25:41.836870 2806 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:25:41.839256 kubelet[2806]: I0508 00:25:41.839225 2806 factory.go:221] Registration of the containerd container factory successfully
May 8 00:25:41.839256 kubelet[2806]: I0508 00:25:41.839241 2806 factory.go:221] Registration of the systemd container factory successfully
May 8 00:25:41.839342 kubelet[2806]: I0508 00:25:41.839295 2806 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:25:41.882622 kubelet[2806]: I0508 00:25:41.882551 2806 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 8 00:25:41.882844 kubelet[2806]: I0508 00:25:41.882724 2806 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 8 00:25:41.882957 kubelet[2806]: I0508 00:25:41.882946 2806 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:25:41.883123 kubelet[2806]: I0508 00:25:41.883112 2806 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 8 00:25:41.883180 kubelet[2806]: I0508 00:25:41.883162 2806 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 8 00:25:41.883240 kubelet[2806]: I0508 00:25:41.883233 2806 policy_none.go:49] "None policy: Start"
May 8 00:25:41.883285 kubelet[2806]: I0508 00:25:41.883279 2806 memory_manager.go:186] "Starting memorymanager" policy="None"
May 8 00:25:41.883324 kubelet[2806]: I0508 00:25:41.883319 2806 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:25:41.883444 kubelet[2806]: I0508 00:25:41.883434 2806 state_mem.go:75] "Updated machine memory state"
May 8 00:25:41.886818 kubelet[2806]: I0508 00:25:41.886798 2806 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 00:25:41.886927 kubelet[2806]: I0508 00:25:41.886910 2806 eviction_manager.go:189] "Eviction manager: starting control loop"
May 8 00:25:41.886952 kubelet[2806]: I0508 00:25:41.886923 2806 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 00:25:41.887289 kubelet[2806]: I0508 00:25:41.887279 2806 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 00:25:41.888882 kubelet[2806]: E0508 00:25:41.888815 2806 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 8 00:25:41.937823 kubelet[2806]: I0508 00:25:41.937754 2806 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:41.940835 kubelet[2806]: I0508 00:25:41.940646 2806 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:41.940835 kubelet[2806]: I0508 00:25:41.940691 2806 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:41.989251 kubelet[2806]: I0508 00:25:41.989231 2806 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-n-afb3d0be43"
May 8 00:25:41.994853 kubelet[2806]: I0508 00:25:41.994830 2806 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230-1-1-n-afb3d0be43"
May 8 00:25:41.994949 kubelet[2806]: I0508 00:25:41.994882 2806 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.134145 kubelet[2806]: I0508 00:25:42.134045 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db364e340ed2cf006efe909f2b05da70-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-n-afb3d0be43\" (UID: \"db364e340ed2cf006efe909f2b05da70\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.134145 kubelet[2806]: I0508 00:25:42.134079 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db364e340ed2cf006efe909f2b05da70-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-n-afb3d0be43\" (UID: \"db364e340ed2cf006efe909f2b05da70\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.134145 kubelet[2806]: I0508 00:25:42.134101 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c73c8e5d5bb8629cab27ff61146417a2-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-n-afb3d0be43\" (UID: \"c73c8e5d5bb8629cab27ff61146417a2\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.134145 kubelet[2806]: I0508 00:25:42.134117 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db364e340ed2cf006efe909f2b05da70-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-n-afb3d0be43\" (UID: \"db364e340ed2cf006efe909f2b05da70\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.134145 kubelet[2806]: I0508 00:25:42.134136 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c73c8e5d5bb8629cab27ff61146417a2-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-n-afb3d0be43\" (UID: \"c73c8e5d5bb8629cab27ff61146417a2\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.134348 kubelet[2806]: I0508 00:25:42.134151 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c73c8e5d5bb8629cab27ff61146417a2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-n-afb3d0be43\" (UID: \"c73c8e5d5bb8629cab27ff61146417a2\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.134348 kubelet[2806]: I0508 00:25:42.134181 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94e5cfb59787501b60dd003230869271-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-n-afb3d0be43\" (UID: \"94e5cfb59787501b60dd003230869271\") " pod="kube-system/kube-scheduler-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.134348 kubelet[2806]: I0508 00:25:42.134206 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c73c8e5d5bb8629cab27ff61146417a2-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-afb3d0be43\" (UID: \"c73c8e5d5bb8629cab27ff61146417a2\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.134348 kubelet[2806]: I0508 00:25:42.134225 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c73c8e5d5bb8629cab27ff61146417a2-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-afb3d0be43\" (UID: \"c73c8e5d5bb8629cab27ff61146417a2\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.345904 sudo[2838]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 8 00:25:42.346175 sudo[2838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 8 00:25:42.815100 kubelet[2806]: I0508 00:25:42.815068 2806 apiserver.go:52] "Watching apiserver"
May 8 00:25:42.818116 sudo[2838]: pam_unix(sudo:session): session closed for user root
May 8 00:25:42.833075 kubelet[2806]: I0508 00:25:42.833048 2806 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 8 00:25:42.870709 kubelet[2806]: I0508 00:25:42.870528 2806 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.871173 kubelet[2806]: I0508 00:25:42.870909 2806 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.871395 kubelet[2806]: I0508 00:25:42.871339 2806 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.883880 kubelet[2806]: E0508 00:25:42.883592 2806 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-1-1-n-afb3d0be43\" already exists" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.883880 kubelet[2806]: E0508 00:25:42.883726 2806 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-1-1-n-afb3d0be43\" already exists" pod="kube-system/kube-scheduler-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.883880 kubelet[2806]: E0508 00:25:42.883807 2806 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-1-1-n-afb3d0be43\" already exists" pod="kube-system/kube-apiserver-ci-4230-1-1-n-afb3d0be43"
May 8 00:25:42.899401 kubelet[2806]: I0508 00:25:42.899026 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43" podStartSLOduration=1.899016378 podStartE2EDuration="1.899016378s" podCreationTimestamp="2025-05-08 00:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:25:42.898860308 +0000 UTC m=+1.147747692" watchObservedRunningTime="2025-05-08 00:25:42.899016378 +0000 UTC m=+1.147903761"
May 8 00:25:42.912805 kubelet[2806]: I0508 00:25:42.912777 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-1-n-afb3d0be43" podStartSLOduration=1.912765661 podStartE2EDuration="1.912765661s" podCreationTimestamp="2025-05-08 00:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:25:42.904720139 +0000 UTC m=+1.153607522" watchObservedRunningTime="2025-05-08 00:25:42.912765661 +0000 UTC m=+1.161653044"
May 8 00:25:44.137121 sudo[1886]: pam_unix(sudo:session): session closed for user root
May 8 00:25:44.293862 sshd[1885]: Connection closed by 139.178.89.65 port 58878
May 8 00:25:44.295143 sshd-session[1883]: pam_unix(sshd:session): session closed for user core
May 8 00:25:44.299694 systemd[1]: sshd@6-65.21.107.133:22-139.178.89.65:58878.service: Deactivated successfully.
May 8 00:25:44.304079 systemd[1]: session-7.scope: Deactivated successfully.
May 8 00:25:44.304414 systemd[1]: session-7.scope: Consumed 3.641s CPU time, 211.1M memory peak.
May 8 00:25:44.306733 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit.
May 8 00:25:44.308785 systemd-logind[1485]: Removed session 7.
May 8 00:25:47.286357 kubelet[2806]: I0508 00:25:47.286278 2806 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 8 00:25:47.286929 containerd[1506]: time="2025-05-08T00:25:47.286800630Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 8 00:25:47.287302 kubelet[2806]: I0508 00:25:47.287264 2806 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 8 00:25:48.318398 kubelet[2806]: I0508 00:25:48.318155 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-1-n-afb3d0be43" podStartSLOduration=7.318131346 podStartE2EDuration="7.318131346s" podCreationTimestamp="2025-05-08 00:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:25:42.913083319 +0000 UTC m=+1.161970702" watchObservedRunningTime="2025-05-08 00:25:48.318131346 +0000 UTC m=+6.567018759"
May 8 00:25:48.340110 systemd[1]: Created slice kubepods-besteffort-pode1a44083_ebb2_428c_953e_0bc0be24b9f9.slice - libcontainer container kubepods-besteffort-pode1a44083_ebb2_428c_953e_0bc0be24b9f9.slice.
May 8 00:25:48.354804 systemd[1]: Created slice kubepods-burstable-podca470337_5ff3_43a5_b7a0_3833479e3537.slice - libcontainer container kubepods-burstable-podca470337_5ff3_43a5_b7a0_3833479e3537.slice.
May 8 00:25:48.375671 kubelet[2806]: I0508 00:25:48.375328 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84zld\" (UniqueName: \"kubernetes.io/projected/e1a44083-ebb2-428c-953e-0bc0be24b9f9-kube-api-access-84zld\") pod \"kube-proxy-khvv5\" (UID: \"e1a44083-ebb2-428c-953e-0bc0be24b9f9\") " pod="kube-system/kube-proxy-khvv5"
May 8 00:25:48.375671 kubelet[2806]: I0508 00:25:48.375363 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca470337-5ff3-43a5-b7a0-3833479e3537-clustermesh-secrets\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.375671 kubelet[2806]: I0508 00:25:48.375378 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ca470337-5ff3-43a5-b7a0-3833479e3537-hubble-tls\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.375671 kubelet[2806]: I0508 00:25:48.375390 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-hostproc\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.375671 kubelet[2806]: I0508 00:25:48.375403 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-etc-cni-netd\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.375671 kubelet[2806]: I0508 00:25:48.375413 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-xtables-lock\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.375932 kubelet[2806]: I0508 00:25:48.375425 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-host-proc-sys-net\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.375932 kubelet[2806]: I0508 00:25:48.375439 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e1a44083-ebb2-428c-953e-0bc0be24b9f9-kube-proxy\") pod \"kube-proxy-khvv5\" (UID: \"e1a44083-ebb2-428c-953e-0bc0be24b9f9\") " pod="kube-system/kube-proxy-khvv5"
May 8 00:25:48.375932 kubelet[2806]: I0508 00:25:48.375449 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-cilium-cgroup\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.375932 kubelet[2806]: I0508 00:25:48.375471 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca470337-5ff3-43a5-b7a0-3833479e3537-cilium-config-path\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.375932 kubelet[2806]: I0508 00:25:48.375482 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-host-proc-sys-kernel\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.376038 kubelet[2806]: I0508 00:25:48.375496 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfdwh\" (UniqueName: \"kubernetes.io/projected/ca470337-5ff3-43a5-b7a0-3833479e3537-kube-api-access-dfdwh\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.376038 kubelet[2806]: I0508 00:25:48.375509 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1a44083-ebb2-428c-953e-0bc0be24b9f9-xtables-lock\") pod \"kube-proxy-khvv5\" (UID: \"e1a44083-ebb2-428c-953e-0bc0be24b9f9\") " pod="kube-system/kube-proxy-khvv5"
May 8 00:25:48.376038 kubelet[2806]: I0508 00:25:48.375521 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-cilium-run\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.376038 kubelet[2806]: I0508 00:25:48.375531 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-cni-path\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.376038 kubelet[2806]: I0508 00:25:48.375542 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-lib-modules\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.376038 kubelet[2806]: I0508 00:25:48.375552 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1a44083-ebb2-428c-953e-0bc0be24b9f9-lib-modules\") pod \"kube-proxy-khvv5\" (UID: \"e1a44083-ebb2-428c-953e-0bc0be24b9f9\") " pod="kube-system/kube-proxy-khvv5"
May 8 00:25:48.376154 kubelet[2806]: I0508 00:25:48.375565 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-bpf-maps\") pod \"cilium-whph8\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") " pod="kube-system/cilium-whph8"
May 8 00:25:48.453184 systemd[1]: Created slice kubepods-besteffort-pod63c552a3_471d_4009_8ccd_5b4b6b57cbd2.slice - libcontainer container kubepods-besteffort-pod63c552a3_471d_4009_8ccd_5b4b6b57cbd2.slice.
May 8 00:25:48.476420 kubelet[2806]: I0508 00:25:48.476367 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63c552a3-471d-4009-8ccd-5b4b6b57cbd2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wjf4q\" (UID: \"63c552a3-471d-4009-8ccd-5b4b6b57cbd2\") " pod="kube-system/cilium-operator-6c4d7847fc-wjf4q"
May 8 00:25:48.476550 kubelet[2806]: I0508 00:25:48.476442 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8j8m\" (UniqueName: \"kubernetes.io/projected/63c552a3-471d-4009-8ccd-5b4b6b57cbd2-kube-api-access-k8j8m\") pod \"cilium-operator-6c4d7847fc-wjf4q\" (UID: \"63c552a3-471d-4009-8ccd-5b4b6b57cbd2\") " pod="kube-system/cilium-operator-6c4d7847fc-wjf4q"
May 8 00:25:48.647601 containerd[1506]: time="2025-05-08T00:25:48.647228649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-khvv5,Uid:e1a44083-ebb2-428c-953e-0bc0be24b9f9,Namespace:kube-system,Attempt:0,}"
May 8 00:25:48.658418 containerd[1506]: time="2025-05-08T00:25:48.658364473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-whph8,Uid:ca470337-5ff3-43a5-b7a0-3833479e3537,Namespace:kube-system,Attempt:0,}"
May 8 00:25:48.689450 containerd[1506]: time="2025-05-08T00:25:48.689112820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:25:48.689450 containerd[1506]: time="2025-05-08T00:25:48.689204189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:25:48.689450 containerd[1506]: time="2025-05-08T00:25:48.689233474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:25:48.689450 containerd[1506]: time="2025-05-08T00:25:48.689319744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:25:48.703512 containerd[1506]: time="2025-05-08T00:25:48.703268671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:25:48.703512 containerd[1506]: time="2025-05-08T00:25:48.703319465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:25:48.703512 containerd[1506]: time="2025-05-08T00:25:48.703333672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:25:48.703512 containerd[1506]: time="2025-05-08T00:25:48.703405726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:25:48.715009 systemd[1]: Started cri-containerd-98f149225f42873f6f873a761de833f82db5192e0cb7950db2998286db3573bb.scope - libcontainer container 98f149225f42873f6f873a761de833f82db5192e0cb7950db2998286db3573bb.
May 8 00:25:48.730558 systemd[1]: Started cri-containerd-5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e.scope - libcontainer container 5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e.
May 8 00:25:48.750881 containerd[1506]: time="2025-05-08T00:25:48.750841523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-khvv5,Uid:e1a44083-ebb2-428c-953e-0bc0be24b9f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"98f149225f42873f6f873a761de833f82db5192e0cb7950db2998286db3573bb\""
May 8 00:25:48.755669 containerd[1506]: time="2025-05-08T00:25:48.755640994Z" level=info msg="CreateContainer within sandbox \"98f149225f42873f6f873a761de833f82db5192e0cb7950db2998286db3573bb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 8 00:25:48.757317 containerd[1506]: time="2025-05-08T00:25:48.757290237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wjf4q,Uid:63c552a3-471d-4009-8ccd-5b4b6b57cbd2,Namespace:kube-system,Attempt:0,}"
May 8 00:25:48.773607 containerd[1506]: time="2025-05-08T00:25:48.773575411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-whph8,Uid:ca470337-5ff3-43a5-b7a0-3833479e3537,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\""
May 8 00:25:48.775913 containerd[1506]: time="2025-05-08T00:25:48.775873348Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 8 00:25:48.777641 containerd[1506]: time="2025-05-08T00:25:48.777624318Z" level=info msg="CreateContainer within sandbox \"98f149225f42873f6f873a761de833f82db5192e0cb7950db2998286db3573bb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"316f0c21c435bdc8d0b8958359b7ee964996585a0d859655edf308526c7c3666\""
May 8 00:25:48.778214 containerd[1506]: time="2025-05-08T00:25:48.778132111Z" level=info msg="StartContainer for \"316f0c21c435bdc8d0b8958359b7ee964996585a0d859655edf308526c7c3666\""
May 8 00:25:48.795888 containerd[1506]: time="2025-05-08T00:25:48.795686912Z" level=info msg="loading plugin
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:25:48.795888 containerd[1506]: time="2025-05-08T00:25:48.795736174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:25:48.795888 containerd[1506]: time="2025-05-08T00:25:48.795748317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:25:48.795888 containerd[1506]: time="2025-05-08T00:25:48.795824508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:25:48.806826 systemd[1]: Started cri-containerd-316f0c21c435bdc8d0b8958359b7ee964996585a0d859655edf308526c7c3666.scope - libcontainer container 316f0c21c435bdc8d0b8958359b7ee964996585a0d859655edf308526c7c3666. May 8 00:25:48.814597 systemd[1]: Started cri-containerd-01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826.scope - libcontainer container 01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826. 
May 8 00:25:48.832191 containerd[1506]: time="2025-05-08T00:25:48.832064521Z" level=info msg="StartContainer for \"316f0c21c435bdc8d0b8958359b7ee964996585a0d859655edf308526c7c3666\" returns successfully" May 8 00:25:48.850989 containerd[1506]: time="2025-05-08T00:25:48.850964540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wjf4q,Uid:63c552a3-471d-4009-8ccd-5b4b6b57cbd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\"" May 8 00:25:48.896132 kubelet[2806]: I0508 00:25:48.895580 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-khvv5" podStartSLOduration=0.895566817 podStartE2EDuration="895.566817ms" podCreationTimestamp="2025-05-08 00:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:25:48.895391612 +0000 UTC m=+7.144278995" watchObservedRunningTime="2025-05-08 00:25:48.895566817 +0000 UTC m=+7.144454200" May 8 00:25:53.268801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount28661036.mount: Deactivated successfully. 
May 8 00:25:54.575169 containerd[1506]: time="2025-05-08T00:25:54.568315529Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 8 00:25:54.577093 containerd[1506]: time="2025-05-08T00:25:54.576770408Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.800864871s" May 8 00:25:54.577093 containerd[1506]: time="2025-05-08T00:25:54.576798100Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 8 00:25:54.578764 containerd[1506]: time="2025-05-08T00:25:54.578739800Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:25:54.582173 containerd[1506]: time="2025-05-08T00:25:54.582096179Z" level=info msg="CreateContainer within sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:25:54.593082 containerd[1506]: time="2025-05-08T00:25:54.592521191Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:25:54.593602 containerd[1506]: time="2025-05-08T00:25:54.593205864Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:25:54.628084 containerd[1506]: time="2025-05-08T00:25:54.628030389Z" level=info msg="CreateContainer within sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6\"" May 8 00:25:54.628752 containerd[1506]: time="2025-05-08T00:25:54.628732564Z" level=info msg="StartContainer for \"0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6\"" May 8 00:25:54.729585 systemd[1]: Started cri-containerd-0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6.scope - libcontainer container 0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6. May 8 00:25:54.756613 containerd[1506]: time="2025-05-08T00:25:54.756566370Z" level=info msg="StartContainer for \"0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6\" returns successfully" May 8 00:25:54.764751 systemd[1]: cri-containerd-0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6.scope: Deactivated successfully. 
May 8 00:25:54.851275 containerd[1506]: time="2025-05-08T00:25:54.838411504Z" level=info msg="shim disconnected" id=0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6 namespace=k8s.io May 8 00:25:54.851275 containerd[1506]: time="2025-05-08T00:25:54.850862763Z" level=warning msg="cleaning up after shim disconnected" id=0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6 namespace=k8s.io May 8 00:25:54.851275 containerd[1506]: time="2025-05-08T00:25:54.850874976Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:25:54.904879 containerd[1506]: time="2025-05-08T00:25:54.904844034Z" level=info msg="CreateContainer within sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:25:54.917575 containerd[1506]: time="2025-05-08T00:25:54.917543905Z" level=info msg="CreateContainer within sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be\"" May 8 00:25:54.918058 containerd[1506]: time="2025-05-08T00:25:54.918037813Z" level=info msg="StartContainer for \"3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be\"" May 8 00:25:54.950579 systemd[1]: Started cri-containerd-3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be.scope - libcontainer container 3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be. May 8 00:25:54.969904 containerd[1506]: time="2025-05-08T00:25:54.969871251Z" level=info msg="StartContainer for \"3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be\" returns successfully" May 8 00:25:54.978873 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:25:54.979063 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 8 00:25:54.979186 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 00:25:54.983106 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:25:54.983683 systemd[1]: cri-containerd-3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be.scope: Deactivated successfully. May 8 00:25:55.004204 containerd[1506]: time="2025-05-08T00:25:55.004069892Z" level=info msg="shim disconnected" id=3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be namespace=k8s.io May 8 00:25:55.004204 containerd[1506]: time="2025-05-08T00:25:55.004113244Z" level=warning msg="cleaning up after shim disconnected" id=3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be namespace=k8s.io May 8 00:25:55.004204 containerd[1506]: time="2025-05-08T00:25:55.004121349Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:25:55.012023 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:25:55.623629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6-rootfs.mount: Deactivated successfully. May 8 00:25:55.914921 containerd[1506]: time="2025-05-08T00:25:55.914357659Z" level=info msg="CreateContainer within sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:25:55.967210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2460062114.mount: Deactivated successfully. 
May 8 00:25:55.972977 containerd[1506]: time="2025-05-08T00:25:55.972933714Z" level=info msg="CreateContainer within sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7\"" May 8 00:25:55.974510 containerd[1506]: time="2025-05-08T00:25:55.973687897Z" level=info msg="StartContainer for \"815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7\"" May 8 00:25:56.002586 systemd[1]: Started cri-containerd-815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7.scope - libcontainer container 815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7. May 8 00:25:56.025946 systemd[1]: cri-containerd-815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7.scope: Deactivated successfully. May 8 00:25:56.027026 containerd[1506]: time="2025-05-08T00:25:56.026940543Z" level=info msg="StartContainer for \"815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7\" returns successfully" May 8 00:25:56.043940 containerd[1506]: time="2025-05-08T00:25:56.043876953Z" level=info msg="shim disconnected" id=815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7 namespace=k8s.io May 8 00:25:56.043940 containerd[1506]: time="2025-05-08T00:25:56.043923721Z" level=warning msg="cleaning up after shim disconnected" id=815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7 namespace=k8s.io May 8 00:25:56.043940 containerd[1506]: time="2025-05-08T00:25:56.043931485Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:25:56.624365 systemd[1]: run-containerd-runc-k8s.io-815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7-runc.VDOkSv.mount: Deactivated successfully. 
May 8 00:25:56.625009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7-rootfs.mount: Deactivated successfully. May 8 00:25:56.916304 containerd[1506]: time="2025-05-08T00:25:56.915143353Z" level=info msg="CreateContainer within sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:25:56.937864 containerd[1506]: time="2025-05-08T00:25:56.937727047Z" level=info msg="CreateContainer within sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb\"" May 8 00:25:56.939164 containerd[1506]: time="2025-05-08T00:25:56.939131530Z" level=info msg="StartContainer for \"e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb\"" May 8 00:25:56.975033 systemd[1]: Started cri-containerd-e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb.scope - libcontainer container e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb. May 8 00:25:57.007125 systemd[1]: cri-containerd-e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb.scope: Deactivated successfully. 
May 8 00:25:57.008045 containerd[1506]: time="2025-05-08T00:25:57.007887283Z" level=info msg="StartContainer for \"e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb\" returns successfully" May 8 00:25:57.050947 containerd[1506]: time="2025-05-08T00:25:57.050888919Z" level=info msg="shim disconnected" id=e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb namespace=k8s.io May 8 00:25:57.050947 containerd[1506]: time="2025-05-08T00:25:57.050937200Z" level=warning msg="cleaning up after shim disconnected" id=e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb namespace=k8s.io May 8 00:25:57.050947 containerd[1506]: time="2025-05-08T00:25:57.050944383Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:25:57.070013 containerd[1506]: time="2025-05-08T00:25:57.069946823Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:25:57.071794 containerd[1506]: time="2025-05-08T00:25:57.071749185Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 8 00:25:57.082426 containerd[1506]: time="2025-05-08T00:25:57.082387876Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:25:57.084500 containerd[1506]: time="2025-05-08T00:25:57.084194046Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.505414514s" May 8 00:25:57.084500 containerd[1506]: time="2025-05-08T00:25:57.084229242Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 8 00:25:57.086311 containerd[1506]: time="2025-05-08T00:25:57.086234312Z" level=info msg="CreateContainer within sandbox \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:25:57.101930 containerd[1506]: time="2025-05-08T00:25:57.101897050Z" level=info msg="CreateContainer within sandbox \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\"" May 8 00:25:57.103084 containerd[1506]: time="2025-05-08T00:25:57.102222434Z" level=info msg="StartContainer for \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\"" May 8 00:25:57.123568 systemd[1]: Started cri-containerd-1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248.scope - libcontainer container 1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248. May 8 00:25:57.146209 containerd[1506]: time="2025-05-08T00:25:57.146152739Z" level=info msg="StartContainer for \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\" returns successfully" May 8 00:25:57.624633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb-rootfs.mount: Deactivated successfully. 
May 8 00:25:57.918978 containerd[1506]: time="2025-05-08T00:25:57.918795706Z" level=info msg="CreateContainer within sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:25:57.929446 kubelet[2806]: I0508 00:25:57.926517 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wjf4q" podStartSLOduration=1.693617672 podStartE2EDuration="9.926496351s" podCreationTimestamp="2025-05-08 00:25:48 +0000 UTC" firstStartedPulling="2025-05-08 00:25:48.852182721 +0000 UTC m=+7.101070104" lastFinishedPulling="2025-05-08 00:25:57.085061401 +0000 UTC m=+15.333948783" observedRunningTime="2025-05-08 00:25:57.926214478 +0000 UTC m=+16.175101860" watchObservedRunningTime="2025-05-08 00:25:57.926496351 +0000 UTC m=+16.175383744" May 8 00:25:57.936723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3893556205.mount: Deactivated successfully. May 8 00:25:57.937263 containerd[1506]: time="2025-05-08T00:25:57.936700484Z" level=info msg="CreateContainer within sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\"" May 8 00:25:57.937845 containerd[1506]: time="2025-05-08T00:25:57.937816260Z" level=info msg="StartContainer for \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\"" May 8 00:25:57.968561 systemd[1]: Started cri-containerd-a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22.scope - libcontainer container a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22. 
May 8 00:25:57.988920 containerd[1506]: time="2025-05-08T00:25:57.988887187Z" level=info msg="StartContainer for \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\" returns successfully" May 8 00:25:58.115108 kubelet[2806]: I0508 00:25:58.115082 2806 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 00:25:58.150585 systemd[1]: Created slice kubepods-burstable-pod8f5d9f76_0595_4d83_a404_471370c820ec.slice - libcontainer container kubepods-burstable-pod8f5d9f76_0595_4d83_a404_471370c820ec.slice. May 8 00:25:58.156502 systemd[1]: Created slice kubepods-burstable-pode83e1988_4584_4eb5_a409_71f59cbbde37.slice - libcontainer container kubepods-burstable-pode83e1988_4584_4eb5_a409_71f59cbbde37.slice. May 8 00:25:58.245071 kubelet[2806]: I0508 00:25:58.244973 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e83e1988-4584-4eb5-a409-71f59cbbde37-config-volume\") pod \"coredns-668d6bf9bc-2m44z\" (UID: \"e83e1988-4584-4eb5-a409-71f59cbbde37\") " pod="kube-system/coredns-668d6bf9bc-2m44z" May 8 00:25:58.245071 kubelet[2806]: I0508 00:25:58.245014 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbs6v\" (UniqueName: \"kubernetes.io/projected/e83e1988-4584-4eb5-a409-71f59cbbde37-kube-api-access-nbs6v\") pod \"coredns-668d6bf9bc-2m44z\" (UID: \"e83e1988-4584-4eb5-a409-71f59cbbde37\") " pod="kube-system/coredns-668d6bf9bc-2m44z" May 8 00:25:58.245071 kubelet[2806]: I0508 00:25:58.245032 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f5d9f76-0595-4d83-a404-471370c820ec-config-volume\") pod \"coredns-668d6bf9bc-vmf8t\" (UID: \"8f5d9f76-0595-4d83-a404-471370c820ec\") " pod="kube-system/coredns-668d6bf9bc-vmf8t" May 8 00:25:58.245071 kubelet[2806]: 
I0508 00:25:58.245046 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrshg\" (UniqueName: \"kubernetes.io/projected/8f5d9f76-0595-4d83-a404-471370c820ec-kube-api-access-vrshg\") pod \"coredns-668d6bf9bc-vmf8t\" (UID: \"8f5d9f76-0595-4d83-a404-471370c820ec\") " pod="kube-system/coredns-668d6bf9bc-vmf8t" May 8 00:25:58.454994 containerd[1506]: time="2025-05-08T00:25:58.454728892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vmf8t,Uid:8f5d9f76-0595-4d83-a404-471370c820ec,Namespace:kube-system,Attempt:0,}" May 8 00:25:58.458827 containerd[1506]: time="2025-05-08T00:25:58.458809655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2m44z,Uid:e83e1988-4584-4eb5-a409-71f59cbbde37,Namespace:kube-system,Attempt:0,}" May 8 00:25:58.627312 systemd[1]: run-containerd-runc-k8s.io-a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22-runc.aNCoi8.mount: Deactivated successfully. 
May 8 00:26:00.861526 systemd-networkd[1410]: cilium_host: Link UP May 8 00:26:00.863074 systemd-networkd[1410]: cilium_net: Link UP May 8 00:26:00.864258 systemd-networkd[1410]: cilium_net: Gained carrier May 8 00:26:00.864983 systemd-networkd[1410]: cilium_host: Gained carrier May 8 00:26:00.940089 systemd-networkd[1410]: cilium_vxlan: Link UP May 8 00:26:00.940094 systemd-networkd[1410]: cilium_vxlan: Gained carrier May 8 00:26:01.285618 systemd-networkd[1410]: cilium_host: Gained IPv6LL May 8 00:26:01.295495 kernel: NET: Registered PF_ALG protocol family May 8 00:26:01.339650 systemd-networkd[1410]: cilium_net: Gained IPv6LL May 8 00:26:01.824405 systemd-networkd[1410]: lxc_health: Link UP May 8 00:26:01.831524 systemd-networkd[1410]: lxc_health: Gained carrier May 8 00:26:02.017716 systemd-networkd[1410]: lxcb4cf7509121e: Link UP May 8 00:26:02.021530 kernel: eth0: renamed from tmpa2952 May 8 00:26:02.037697 systemd-networkd[1410]: lxcb4cf7509121e: Gained carrier May 8 00:26:02.037819 systemd-networkd[1410]: lxc1e5a586844cc: Link UP May 8 00:26:02.043617 kernel: eth0: renamed from tmp1b5b7 May 8 00:26:02.051881 systemd-networkd[1410]: lxc1e5a586844cc: Gained carrier May 8 00:26:02.677189 kubelet[2806]: I0508 00:26:02.677134 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-whph8" podStartSLOduration=8.874077189 podStartE2EDuration="14.677118443s" podCreationTimestamp="2025-05-08 00:25:48 +0000 UTC" firstStartedPulling="2025-05-08 00:25:48.774564347 +0000 UTC m=+7.023451730" lastFinishedPulling="2025-05-08 00:25:54.577605601 +0000 UTC m=+12.826492984" observedRunningTime="2025-05-08 00:25:58.947325553 +0000 UTC m=+17.196212976" watchObservedRunningTime="2025-05-08 00:26:02.677118443 +0000 UTC m=+20.926005826" May 8 00:26:02.715595 systemd-networkd[1410]: cilium_vxlan: Gained IPv6LL May 8 00:26:03.483663 systemd-networkd[1410]: lxcb4cf7509121e: Gained IPv6LL May 8 00:26:03.611668 systemd-networkd[1410]: lxc_health: 
Gained IPv6LL May 8 00:26:03.931629 systemd-networkd[1410]: lxc1e5a586844cc: Gained IPv6LL May 8 00:26:05.217610 containerd[1506]: time="2025-05-08T00:26:05.217530783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:26:05.217610 containerd[1506]: time="2025-05-08T00:26:05.217581908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:26:05.224433 containerd[1506]: time="2025-05-08T00:26:05.217591126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:26:05.224433 containerd[1506]: time="2025-05-08T00:26:05.217637602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:26:05.243202 systemd[1]: run-containerd-runc-k8s.io-a2952827baef7208a4aa313d4b493eb3dbf50dcf7e0630f2edb8ec573f2b300e-runc.dYmzzS.mount: Deactivated successfully. May 8 00:26:05.248738 containerd[1506]: time="2025-05-08T00:26:05.248479706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:26:05.248738 containerd[1506]: time="2025-05-08T00:26:05.248587026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:26:05.248738 containerd[1506]: time="2025-05-08T00:26:05.248603366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:26:05.253477 containerd[1506]: time="2025-05-08T00:26:05.249539690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:26:05.262022 systemd[1]: Started cri-containerd-a2952827baef7208a4aa313d4b493eb3dbf50dcf7e0630f2edb8ec573f2b300e.scope - libcontainer container a2952827baef7208a4aa313d4b493eb3dbf50dcf7e0630f2edb8ec573f2b300e. May 8 00:26:05.281610 systemd[1]: Started cri-containerd-1b5b7194b3130a09ff20167cb2886271bd2f02cd698028bd9b6922a9735c7182.scope - libcontainer container 1b5b7194b3130a09ff20167cb2886271bd2f02cd698028bd9b6922a9735c7182. May 8 00:26:05.331524 containerd[1506]: time="2025-05-08T00:26:05.330068542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2m44z,Uid:e83e1988-4584-4eb5-a409-71f59cbbde37,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2952827baef7208a4aa313d4b493eb3dbf50dcf7e0630f2edb8ec573f2b300e\"" May 8 00:26:05.337546 containerd[1506]: time="2025-05-08T00:26:05.337268313Z" level=info msg="CreateContainer within sandbox \"a2952827baef7208a4aa313d4b493eb3dbf50dcf7e0630f2edb8ec573f2b300e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:26:05.343391 containerd[1506]: time="2025-05-08T00:26:05.343357496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vmf8t,Uid:8f5d9f76-0595-4d83-a404-471370c820ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b5b7194b3130a09ff20167cb2886271bd2f02cd698028bd9b6922a9735c7182\"" May 8 00:26:05.347527 containerd[1506]: time="2025-05-08T00:26:05.346452381Z" level=info msg="CreateContainer within sandbox \"1b5b7194b3130a09ff20167cb2886271bd2f02cd698028bd9b6922a9735c7182\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:26:05.360556 containerd[1506]: time="2025-05-08T00:26:05.360526940Z" level=info msg="CreateContainer within sandbox \"a2952827baef7208a4aa313d4b493eb3dbf50dcf7e0630f2edb8ec573f2b300e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"24b9896c3664c5137c26fb0acb1b932c574c86f0c1e13725a14b977df9f67896\"" May 8 
00:26:05.362338 containerd[1506]: time="2025-05-08T00:26:05.362246243Z" level=info msg="StartContainer for \"24b9896c3664c5137c26fb0acb1b932c574c86f0c1e13725a14b977df9f67896\"" May 8 00:26:05.363536 containerd[1506]: time="2025-05-08T00:26:05.363447821Z" level=info msg="CreateContainer within sandbox \"1b5b7194b3130a09ff20167cb2886271bd2f02cd698028bd9b6922a9735c7182\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6dd34788054814f42dc9667b94bca7a3deb6063751306531281ef0645b7467d1\"" May 8 00:26:05.365798 containerd[1506]: time="2025-05-08T00:26:05.364317010Z" level=info msg="StartContainer for \"6dd34788054814f42dc9667b94bca7a3deb6063751306531281ef0645b7467d1\"" May 8 00:26:05.392616 systemd[1]: Started cri-containerd-24b9896c3664c5137c26fb0acb1b932c574c86f0c1e13725a14b977df9f67896.scope - libcontainer container 24b9896c3664c5137c26fb0acb1b932c574c86f0c1e13725a14b977df9f67896. May 8 00:26:05.394703 systemd[1]: Started cri-containerd-6dd34788054814f42dc9667b94bca7a3deb6063751306531281ef0645b7467d1.scope - libcontainer container 6dd34788054814f42dc9667b94bca7a3deb6063751306531281ef0645b7467d1. 
May 8 00:26:05.425813 containerd[1506]: time="2025-05-08T00:26:05.425784984Z" level=info msg="StartContainer for \"24b9896c3664c5137c26fb0acb1b932c574c86f0c1e13725a14b977df9f67896\" returns successfully" May 8 00:26:05.426080 containerd[1506]: time="2025-05-08T00:26:05.425946205Z" level=info msg="StartContainer for \"6dd34788054814f42dc9667b94bca7a3deb6063751306531281ef0645b7467d1\" returns successfully" May 8 00:26:05.964834 kubelet[2806]: I0508 00:26:05.964792 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vmf8t" podStartSLOduration=17.964773426 podStartE2EDuration="17.964773426s" podCreationTimestamp="2025-05-08 00:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:26:05.964415149 +0000 UTC m=+24.213302542" watchObservedRunningTime="2025-05-08 00:26:05.964773426 +0000 UTC m=+24.213660819" May 8 00:26:05.966371 kubelet[2806]: I0508 00:26:05.966216 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2m44z" podStartSLOduration=17.966182932 podStartE2EDuration="17.966182932s" podCreationTimestamp="2025-05-08 00:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:26:05.950483296 +0000 UTC m=+24.199370689" watchObservedRunningTime="2025-05-08 00:26:05.966182932 +0000 UTC m=+24.215070325" May 8 00:30:19.752931 systemd[1]: Started sshd@7-65.21.107.133:22-139.178.89.65:52858.service - OpenSSH per-connection server daemon (139.178.89.65:52858). 
May 8 00:30:20.742977 sshd[4222]: Accepted publickey for core from 139.178.89.65 port 52858 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:30:20.744842 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:30:20.750173 systemd-logind[1485]: New session 8 of user core.
May 8 00:30:20.754675 systemd[1]: Started session-8.scope - Session 8 of User core.
May 8 00:30:21.833248 sshd[4224]: Connection closed by 139.178.89.65 port 52858
May 8 00:30:21.833821 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
May 8 00:30:21.836357 systemd[1]: sshd@7-65.21.107.133:22-139.178.89.65:52858.service: Deactivated successfully.
May 8 00:30:21.839674 systemd[1]: session-8.scope: Deactivated successfully.
May 8 00:30:21.841742 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit.
May 8 00:30:21.843389 systemd-logind[1485]: Removed session 8.
May 8 00:30:27.012151 systemd[1]: Started sshd@8-65.21.107.133:22-139.178.89.65:48984.service - OpenSSH per-connection server daemon (139.178.89.65:48984).
May 8 00:30:27.993224 sshd[4237]: Accepted publickey for core from 139.178.89.65 port 48984 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:30:27.995276 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:30:28.003214 systemd-logind[1485]: New session 9 of user core.
May 8 00:30:28.006767 systemd[1]: Started session-9.scope - Session 9 of User core.
May 8 00:30:28.766483 sshd[4239]: Connection closed by 139.178.89.65 port 48984
May 8 00:30:28.767332 sshd-session[4237]: pam_unix(sshd:session): session closed for user core
May 8 00:30:28.772817 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit.
May 8 00:30:28.774450 systemd[1]: sshd@8-65.21.107.133:22-139.178.89.65:48984.service: Deactivated successfully.
May 8 00:30:28.778700 systemd[1]: session-9.scope: Deactivated successfully.
May 8 00:30:28.781118 systemd-logind[1485]: Removed session 9.
May 8 00:30:33.938708 systemd[1]: Started sshd@9-65.21.107.133:22-139.178.89.65:48994.service - OpenSSH per-connection server daemon (139.178.89.65:48994).
May 8 00:30:34.907372 sshd[4253]: Accepted publickey for core from 139.178.89.65 port 48994 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:30:34.908835 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:30:34.914614 systemd-logind[1485]: New session 10 of user core.
May 8 00:30:34.916708 systemd[1]: Started session-10.scope - Session 10 of User core.
May 8 00:30:35.632289 sshd[4255]: Connection closed by 139.178.89.65 port 48994
May 8 00:30:35.632901 sshd-session[4253]: pam_unix(sshd:session): session closed for user core
May 8 00:30:35.635922 systemd[1]: sshd@9-65.21.107.133:22-139.178.89.65:48994.service: Deactivated successfully.
May 8 00:30:35.637519 systemd[1]: session-10.scope: Deactivated successfully.
May 8 00:30:35.638196 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit.
May 8 00:30:35.639160 systemd-logind[1485]: Removed session 10.
May 8 00:30:35.803686 systemd[1]: Started sshd@10-65.21.107.133:22-139.178.89.65:49004.service - OpenSSH per-connection server daemon (139.178.89.65:49004).
May 8 00:30:36.769696 sshd[4267]: Accepted publickey for core from 139.178.89.65 port 49004 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:30:36.770910 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:30:36.775438 systemd-logind[1485]: New session 11 of user core.
May 8 00:30:36.783598 systemd[1]: Started session-11.scope - Session 11 of User core.
May 8 00:30:37.554474 sshd[4269]: Connection closed by 139.178.89.65 port 49004
May 8 00:30:37.555023 sshd-session[4267]: pam_unix(sshd:session): session closed for user core
May 8 00:30:37.559210 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit.
May 8 00:30:37.559880 systemd[1]: sshd@10-65.21.107.133:22-139.178.89.65:49004.service: Deactivated successfully.
May 8 00:30:37.562228 systemd[1]: session-11.scope: Deactivated successfully.
May 8 00:30:37.563340 systemd-logind[1485]: Removed session 11.
May 8 00:30:37.725721 systemd[1]: Started sshd@11-65.21.107.133:22-139.178.89.65:51810.service - OpenSSH per-connection server daemon (139.178.89.65:51810).
May 8 00:30:38.700191 sshd[4279]: Accepted publickey for core from 139.178.89.65 port 51810 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:30:38.703507 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:30:38.710172 systemd-logind[1485]: New session 12 of user core.
May 8 00:30:38.714601 systemd[1]: Started session-12.scope - Session 12 of User core.
May 8 00:30:39.439424 sshd[4281]: Connection closed by 139.178.89.65 port 51810
May 8 00:30:39.440066 sshd-session[4279]: pam_unix(sshd:session): session closed for user core
May 8 00:30:39.446142 systemd[1]: sshd@11-65.21.107.133:22-139.178.89.65:51810.service: Deactivated successfully.
May 8 00:30:39.448113 systemd[1]: session-12.scope: Deactivated successfully.
May 8 00:30:39.449444 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit.
May 8 00:30:39.450435 systemd-logind[1485]: Removed session 12.
May 8 00:30:44.614716 systemd[1]: Started sshd@12-65.21.107.133:22-139.178.89.65:51814.service - OpenSSH per-connection server daemon (139.178.89.65:51814).
May 8 00:30:45.590826 sshd[4295]: Accepted publickey for core from 139.178.89.65 port 51814 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:30:45.592078 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:30:45.596040 systemd-logind[1485]: New session 13 of user core.
May 8 00:30:45.598603 systemd[1]: Started session-13.scope - Session 13 of User core.
May 8 00:30:46.323537 sshd[4297]: Connection closed by 139.178.89.65 port 51814
May 8 00:30:46.324074 sshd-session[4295]: pam_unix(sshd:session): session closed for user core
May 8 00:30:46.326583 systemd[1]: sshd@12-65.21.107.133:22-139.178.89.65:51814.service: Deactivated successfully.
May 8 00:30:46.328739 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit.
May 8 00:30:46.329251 systemd[1]: session-13.scope: Deactivated successfully.
May 8 00:30:46.330667 systemd-logind[1485]: Removed session 13.
May 8 00:30:46.491687 systemd[1]: Started sshd@13-65.21.107.133:22-139.178.89.65:51820.service - OpenSSH per-connection server daemon (139.178.89.65:51820).
May 8 00:30:47.455627 sshd[4309]: Accepted publickey for core from 139.178.89.65 port 51820 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:30:47.457048 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:30:47.461214 systemd-logind[1485]: New session 14 of user core.
May 8 00:30:47.468608 systemd[1]: Started session-14.scope - Session 14 of User core.
May 8 00:30:48.358362 sshd[4311]: Connection closed by 139.178.89.65 port 51820
May 8 00:30:48.359587 sshd-session[4309]: pam_unix(sshd:session): session closed for user core
May 8 00:30:48.362429 systemd[1]: sshd@13-65.21.107.133:22-139.178.89.65:51820.service: Deactivated successfully.
May 8 00:30:48.364449 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit.
May 8 00:30:48.365189 systemd[1]: session-14.scope: Deactivated successfully.
May 8 00:30:48.366163 systemd-logind[1485]: Removed session 14.
May 8 00:30:48.531300 systemd[1]: Started sshd@14-65.21.107.133:22-139.178.89.65:35610.service - OpenSSH per-connection server daemon (139.178.89.65:35610).
May 8 00:30:49.509894 sshd[4321]: Accepted publickey for core from 139.178.89.65 port 35610 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:30:49.511138 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:30:49.515805 systemd-logind[1485]: New session 15 of user core.
May 8 00:30:49.522693 systemd[1]: Started session-15.scope - Session 15 of User core.
May 8 00:30:51.219386 sshd[4325]: Connection closed by 139.178.89.65 port 35610
May 8 00:30:51.220044 sshd-session[4321]: pam_unix(sshd:session): session closed for user core
May 8 00:30:51.223675 systemd[1]: sshd@14-65.21.107.133:22-139.178.89.65:35610.service: Deactivated successfully.
May 8 00:30:51.225310 systemd[1]: session-15.scope: Deactivated successfully.
May 8 00:30:51.226034 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit.
May 8 00:30:51.227249 systemd-logind[1485]: Removed session 15.
May 8 00:30:51.390026 systemd[1]: Started sshd@15-65.21.107.133:22-139.178.89.65:35614.service - OpenSSH per-connection server daemon (139.178.89.65:35614).
May 8 00:30:52.359384 sshd[4342]: Accepted publickey for core from 139.178.89.65 port 35614 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:30:52.360635 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:30:52.365165 systemd-logind[1485]: New session 16 of user core.
May 8 00:30:52.373591 systemd[1]: Started session-16.scope - Session 16 of User core.
May 8 00:30:53.208508 sshd[4344]: Connection closed by 139.178.89.65 port 35614
May 8 00:30:53.209086 sshd-session[4342]: pam_unix(sshd:session): session closed for user core
May 8 00:30:53.211537 systemd[1]: sshd@15-65.21.107.133:22-139.178.89.65:35614.service: Deactivated successfully.
May 8 00:30:53.213385 systemd[1]: session-16.scope: Deactivated successfully.
May 8 00:30:53.214620 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit.
May 8 00:30:53.215637 systemd-logind[1485]: Removed session 16.
May 8 00:30:53.375784 systemd[1]: Started sshd@16-65.21.107.133:22-139.178.89.65:35626.service - OpenSSH per-connection server daemon (139.178.89.65:35626).
May 8 00:30:54.345806 sshd[4354]: Accepted publickey for core from 139.178.89.65 port 35626 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:30:54.347168 sshd-session[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:30:54.351870 systemd-logind[1485]: New session 17 of user core.
May 8 00:30:54.358600 systemd[1]: Started session-17.scope - Session 17 of User core.
May 8 00:30:55.077026 sshd[4356]: Connection closed by 139.178.89.65 port 35626
May 8 00:30:55.077725 sshd-session[4354]: pam_unix(sshd:session): session closed for user core
May 8 00:30:55.080804 systemd[1]: sshd@16-65.21.107.133:22-139.178.89.65:35626.service: Deactivated successfully.
May 8 00:30:55.082804 systemd[1]: session-17.scope: Deactivated successfully.
May 8 00:30:55.083903 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit.
May 8 00:30:55.085225 systemd-logind[1485]: Removed session 17.
May 8 00:31:00.248770 systemd[1]: Started sshd@17-65.21.107.133:22-139.178.89.65:53830.service - OpenSSH per-connection server daemon (139.178.89.65:53830).
May 8 00:31:01.217091 sshd[4370]: Accepted publickey for core from 139.178.89.65 port 53830 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:31:01.218309 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:31:01.222390 systemd-logind[1485]: New session 18 of user core.
May 8 00:31:01.228588 systemd[1]: Started session-18.scope - Session 18 of User core.
May 8 00:31:01.939319 sshd[4372]: Connection closed by 139.178.89.65 port 53830
May 8 00:31:01.940174 sshd-session[4370]: pam_unix(sshd:session): session closed for user core
May 8 00:31:01.942785 systemd[1]: sshd@17-65.21.107.133:22-139.178.89.65:53830.service: Deactivated successfully.
May 8 00:31:01.944564 systemd[1]: session-18.scope: Deactivated successfully.
May 8 00:31:01.945752 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit.
May 8 00:31:01.947244 systemd-logind[1485]: Removed session 18.
May 8 00:31:07.110715 systemd[1]: Started sshd@18-65.21.107.133:22-139.178.89.65:51752.service - OpenSSH per-connection server daemon (139.178.89.65:51752).
May 8 00:31:08.076001 sshd[4384]: Accepted publickey for core from 139.178.89.65 port 51752 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:31:08.077309 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:31:08.082021 systemd-logind[1485]: New session 19 of user core.
May 8 00:31:08.089628 systemd[1]: Started session-19.scope - Session 19 of User core.
May 8 00:31:08.805736 sshd[4387]: Connection closed by 139.178.89.65 port 51752
May 8 00:31:08.806276 sshd-session[4384]: pam_unix(sshd:session): session closed for user core
May 8 00:31:08.809406 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit.
May 8 00:31:08.810054 systemd[1]: sshd@18-65.21.107.133:22-139.178.89.65:51752.service: Deactivated successfully.
May 8 00:31:08.812159 systemd[1]: session-19.scope: Deactivated successfully.
May 8 00:31:08.813154 systemd-logind[1485]: Removed session 19.
May 8 00:31:08.975700 systemd[1]: Started sshd@19-65.21.107.133:22-139.178.89.65:51760.service - OpenSSH per-connection server daemon (139.178.89.65:51760).
May 8 00:31:09.940208 sshd[4399]: Accepted publickey for core from 139.178.89.65 port 51760 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:31:09.941576 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:31:09.945590 systemd-logind[1485]: New session 20 of user core.
May 8 00:31:09.949584 systemd[1]: Started session-20.scope - Session 20 of User core.
May 8 00:31:11.790008 containerd[1506]: time="2025-05-08T00:31:11.789962023Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:31:11.838934 containerd[1506]: time="2025-05-08T00:31:11.838892607Z" level=info msg="StopContainer for \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\" with timeout 2 (s)"
May 8 00:31:11.839489 containerd[1506]: time="2025-05-08T00:31:11.839418962Z" level=info msg="StopContainer for \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\" with timeout 30 (s)"
May 8 00:31:11.841954 containerd[1506]: time="2025-05-08T00:31:11.841797515Z" level=info msg="Stop container \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\" with signal terminated"
May 8 00:31:11.843285 containerd[1506]: time="2025-05-08T00:31:11.842698802Z" level=info msg="Stop container \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\" with signal terminated"
May 8 00:31:11.853227 systemd-networkd[1410]: lxc_health: Link DOWN
May 8 00:31:11.853704 systemd-networkd[1410]: lxc_health: Lost carrier
May 8 00:31:11.855321 systemd[1]: cri-containerd-1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248.scope: Deactivated successfully.
May 8 00:31:11.871415 systemd[1]: cri-containerd-a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22.scope: Deactivated successfully.
May 8 00:31:11.872010 systemd[1]: cri-containerd-a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22.scope: Consumed 6.538s CPU time, 194.1M memory peak, 71.3M read from disk, 13.3M written to disk.
May 8 00:31:11.889071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248-rootfs.mount: Deactivated successfully.
May 8 00:31:11.894944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22-rootfs.mount: Deactivated successfully.
May 8 00:31:11.897190 containerd[1506]: time="2025-05-08T00:31:11.897140291Z" level=info msg="shim disconnected" id=1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248 namespace=k8s.io
May 8 00:31:11.897190 containerd[1506]: time="2025-05-08T00:31:11.897188681Z" level=warning msg="cleaning up after shim disconnected" id=1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248 namespace=k8s.io
May 8 00:31:11.897292 containerd[1506]: time="2025-05-08T00:31:11.897196064Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:31:11.897499 containerd[1506]: time="2025-05-08T00:31:11.897348611Z" level=info msg="shim disconnected" id=a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22 namespace=k8s.io
May 8 00:31:11.897499 containerd[1506]: time="2025-05-08T00:31:11.897374780Z" level=warning msg="cleaning up after shim disconnected" id=a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22 namespace=k8s.io
May 8 00:31:11.897499 containerd[1506]: time="2025-05-08T00:31:11.897381162Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:31:11.912953 containerd[1506]: time="2025-05-08T00:31:11.912181754Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:31:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 8 00:31:11.913204 containerd[1506]: time="2025-05-08T00:31:11.913174462Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:31:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 8 00:31:11.914308 containerd[1506]: time="2025-05-08T00:31:11.914281544Z" level=info msg="StopContainer for \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\" returns successfully"
May 8 00:31:11.915246 containerd[1506]: time="2025-05-08T00:31:11.915229719Z" level=info msg="StopContainer for \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\" returns successfully"
May 8 00:31:11.918355 containerd[1506]: time="2025-05-08T00:31:11.918283595Z" level=info msg="StopPodSandbox for \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\""
May 8 00:31:11.927087 containerd[1506]: time="2025-05-08T00:31:11.927049202Z" level=info msg="StopPodSandbox for \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\""
May 8 00:31:11.927857 containerd[1506]: time="2025-05-08T00:31:11.927069480Z" level=info msg="Container to stop \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:31:11.929312 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826-shm.mount: Deactivated successfully.
May 8 00:31:11.931964 containerd[1506]: time="2025-05-08T00:31:11.922599603Z" level=info msg="Container to stop \"e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:31:11.931964 containerd[1506]: time="2025-05-08T00:31:11.931947641Z" level=info msg="Container to stop \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:31:11.931964 containerd[1506]: time="2025-05-08T00:31:11.931956828Z" level=info msg="Container to stop \"0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:31:11.931964 containerd[1506]: time="2025-05-08T00:31:11.931963340Z" level=info msg="Container to stop \"3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:31:11.932317 containerd[1506]: time="2025-05-08T00:31:11.931969742Z" level=info msg="Container to stop \"815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:31:11.937614 systemd[1]: cri-containerd-5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e.scope: Deactivated successfully.
May 8 00:31:11.949024 systemd[1]: cri-containerd-01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826.scope: Deactivated successfully.
May 8 00:31:11.964954 containerd[1506]: time="2025-05-08T00:31:11.964913313Z" level=info msg="shim disconnected" id=5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e namespace=k8s.io
May 8 00:31:11.964954 containerd[1506]: time="2025-05-08T00:31:11.964954039Z" level=warning msg="cleaning up after shim disconnected" id=5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e namespace=k8s.io
May 8 00:31:11.965080 containerd[1506]: time="2025-05-08T00:31:11.964960672Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:31:11.970746 containerd[1506]: time="2025-05-08T00:31:11.970705304Z" level=info msg="shim disconnected" id=01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826 namespace=k8s.io
May 8 00:31:11.970746 containerd[1506]: time="2025-05-08T00:31:11.970741040Z" level=warning msg="cleaning up after shim disconnected" id=01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826 namespace=k8s.io
May 8 00:31:11.970746 containerd[1506]: time="2025-05-08T00:31:11.970747864Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:31:11.971376 kubelet[2806]: E0508 00:31:11.962360 2806 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:31:11.981859 containerd[1506]: time="2025-05-08T00:31:11.981435798Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:31:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 8 00:31:11.982347 containerd[1506]: time="2025-05-08T00:31:11.982296870Z" level=info msg="TearDown network for sandbox \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\" successfully"
May 8 00:31:11.982347 containerd[1506]: time="2025-05-08T00:31:11.982331545Z" level=info msg="StopPodSandbox for \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\" returns successfully"
May 8 00:31:11.982646 containerd[1506]: time="2025-05-08T00:31:11.982566334Z" level=info msg="TearDown network for sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" successfully"
May 8 00:31:11.982646 containerd[1506]: time="2025-05-08T00:31:11.982581423Z" level=info msg="StopPodSandbox for \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" returns successfully"
May 8 00:31:12.089335 kubelet[2806]: I0508 00:31:12.088581 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8j8m\" (UniqueName: \"kubernetes.io/projected/63c552a3-471d-4009-8ccd-5b4b6b57cbd2-kube-api-access-k8j8m\") pod \"63c552a3-471d-4009-8ccd-5b4b6b57cbd2\" (UID: \"63c552a3-471d-4009-8ccd-5b4b6b57cbd2\") "
May 8 00:31:12.089335 kubelet[2806]: I0508 00:31:12.088633 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-bpf-maps\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.089335 kubelet[2806]: I0508 00:31:12.088670 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-etc-cni-netd\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.089335 kubelet[2806]: I0508 00:31:12.088693 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-host-proc-sys-kernel\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.089335 kubelet[2806]: I0508 00:31:12.088714 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-cilium-run\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.089335 kubelet[2806]: I0508 00:31:12.088732 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-lib-modules\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.090147 kubelet[2806]: I0508 00:31:12.088757 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63c552a3-471d-4009-8ccd-5b4b6b57cbd2-cilium-config-path\") pod \"63c552a3-471d-4009-8ccd-5b4b6b57cbd2\" (UID: \"63c552a3-471d-4009-8ccd-5b4b6b57cbd2\") "
May 8 00:31:12.090147 kubelet[2806]: I0508 00:31:12.088779 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca470337-5ff3-43a5-b7a0-3833479e3537-cilium-config-path\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.090147 kubelet[2806]: I0508 00:31:12.088801 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ca470337-5ff3-43a5-b7a0-3833479e3537-hubble-tls\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.090147 kubelet[2806]: I0508 00:31:12.088818 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-host-proc-sys-net\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.090147 kubelet[2806]: I0508 00:31:12.088839 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-cilium-cgroup\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.090147 kubelet[2806]: I0508 00:31:12.088862 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfdwh\" (UniqueName: \"kubernetes.io/projected/ca470337-5ff3-43a5-b7a0-3833479e3537-kube-api-access-dfdwh\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.090334 kubelet[2806]: I0508 00:31:12.088882 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-hostproc\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.090334 kubelet[2806]: I0508 00:31:12.088902 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-xtables-lock\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.090334 kubelet[2806]: I0508 00:31:12.088928 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca470337-5ff3-43a5-b7a0-3833479e3537-clustermesh-secrets\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.090334 kubelet[2806]: I0508 00:31:12.088947 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-cni-path\") pod \"ca470337-5ff3-43a5-b7a0-3833479e3537\" (UID: \"ca470337-5ff3-43a5-b7a0-3833479e3537\") "
May 8 00:31:12.095497 kubelet[2806]: I0508 00:31:12.094024 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca470337-5ff3-43a5-b7a0-3833479e3537-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 8 00:31:12.095566 kubelet[2806]: I0508 00:31:12.094397 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-cni-path" (OuterVolumeSpecName: "cni-path") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:31:12.098548 kubelet[2806]: I0508 00:31:12.098512 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:31:12.098597 kubelet[2806]: I0508 00:31:12.098555 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:31:12.101789 kubelet[2806]: I0508 00:31:12.101392 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-hostproc" (OuterVolumeSpecName: "hostproc") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:31:12.101789 kubelet[2806]: I0508 00:31:12.101424 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:31:12.103527 kubelet[2806]: I0508 00:31:12.103503 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca470337-5ff3-43a5-b7a0-3833479e3537-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 8 00:31:12.103576 kubelet[2806]: I0508 00:31:12.103551 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:31:12.103604 kubelet[2806]: I0508 00:31:12.103585 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:31:12.103625 kubelet[2806]: I0508 00:31:12.103603 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:31:12.103625 kubelet[2806]: I0508 00:31:12.103621 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:31:12.103682 kubelet[2806]: I0508 00:31:12.103636 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:31:12.106085 kubelet[2806]: I0508 00:31:12.106046 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c552a3-471d-4009-8ccd-5b4b6b57cbd2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "63c552a3-471d-4009-8ccd-5b4b6b57cbd2" (UID: "63c552a3-471d-4009-8ccd-5b4b6b57cbd2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 8 00:31:12.107257 kubelet[2806]: I0508 00:31:12.107237 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c552a3-471d-4009-8ccd-5b4b6b57cbd2-kube-api-access-k8j8m" (OuterVolumeSpecName: "kube-api-access-k8j8m") pod "63c552a3-471d-4009-8ccd-5b4b6b57cbd2" (UID: "63c552a3-471d-4009-8ccd-5b4b6b57cbd2"). InnerVolumeSpecName "kube-api-access-k8j8m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 00:31:12.107324 kubelet[2806]: I0508 00:31:12.107272 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca470337-5ff3-43a5-b7a0-3833479e3537-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 00:31:12.107324 kubelet[2806]: I0508 00:31:12.107284 2806 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca470337-5ff3-43a5-b7a0-3833479e3537-kube-api-access-dfdwh" (OuterVolumeSpecName: "kube-api-access-dfdwh") pod "ca470337-5ff3-43a5-b7a0-3833479e3537" (UID: "ca470337-5ff3-43a5-b7a0-3833479e3537"). InnerVolumeSpecName "kube-api-access-dfdwh".
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:31:12.189758 kubelet[2806]: I0508 00:31:12.189710 2806 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-etc-cni-netd\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.189758 kubelet[2806]: I0508 00:31:12.189747 2806 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k8j8m\" (UniqueName: \"kubernetes.io/projected/63c552a3-471d-4009-8ccd-5b4b6b57cbd2-kube-api-access-k8j8m\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.189758 kubelet[2806]: I0508 00:31:12.189760 2806 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-bpf-maps\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.189945 kubelet[2806]: I0508 00:31:12.189774 2806 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-host-proc-sys-kernel\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.189945 kubelet[2806]: I0508 00:31:12.189786 2806 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63c552a3-471d-4009-8ccd-5b4b6b57cbd2-cilium-config-path\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.189945 kubelet[2806]: I0508 00:31:12.189797 2806 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-cilium-run\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.189945 kubelet[2806]: I0508 00:31:12.189809 2806 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-lib-modules\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.189945 kubelet[2806]: I0508 00:31:12.189819 2806 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-host-proc-sys-net\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.189945 kubelet[2806]: I0508 00:31:12.189830 2806 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca470337-5ff3-43a5-b7a0-3833479e3537-cilium-config-path\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.189945 kubelet[2806]: I0508 00:31:12.189843 2806 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ca470337-5ff3-43a5-b7a0-3833479e3537-hubble-tls\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.189945 kubelet[2806]: I0508 00:31:12.189855 2806 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-cilium-cgroup\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.190124 kubelet[2806]: I0508 00:31:12.189865 2806 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-xtables-lock\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.190124 kubelet[2806]: I0508 00:31:12.189876 2806 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dfdwh\" (UniqueName: \"kubernetes.io/projected/ca470337-5ff3-43a5-b7a0-3833479e3537-kube-api-access-dfdwh\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.190124 kubelet[2806]: I0508 00:31:12.189886 2806 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-hostproc\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.190124 kubelet[2806]: I0508 00:31:12.189899 2806 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca470337-5ff3-43a5-b7a0-3833479e3537-cni-path\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.190124 kubelet[2806]: I0508 00:31:12.189910 2806 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca470337-5ff3-43a5-b7a0-3833479e3537-clustermesh-secrets\") on node \"ci-4230-1-1-n-afb3d0be43\" DevicePath \"\"" May 8 00:31:12.511551 kubelet[2806]: I0508 00:31:12.511384 2806 scope.go:117] "RemoveContainer" containerID="a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22" May 8 00:31:12.521836 systemd[1]: Removed slice kubepods-burstable-podca470337_5ff3_43a5_b7a0_3833479e3537.slice - libcontainer container kubepods-burstable-podca470337_5ff3_43a5_b7a0_3833479e3537.slice. May 8 00:31:12.521947 systemd[1]: kubepods-burstable-podca470337_5ff3_43a5_b7a0_3833479e3537.slice: Consumed 6.598s CPU time, 194.4M memory peak, 71.3M read from disk, 13.3M written to disk. 
May 8 00:31:12.528159 containerd[1506]: time="2025-05-08T00:31:12.528120999Z" level=info msg="RemoveContainer for \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\"" May 8 00:31:12.532293 containerd[1506]: time="2025-05-08T00:31:12.532154659Z" level=info msg="RemoveContainer for \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\" returns successfully" May 8 00:31:12.535492 kubelet[2806]: I0508 00:31:12.535383 2806 scope.go:117] "RemoveContainer" containerID="e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb" May 8 00:31:12.544510 containerd[1506]: time="2025-05-08T00:31:12.543612074Z" level=info msg="RemoveContainer for \"e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb\"" May 8 00:31:12.544189 systemd[1]: Removed slice kubepods-besteffort-pod63c552a3_471d_4009_8ccd_5b4b6b57cbd2.slice - libcontainer container kubepods-besteffort-pod63c552a3_471d_4009_8ccd_5b4b6b57cbd2.slice. May 8 00:31:12.552739 containerd[1506]: time="2025-05-08T00:31:12.552644159Z" level=info msg="RemoveContainer for \"e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb\" returns successfully" May 8 00:31:12.559793 kubelet[2806]: I0508 00:31:12.559545 2806 scope.go:117] "RemoveContainer" containerID="815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7" May 8 00:31:12.561539 containerd[1506]: time="2025-05-08T00:31:12.561497480Z" level=info msg="RemoveContainer for \"815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7\"" May 8 00:31:12.564226 containerd[1506]: time="2025-05-08T00:31:12.564196512Z" level=info msg="RemoveContainer for \"815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7\" returns successfully" May 8 00:31:12.565477 kubelet[2806]: I0508 00:31:12.565411 2806 scope.go:117] "RemoveContainer" containerID="3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be" May 8 00:31:12.567127 containerd[1506]: time="2025-05-08T00:31:12.567103202Z" level=info 
msg="RemoveContainer for \"3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be\"" May 8 00:31:12.569168 containerd[1506]: time="2025-05-08T00:31:12.569141718Z" level=info msg="RemoveContainer for \"3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be\" returns successfully" May 8 00:31:12.569278 kubelet[2806]: I0508 00:31:12.569254 2806 scope.go:117] "RemoveContainer" containerID="0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6" May 8 00:31:12.570027 containerd[1506]: time="2025-05-08T00:31:12.570006277Z" level=info msg="RemoveContainer for \"0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6\"" May 8 00:31:12.572043 containerd[1506]: time="2025-05-08T00:31:12.572020887Z" level=info msg="RemoveContainer for \"0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6\" returns successfully" May 8 00:31:12.572180 kubelet[2806]: I0508 00:31:12.572160 2806 scope.go:117] "RemoveContainer" containerID="a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22" May 8 00:31:12.572374 containerd[1506]: time="2025-05-08T00:31:12.572344473Z" level=error msg="ContainerStatus for \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\": not found" May 8 00:31:12.574745 kubelet[2806]: E0508 00:31:12.573421 2806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\": not found" containerID="a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22" May 8 00:31:12.583442 kubelet[2806]: I0508 00:31:12.574761 2806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22"} 
err="failed to get container status \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6904b80dbc8cfed91a6d1562660b7a29ba34f636795e48fffe1efe3cc40aa22\": not found" May 8 00:31:12.583442 kubelet[2806]: I0508 00:31:12.583432 2806 scope.go:117] "RemoveContainer" containerID="e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb" May 8 00:31:12.583723 containerd[1506]: time="2025-05-08T00:31:12.583638491Z" level=error msg="ContainerStatus for \"e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb\": not found" May 8 00:31:12.584077 kubelet[2806]: E0508 00:31:12.583851 2806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb\": not found" containerID="e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb" May 8 00:31:12.584077 kubelet[2806]: I0508 00:31:12.583885 2806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb"} err="failed to get container status \"e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"e77af563f1c44212da04f7581a460443f45cd94470b51558c9a671292d3b73bb\": not found" May 8 00:31:12.584077 kubelet[2806]: I0508 00:31:12.583900 2806 scope.go:117] "RemoveContainer" containerID="815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7" May 8 00:31:12.584165 containerd[1506]: time="2025-05-08T00:31:12.584028873Z" level=error msg="ContainerStatus for 
\"815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7\": not found" May 8 00:31:12.584187 kubelet[2806]: E0508 00:31:12.584110 2806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7\": not found" containerID="815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7" May 8 00:31:12.584187 kubelet[2806]: I0508 00:31:12.584124 2806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7"} err="failed to get container status \"815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7\": rpc error: code = NotFound desc = an error occurred when try to find container \"815668a220fbb7ddd7bec59cc2529d31266f5c09150421e76a33c8d364d58ea7\": not found" May 8 00:31:12.584187 kubelet[2806]: I0508 00:31:12.584135 2806 scope.go:117] "RemoveContainer" containerID="3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be" May 8 00:31:12.584282 containerd[1506]: time="2025-05-08T00:31:12.584233405Z" level=error msg="ContainerStatus for \"3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be\": not found" May 8 00:31:12.584359 kubelet[2806]: E0508 00:31:12.584319 2806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be\": not found" 
containerID="3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be" May 8 00:31:12.584359 kubelet[2806]: I0508 00:31:12.584336 2806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be"} err="failed to get container status \"3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d5055963467303f357223ddd7b4b16dc34a87841fc67f6ff0f0d6bb0e3582be\": not found" May 8 00:31:12.584359 kubelet[2806]: I0508 00:31:12.584348 2806 scope.go:117] "RemoveContainer" containerID="0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6" May 8 00:31:12.584519 containerd[1506]: time="2025-05-08T00:31:12.584448949Z" level=error msg="ContainerStatus for \"0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6\": not found" May 8 00:31:12.584571 kubelet[2806]: E0508 00:31:12.584547 2806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6\": not found" containerID="0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6" May 8 00:31:12.584571 kubelet[2806]: I0508 00:31:12.584561 2806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6"} err="failed to get container status \"0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"0156acb2a8360b66393544d07902cce96f481b9c4469a34dc5eeda647a2099b6\": not found" May 8 00:31:12.584616 
kubelet[2806]: I0508 00:31:12.584572 2806 scope.go:117] "RemoveContainer" containerID="1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248" May 8 00:31:12.585483 containerd[1506]: time="2025-05-08T00:31:12.585275927Z" level=info msg="RemoveContainer for \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\"" May 8 00:31:12.587309 containerd[1506]: time="2025-05-08T00:31:12.587286781Z" level=info msg="RemoveContainer for \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\" returns successfully" May 8 00:31:12.587477 kubelet[2806]: I0508 00:31:12.587396 2806 scope.go:117] "RemoveContainer" containerID="1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248" May 8 00:31:12.587549 containerd[1506]: time="2025-05-08T00:31:12.587521429Z" level=error msg="ContainerStatus for \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\": not found" May 8 00:31:12.587623 kubelet[2806]: E0508 00:31:12.587611 2806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\": not found" containerID="1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248" May 8 00:31:12.587703 kubelet[2806]: I0508 00:31:12.587627 2806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248"} err="failed to get container status \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ea1f8d983f222e6a9627549eea59cc7698c5f43e8b4ddedd31a57b1c0b43248\": not found" May 8 00:31:12.770286 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826-rootfs.mount: Deactivated successfully. May 8 00:31:12.770394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e-rootfs.mount: Deactivated successfully. May 8 00:31:12.770485 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e-shm.mount: Deactivated successfully. May 8 00:31:12.770555 systemd[1]: var-lib-kubelet-pods-63c552a3\x2d471d\x2d4009\x2d8ccd\x2d5b4b6b57cbd2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk8j8m.mount: Deactivated successfully. May 8 00:31:12.770672 systemd[1]: var-lib-kubelet-pods-ca470337\x2d5ff3\x2d43a5\x2db7a0\x2d3833479e3537-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddfdwh.mount: Deactivated successfully. May 8 00:31:12.770785 systemd[1]: var-lib-kubelet-pods-ca470337\x2d5ff3\x2d43a5\x2db7a0\x2d3833479e3537-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:31:12.770869 systemd[1]: var-lib-kubelet-pods-ca470337\x2d5ff3\x2d43a5\x2db7a0\x2d3833479e3537-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 8 00:31:13.839621 kubelet[2806]: I0508 00:31:13.839580 2806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63c552a3-471d-4009-8ccd-5b4b6b57cbd2" path="/var/lib/kubelet/pods/63c552a3-471d-4009-8ccd-5b4b6b57cbd2/volumes" May 8 00:31:13.840003 kubelet[2806]: I0508 00:31:13.839971 2806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca470337-5ff3-43a5-b7a0-3833479e3537" path="/var/lib/kubelet/pods/ca470337-5ff3-43a5-b7a0-3833479e3537/volumes" May 8 00:31:13.861809 sshd[4401]: Connection closed by 139.178.89.65 port 51760 May 8 00:31:13.862268 sshd-session[4399]: pam_unix(sshd:session): session closed for user core May 8 00:31:13.868699 systemd[1]: sshd@19-65.21.107.133:22-139.178.89.65:51760.service: Deactivated successfully. May 8 00:31:13.870178 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:31:13.871231 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit. May 8 00:31:13.872356 systemd-logind[1485]: Removed session 20. May 8 00:31:14.033702 systemd[1]: Started sshd@20-65.21.107.133:22-139.178.89.65:51772.service - OpenSSH per-connection server daemon (139.178.89.65:51772). May 8 00:31:15.015042 sshd[4559]: Accepted publickey for core from 139.178.89.65 port 51772 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo May 8 00:31:15.016679 sshd-session[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:31:15.022005 systemd-logind[1485]: New session 21 of user core. May 8 00:31:15.031642 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 8 00:31:16.129579 kubelet[2806]: I0508 00:31:16.129539 2806 memory_manager.go:355] "RemoveStaleState removing state" podUID="63c552a3-471d-4009-8ccd-5b4b6b57cbd2" containerName="cilium-operator" May 8 00:31:16.129579 kubelet[2806]: I0508 00:31:16.129568 2806 memory_manager.go:355] "RemoveStaleState removing state" podUID="ca470337-5ff3-43a5-b7a0-3833479e3537" containerName="cilium-agent" May 8 00:31:16.160898 systemd[1]: Created slice kubepods-burstable-pod8777a4a7_9256_40c2_a980_bb0aba0c2668.slice - libcontainer container kubepods-burstable-pod8777a4a7_9256_40c2_a980_bb0aba0c2668.slice. May 8 00:31:16.216898 kubelet[2806]: I0508 00:31:16.216699 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8777a4a7-9256-40c2-a980-bb0aba0c2668-bpf-maps\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.216898 kubelet[2806]: I0508 00:31:16.216735 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8777a4a7-9256-40c2-a980-bb0aba0c2668-etc-cni-netd\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.216898 kubelet[2806]: I0508 00:31:16.216752 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8777a4a7-9256-40c2-a980-bb0aba0c2668-hubble-tls\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.216898 kubelet[2806]: I0508 00:31:16.216765 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8777a4a7-9256-40c2-a980-bb0aba0c2668-hostproc\") pod \"cilium-7qh45\" (UID: 
\"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.216898 kubelet[2806]: I0508 00:31:16.216778 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8777a4a7-9256-40c2-a980-bb0aba0c2668-cilium-cgroup\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.216898 kubelet[2806]: I0508 00:31:16.216790 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8777a4a7-9256-40c2-a980-bb0aba0c2668-cni-path\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.217114 kubelet[2806]: I0508 00:31:16.216802 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8777a4a7-9256-40c2-a980-bb0aba0c2668-cilium-config-path\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.217114 kubelet[2806]: I0508 00:31:16.216823 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8777a4a7-9256-40c2-a980-bb0aba0c2668-cilium-run\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.217114 kubelet[2806]: I0508 00:31:16.216836 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8777a4a7-9256-40c2-a980-bb0aba0c2668-host-proc-sys-net\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.217114 kubelet[2806]: I0508 00:31:16.216860 
2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8777a4a7-9256-40c2-a980-bb0aba0c2668-host-proc-sys-kernel\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.220115 kubelet[2806]: I0508 00:31:16.220059 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8777a4a7-9256-40c2-a980-bb0aba0c2668-lib-modules\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.220115 kubelet[2806]: I0508 00:31:16.220094 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8777a4a7-9256-40c2-a980-bb0aba0c2668-cilium-ipsec-secrets\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.220115 kubelet[2806]: I0508 00:31:16.220113 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47ljd\" (UniqueName: \"kubernetes.io/projected/8777a4a7-9256-40c2-a980-bb0aba0c2668-kube-api-access-47ljd\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.220222 kubelet[2806]: I0508 00:31:16.220126 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8777a4a7-9256-40c2-a980-bb0aba0c2668-xtables-lock\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45" May 8 00:31:16.220222 kubelet[2806]: I0508 00:31:16.220146 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8777a4a7-9256-40c2-a980-bb0aba0c2668-clustermesh-secrets\") pod \"cilium-7qh45\" (UID: \"8777a4a7-9256-40c2-a980-bb0aba0c2668\") " pod="kube-system/cilium-7qh45"
May 8 00:31:16.304876 sshd[4561]: Connection closed by 139.178.89.65 port 51772
May 8 00:31:16.305488 sshd-session[4559]: pam_unix(sshd:session): session closed for user core
May 8 00:31:16.308952 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit.
May 8 00:31:16.309501 systemd[1]: sshd@20-65.21.107.133:22-139.178.89.65:51772.service: Deactivated successfully.
May 8 00:31:16.312213 systemd[1]: session-21.scope: Deactivated successfully.
May 8 00:31:16.313697 systemd-logind[1485]: Removed session 21.
May 8 00:31:16.465440 containerd[1506]: time="2025-05-08T00:31:16.465307113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7qh45,Uid:8777a4a7-9256-40c2-a980-bb0aba0c2668,Namespace:kube-system,Attempt:0,}"
May 8 00:31:16.480719 systemd[1]: Started sshd@21-65.21.107.133:22-139.178.89.65:51776.service - OpenSSH per-connection server daemon (139.178.89.65:51776).
May 8 00:31:16.503126 containerd[1506]: time="2025-05-08T00:31:16.502354275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:31:16.504173 containerd[1506]: time="2025-05-08T00:31:16.504124278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:31:16.504173 containerd[1506]: time="2025-05-08T00:31:16.504146269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:31:16.504522 containerd[1506]: time="2025-05-08T00:31:16.504207253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:31:16.520602 systemd[1]: Started cri-containerd-eb7f35216e77c43fa1123b171ed0ab36edac3f900cf28db5cfb94f6ba4d2f5fe.scope - libcontainer container eb7f35216e77c43fa1123b171ed0ab36edac3f900cf28db5cfb94f6ba4d2f5fe.
May 8 00:31:16.542474 containerd[1506]: time="2025-05-08T00:31:16.542414857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7qh45,Uid:8777a4a7-9256-40c2-a980-bb0aba0c2668,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb7f35216e77c43fa1123b171ed0ab36edac3f900cf28db5cfb94f6ba4d2f5fe\""
May 8 00:31:16.544451 containerd[1506]: time="2025-05-08T00:31:16.544423276Z" level=info msg="CreateContainer within sandbox \"eb7f35216e77c43fa1123b171ed0ab36edac3f900cf28db5cfb94f6ba4d2f5fe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:31:16.553867 containerd[1506]: time="2025-05-08T00:31:16.553831415Z" level=info msg="CreateContainer within sandbox \"eb7f35216e77c43fa1123b171ed0ab36edac3f900cf28db5cfb94f6ba4d2f5fe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b0794c20a8870b3c45b66cc70642af182ee9bef2ffeabd3f0228959a6f5245f9\""
May 8 00:31:16.554286 containerd[1506]: time="2025-05-08T00:31:16.554231214Z" level=info msg="StartContainer for \"b0794c20a8870b3c45b66cc70642af182ee9bef2ffeabd3f0228959a6f5245f9\""
May 8 00:31:16.574599 systemd[1]: Started cri-containerd-b0794c20a8870b3c45b66cc70642af182ee9bef2ffeabd3f0228959a6f5245f9.scope - libcontainer container b0794c20a8870b3c45b66cc70642af182ee9bef2ffeabd3f0228959a6f5245f9.
May 8 00:31:16.592557 containerd[1506]: time="2025-05-08T00:31:16.592220799Z" level=info msg="StartContainer for \"b0794c20a8870b3c45b66cc70642af182ee9bef2ffeabd3f0228959a6f5245f9\" returns successfully"
May 8 00:31:16.603053 systemd[1]: cri-containerd-b0794c20a8870b3c45b66cc70642af182ee9bef2ffeabd3f0228959a6f5245f9.scope: Deactivated successfully.
May 8 00:31:16.603262 systemd[1]: cri-containerd-b0794c20a8870b3c45b66cc70642af182ee9bef2ffeabd3f0228959a6f5245f9.scope: Consumed 14ms CPU time, 8.4M memory peak, 1.9M read from disk.
May 8 00:31:16.631566 containerd[1506]: time="2025-05-08T00:31:16.631480803Z" level=info msg="shim disconnected" id=b0794c20a8870b3c45b66cc70642af182ee9bef2ffeabd3f0228959a6f5245f9 namespace=k8s.io
May 8 00:31:16.631566 containerd[1506]: time="2025-05-08T00:31:16.631563488Z" level=warning msg="cleaning up after shim disconnected" id=b0794c20a8870b3c45b66cc70642af182ee9bef2ffeabd3f0228959a6f5245f9 namespace=k8s.io
May 8 00:31:16.631566 containerd[1506]: time="2025-05-08T00:31:16.631571954Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:31:16.972546 kubelet[2806]: E0508 00:31:16.972441 2806 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:31:17.457392 sshd[4576]: Accepted publickey for core from 139.178.89.65 port 51776 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:31:17.458724 sshd-session[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:31:17.464250 systemd-logind[1485]: New session 22 of user core.
May 8 00:31:17.470586 systemd[1]: Started session-22.scope - Session 22 of User core.
May 8 00:31:17.553988 containerd[1506]: time="2025-05-08T00:31:17.553926641Z" level=info msg="CreateContainer within sandbox \"eb7f35216e77c43fa1123b171ed0ab36edac3f900cf28db5cfb94f6ba4d2f5fe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:31:17.569034 containerd[1506]: time="2025-05-08T00:31:17.568901259Z" level=info msg="CreateContainer within sandbox \"eb7f35216e77c43fa1123b171ed0ab36edac3f900cf28db5cfb94f6ba4d2f5fe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3e168a7dcce39ad2306705040ce46f38ab0e986b4972c2888c2e8d0eb4f4e60d\""
May 8 00:31:17.571215 containerd[1506]: time="2025-05-08T00:31:17.570111884Z" level=info msg="StartContainer for \"3e168a7dcce39ad2306705040ce46f38ab0e986b4972c2888c2e8d0eb4f4e60d\""
May 8 00:31:17.603585 systemd[1]: Started cri-containerd-3e168a7dcce39ad2306705040ce46f38ab0e986b4972c2888c2e8d0eb4f4e60d.scope - libcontainer container 3e168a7dcce39ad2306705040ce46f38ab0e986b4972c2888c2e8d0eb4f4e60d.
May 8 00:31:17.623825 containerd[1506]: time="2025-05-08T00:31:17.623489502Z" level=info msg="StartContainer for \"3e168a7dcce39ad2306705040ce46f38ab0e986b4972c2888c2e8d0eb4f4e60d\" returns successfully"
May 8 00:31:17.632780 systemd[1]: cri-containerd-3e168a7dcce39ad2306705040ce46f38ab0e986b4972c2888c2e8d0eb4f4e60d.scope: Deactivated successfully.
May 8 00:31:17.633109 systemd[1]: cri-containerd-3e168a7dcce39ad2306705040ce46f38ab0e986b4972c2888c2e8d0eb4f4e60d.scope: Consumed 16ms CPU time, 6.8M memory peak, 1.5M read from disk.
May 8 00:31:17.654814 containerd[1506]: time="2025-05-08T00:31:17.654739363Z" level=info msg="shim disconnected" id=3e168a7dcce39ad2306705040ce46f38ab0e986b4972c2888c2e8d0eb4f4e60d namespace=k8s.io
May 8 00:31:17.654814 containerd[1506]: time="2025-05-08T00:31:17.654787584Z" level=warning msg="cleaning up after shim disconnected" id=3e168a7dcce39ad2306705040ce46f38ab0e986b4972c2888c2e8d0eb4f4e60d namespace=k8s.io
May 8 00:31:17.654814 containerd[1506]: time="2025-05-08T00:31:17.654794977Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:31:18.126741 sshd[4683]: Connection closed by 139.178.89.65 port 51776
May 8 00:31:18.127371 sshd-session[4576]: pam_unix(sshd:session): session closed for user core
May 8 00:31:18.130370 systemd[1]: sshd@21-65.21.107.133:22-139.178.89.65:51776.service: Deactivated successfully.
May 8 00:31:18.132451 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit.
May 8 00:31:18.132812 systemd[1]: session-22.scope: Deactivated successfully.
May 8 00:31:18.133813 systemd-logind[1485]: Removed session 22.
May 8 00:31:18.296721 systemd[1]: Started sshd@22-65.21.107.133:22-139.178.89.65:43402.service - OpenSSH per-connection server daemon (139.178.89.65:43402).
May 8 00:31:18.325419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e168a7dcce39ad2306705040ce46f38ab0e986b4972c2888c2e8d0eb4f4e60d-rootfs.mount: Deactivated successfully.
May 8 00:31:18.555944 containerd[1506]: time="2025-05-08T00:31:18.555887028Z" level=info msg="CreateContainer within sandbox \"eb7f35216e77c43fa1123b171ed0ab36edac3f900cf28db5cfb94f6ba4d2f5fe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:31:18.575045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount220097518.mount: Deactivated successfully.
May 8 00:31:18.575392 containerd[1506]: time="2025-05-08T00:31:18.575360817Z" level=info msg="CreateContainer within sandbox \"eb7f35216e77c43fa1123b171ed0ab36edac3f900cf28db5cfb94f6ba4d2f5fe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"17c7d9f098d0e816fdc9319cde25dac7932a88067139bbff2f6a25329e1b1ae7\""
May 8 00:31:18.578243 containerd[1506]: time="2025-05-08T00:31:18.576204456Z" level=info msg="StartContainer for \"17c7d9f098d0e816fdc9319cde25dac7932a88067139bbff2f6a25329e1b1ae7\""
May 8 00:31:18.609639 systemd[1]: Started cri-containerd-17c7d9f098d0e816fdc9319cde25dac7932a88067139bbff2f6a25329e1b1ae7.scope - libcontainer container 17c7d9f098d0e816fdc9319cde25dac7932a88067139bbff2f6a25329e1b1ae7.
May 8 00:31:18.632805 containerd[1506]: time="2025-05-08T00:31:18.632730697Z" level=info msg="StartContainer for \"17c7d9f098d0e816fdc9319cde25dac7932a88067139bbff2f6a25329e1b1ae7\" returns successfully"
May 8 00:31:18.638967 systemd[1]: cri-containerd-17c7d9f098d0e816fdc9319cde25dac7932a88067139bbff2f6a25329e1b1ae7.scope: Deactivated successfully.
May 8 00:31:18.660072 containerd[1506]: time="2025-05-08T00:31:18.660020857Z" level=info msg="shim disconnected" id=17c7d9f098d0e816fdc9319cde25dac7932a88067139bbff2f6a25329e1b1ae7 namespace=k8s.io
May 8 00:31:18.660201 containerd[1506]: time="2025-05-08T00:31:18.660064097Z" level=warning msg="cleaning up after shim disconnected" id=17c7d9f098d0e816fdc9319cde25dac7932a88067139bbff2f6a25329e1b1ae7 namespace=k8s.io
May 8 00:31:18.660201 containerd[1506]: time="2025-05-08T00:31:18.660086609Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:31:18.878670 kubelet[2806]: I0508 00:31:18.878182 2806 setters.go:602] "Node became not ready" node="ci-4230-1-1-n-afb3d0be43" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:31:18Z","lastTransitionTime":"2025-05-08T00:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 8 00:31:19.263695 sshd[4752]: Accepted publickey for core from 139.178.89.65 port 43402 ssh2: RSA SHA256:qMyTVsqQW1GBCDszN/f3BerP0jiPRN/kuPF/1ixxUUo
May 8 00:31:19.265122 sshd-session[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:31:19.270562 systemd-logind[1485]: New session 23 of user core.
May 8 00:31:19.276639 systemd[1]: Started session-23.scope - Session 23 of User core.
May 8 00:31:19.326251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17c7d9f098d0e816fdc9319cde25dac7932a88067139bbff2f6a25329e1b1ae7-rootfs.mount: Deactivated successfully.
May 8 00:31:19.560550 containerd[1506]: time="2025-05-08T00:31:19.560414958Z" level=info msg="CreateContainer within sandbox \"eb7f35216e77c43fa1123b171ed0ab36edac3f900cf28db5cfb94f6ba4d2f5fe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:31:19.571919 containerd[1506]: time="2025-05-08T00:31:19.571879346Z" level=info msg="CreateContainer within sandbox \"eb7f35216e77c43fa1123b171ed0ab36edac3f900cf28db5cfb94f6ba4d2f5fe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fae9dee9ffec83b06d2dfe29578e5d757befd9f104af9b7a8f5bd0dc8b5e65be\""
May 8 00:31:19.573694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3497314704.mount: Deactivated successfully.
May 8 00:31:19.575631 containerd[1506]: time="2025-05-08T00:31:19.574285530Z" level=info msg="StartContainer for \"fae9dee9ffec83b06d2dfe29578e5d757befd9f104af9b7a8f5bd0dc8b5e65be\""
May 8 00:31:19.604564 systemd[1]: Started cri-containerd-fae9dee9ffec83b06d2dfe29578e5d757befd9f104af9b7a8f5bd0dc8b5e65be.scope - libcontainer container fae9dee9ffec83b06d2dfe29578e5d757befd9f104af9b7a8f5bd0dc8b5e65be.
May 8 00:31:19.623868 systemd[1]: cri-containerd-fae9dee9ffec83b06d2dfe29578e5d757befd9f104af9b7a8f5bd0dc8b5e65be.scope: Deactivated successfully.
May 8 00:31:19.624518 containerd[1506]: time="2025-05-08T00:31:19.624320541Z" level=info msg="StartContainer for \"fae9dee9ffec83b06d2dfe29578e5d757befd9f104af9b7a8f5bd0dc8b5e65be\" returns successfully"
May 8 00:31:19.642098 containerd[1506]: time="2025-05-08T00:31:19.642034246Z" level=info msg="shim disconnected" id=fae9dee9ffec83b06d2dfe29578e5d757befd9f104af9b7a8f5bd0dc8b5e65be namespace=k8s.io
May 8 00:31:19.642098 containerd[1506]: time="2025-05-08T00:31:19.642077898Z" level=warning msg="cleaning up after shim disconnected" id=fae9dee9ffec83b06d2dfe29578e5d757befd9f104af9b7a8f5bd0dc8b5e65be namespace=k8s.io
May 8 00:31:19.642098 containerd[1506]: time="2025-05-08T00:31:19.642085011Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:31:20.325601 systemd[1]: run-containerd-runc-k8s.io-fae9dee9ffec83b06d2dfe29578e5d757befd9f104af9b7a8f5bd0dc8b5e65be-runc.TSQz4b.mount: Deactivated successfully.
May 8 00:31:20.325715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fae9dee9ffec83b06d2dfe29578e5d757befd9f104af9b7a8f5bd0dc8b5e65be-rootfs.mount: Deactivated successfully.
May 8 00:31:20.563060 containerd[1506]: time="2025-05-08T00:31:20.562944498Z" level=info msg="CreateContainer within sandbox \"eb7f35216e77c43fa1123b171ed0ab36edac3f900cf28db5cfb94f6ba4d2f5fe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:31:20.593258 containerd[1506]: time="2025-05-08T00:31:20.593176856Z" level=info msg="CreateContainer within sandbox \"eb7f35216e77c43fa1123b171ed0ab36edac3f900cf28db5cfb94f6ba4d2f5fe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"673623ea70186d05a2b6f1bc7e658068667a61903bc2419005824a1efa36c266\""
May 8 00:31:20.594406 containerd[1506]: time="2025-05-08T00:31:20.594377744Z" level=info msg="StartContainer for \"673623ea70186d05a2b6f1bc7e658068667a61903bc2419005824a1efa36c266\""
May 8 00:31:20.619569 systemd[1]: Started cri-containerd-673623ea70186d05a2b6f1bc7e658068667a61903bc2419005824a1efa36c266.scope - libcontainer container 673623ea70186d05a2b6f1bc7e658068667a61903bc2419005824a1efa36c266.
May 8 00:31:20.639191 containerd[1506]: time="2025-05-08T00:31:20.639157027Z" level=info msg="StartContainer for \"673623ea70186d05a2b6f1bc7e658068667a61903bc2419005824a1efa36c266\" returns successfully"
May 8 00:31:21.023538 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 8 00:31:23.605812 systemd-networkd[1410]: lxc_health: Link UP
May 8 00:31:23.606041 systemd-networkd[1410]: lxc_health: Gained carrier
May 8 00:31:24.258161 systemd[1]: run-containerd-runc-k8s.io-673623ea70186d05a2b6f1bc7e658068667a61903bc2419005824a1efa36c266-runc.POysER.mount: Deactivated successfully.
May 8 00:31:24.480991 kubelet[2806]: I0508 00:31:24.480935 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7qh45" podStartSLOduration=8.480919771 podStartE2EDuration="8.480919771s" podCreationTimestamp="2025-05-08 00:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:31:21.589199856 +0000 UTC m=+339.838087280" watchObservedRunningTime="2025-05-08 00:31:24.480919771 +0000 UTC m=+342.729807154"
May 8 00:31:25.595609 systemd-networkd[1410]: lxc_health: Gained IPv6LL
May 8 00:31:28.500054 systemd[1]: run-containerd-runc-k8s.io-673623ea70186d05a2b6f1bc7e658068667a61903bc2419005824a1efa36c266-runc.03ieV0.mount: Deactivated successfully.
May 8 00:31:30.607230 systemd[1]: run-containerd-runc-k8s.io-673623ea70186d05a2b6f1bc7e658068667a61903bc2419005824a1efa36c266-runc.bZTA2N.mount: Deactivated successfully.
May 8 00:31:30.805250 sshd[4813]: Connection closed by 139.178.89.65 port 43402
May 8 00:31:30.806276 sshd-session[4752]: pam_unix(sshd:session): session closed for user core
May 8 00:31:30.811893 systemd[1]: sshd@22-65.21.107.133:22-139.178.89.65:43402.service: Deactivated successfully.
May 8 00:31:30.813547 systemd[1]: session-23.scope: Deactivated successfully.
May 8 00:31:30.814779 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit.
May 8 00:31:30.815981 systemd-logind[1485]: Removed session 23.
May 8 00:31:41.859433 containerd[1506]: time="2025-05-08T00:31:41.859357915Z" level=info msg="StopPodSandbox for \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\""
May 8 00:31:41.859433 containerd[1506]: time="2025-05-08T00:31:41.859470776Z" level=info msg="TearDown network for sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" successfully"
May 8 00:31:41.859433 containerd[1506]: time="2025-05-08T00:31:41.859486505Z" level=info msg="StopPodSandbox for \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" returns successfully"
May 8 00:31:41.860262 containerd[1506]: time="2025-05-08T00:31:41.860223645Z" level=info msg="RemovePodSandbox for \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\""
May 8 00:31:41.860262 containerd[1506]: time="2025-05-08T00:31:41.860255274Z" level=info msg="Forcibly stopping sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\""
May 8 00:31:41.860347 containerd[1506]: time="2025-05-08T00:31:41.860297303Z" level=info msg="TearDown network for sandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" successfully"
May 8 00:31:41.863393 containerd[1506]: time="2025-05-08T00:31:41.863355216Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:31:41.863479 containerd[1506]: time="2025-05-08T00:31:41.863402054Z" level=info msg="RemovePodSandbox \"5d88acd2051548aff0609310030e7709dffe1905576f5b4b64a9b8295b5a5c5e\" returns successfully"
May 8 00:31:41.863755 containerd[1506]: time="2025-05-08T00:31:41.863714569Z" level=info msg="StopPodSandbox for \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\""
May 8 00:31:41.863851 containerd[1506]: time="2025-05-08T00:31:41.863810248Z" level=info msg="TearDown network for sandbox \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\" successfully"
May 8 00:31:41.863851 containerd[1506]: time="2025-05-08T00:31:41.863843910Z" level=info msg="StopPodSandbox for \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\" returns successfully"
May 8 00:31:41.864137 containerd[1506]: time="2025-05-08T00:31:41.864110390Z" level=info msg="RemovePodSandbox for \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\""
May 8 00:31:41.864137 containerd[1506]: time="2025-05-08T00:31:41.864133703Z" level=info msg="Forcibly stopping sandbox \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\""
May 8 00:31:41.864206 containerd[1506]: time="2025-05-08T00:31:41.864173437Z" level=info msg="TearDown network for sandbox \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\" successfully"
May 8 00:31:41.867127 containerd[1506]: time="2025-05-08T00:31:41.867094525Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:31:41.867226 containerd[1506]: time="2025-05-08T00:31:41.867134009Z" level=info msg="RemovePodSandbox \"01cee1e0cf5989c92724dcccfec7dafc7759ba663fb35beafb72db941f672826\" returns successfully"
May 8 00:31:46.536252 kubelet[2806]: E0508 00:31:46.536183 2806 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58154->10.0.0.2:2379: read: connection timed out"
May 8 00:31:47.440896 systemd[1]: cri-containerd-a58cb1a206281676e8627edd0fcb5b7d0e604db961f93aafae4d3a43a187c326.scope: Deactivated successfully.
May 8 00:31:47.443731 systemd[1]: cri-containerd-a58cb1a206281676e8627edd0fcb5b7d0e604db961f93aafae4d3a43a187c326.scope: Consumed 4.876s CPU time, 76M memory peak, 27.1M read from disk.
May 8 00:31:47.489055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a58cb1a206281676e8627edd0fcb5b7d0e604db961f93aafae4d3a43a187c326-rootfs.mount: Deactivated successfully.
May 8 00:31:47.505587 containerd[1506]: time="2025-05-08T00:31:47.505381122Z" level=info msg="shim disconnected" id=a58cb1a206281676e8627edd0fcb5b7d0e604db961f93aafae4d3a43a187c326 namespace=k8s.io
May 8 00:31:47.506397 containerd[1506]: time="2025-05-08T00:31:47.505619258Z" level=warning msg="cleaning up after shim disconnected" id=a58cb1a206281676e8627edd0fcb5b7d0e604db961f93aafae4d3a43a187c326 namespace=k8s.io
May 8 00:31:47.506397 containerd[1506]: time="2025-05-08T00:31:47.505646459Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:31:47.621616 kubelet[2806]: I0508 00:31:47.621187 2806 scope.go:117] "RemoveContainer" containerID="a58cb1a206281676e8627edd0fcb5b7d0e604db961f93aafae4d3a43a187c326"
May 8 00:31:47.627817 containerd[1506]: time="2025-05-08T00:31:47.627736916Z" level=info msg="CreateContainer within sandbox \"bcf8445b6813fcb1b4be7ac1c9521e2f3cbbaf6967d48e1f893de27bc31dcf16\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 8 00:31:47.648778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3029287922.mount: Deactivated successfully.
May 8 00:31:47.652397 containerd[1506]: time="2025-05-08T00:31:47.652330047Z" level=info msg="CreateContainer within sandbox \"bcf8445b6813fcb1b4be7ac1c9521e2f3cbbaf6967d48e1f893de27bc31dcf16\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"182a35d59ca6740c3a4ab5f0a651e0eb0367482a736a3ac70b11bf10a3b770ca\""
May 8 00:31:47.653313 containerd[1506]: time="2025-05-08T00:31:47.652980935Z" level=info msg="StartContainer for \"182a35d59ca6740c3a4ab5f0a651e0eb0367482a736a3ac70b11bf10a3b770ca\""
May 8 00:31:47.690648 systemd[1]: Started cri-containerd-182a35d59ca6740c3a4ab5f0a651e0eb0367482a736a3ac70b11bf10a3b770ca.scope - libcontainer container 182a35d59ca6740c3a4ab5f0a651e0eb0367482a736a3ac70b11bf10a3b770ca.
May 8 00:31:47.731056 containerd[1506]: time="2025-05-08T00:31:47.730720691Z" level=info msg="StartContainer for \"182a35d59ca6740c3a4ab5f0a651e0eb0367482a736a3ac70b11bf10a3b770ca\" returns successfully"
May 8 00:31:48.484936 systemd[1]: run-containerd-runc-k8s.io-182a35d59ca6740c3a4ab5f0a651e0eb0367482a736a3ac70b11bf10a3b770ca-runc.DjpCJb.mount: Deactivated successfully.
May 8 00:31:49.662489 kubelet[2806]: I0508 00:31:49.662431 2806 status_manager.go:890] "Failed to get status for pod" podUID="c73c8e5d5bb8629cab27ff61146417a2" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-afb3d0be43" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58062->10.0.0.2:2379: read: connection timed out"
May 8 00:31:50.768355 kubelet[2806]: E0508 00:31:50.752005 2806 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57968->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-1-1-n-afb3d0be43.183d65ed49dd238d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-1-1-n-afb3d0be43,UID:db364e340ed2cf006efe909f2b05da70,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-n-afb3d0be43,},FirstTimestamp:2025-05-08 00:31:40.310504333 +0000 UTC m=+358.559391716,LastTimestamp:2025-05-08 00:31:40.310504333 +0000 UTC m=+358.559391716,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-n-afb3d0be43,}"
May 8 00:31:51.655986 systemd[1]: cri-containerd-0eb26fb2a34efb999ed4c44b74ee8b94e41df25d21b0e82339d0f8df286f1675.scope: Deactivated successfully.
May 8 00:31:51.656546 systemd[1]: cri-containerd-0eb26fb2a34efb999ed4c44b74ee8b94e41df25d21b0e82339d0f8df286f1675.scope: Consumed 2.934s CPU time, 33.8M memory peak, 15.4M read from disk.
May 8 00:31:51.677165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eb26fb2a34efb999ed4c44b74ee8b94e41df25d21b0e82339d0f8df286f1675-rootfs.mount: Deactivated successfully.
May 8 00:31:51.684603 containerd[1506]: time="2025-05-08T00:31:51.684542178Z" level=info msg="shim disconnected" id=0eb26fb2a34efb999ed4c44b74ee8b94e41df25d21b0e82339d0f8df286f1675 namespace=k8s.io
May 8 00:31:51.685023 containerd[1506]: time="2025-05-08T00:31:51.684970169Z" level=warning msg="cleaning up after shim disconnected" id=0eb26fb2a34efb999ed4c44b74ee8b94e41df25d21b0e82339d0f8df286f1675 namespace=k8s.io
May 8 00:31:51.685023 containerd[1506]: time="2025-05-08T00:31:51.684995396Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:31:52.633599 kubelet[2806]: I0508 00:31:52.633563 2806 scope.go:117] "RemoveContainer" containerID="0eb26fb2a34efb999ed4c44b74ee8b94e41df25d21b0e82339d0f8df286f1675"
May 8 00:31:52.635673 containerd[1506]: time="2025-05-08T00:31:52.635628352Z" level=info msg="CreateContainer within sandbox \"03b7c4eff8fc04cbb1dcf20f1c9fc29bd83d632222404d0cd07b50879e221dbe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 8 00:31:52.648021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088962268.mount: Deactivated successfully.
May 8 00:31:52.649156 containerd[1506]: time="2025-05-08T00:31:52.649012382Z" level=info msg="CreateContainer within sandbox \"03b7c4eff8fc04cbb1dcf20f1c9fc29bd83d632222404d0cd07b50879e221dbe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0b222ae814ed3b7818e8b4dfcc752fcad3e40837abd730061bd3327c5e3fc09d\""
May 8 00:31:52.649496 containerd[1506]: time="2025-05-08T00:31:52.649474779Z" level=info msg="StartContainer for \"0b222ae814ed3b7818e8b4dfcc752fcad3e40837abd730061bd3327c5e3fc09d\""
May 8 00:31:52.680608 systemd[1]: Started cri-containerd-0b222ae814ed3b7818e8b4dfcc752fcad3e40837abd730061bd3327c5e3fc09d.scope - libcontainer container 0b222ae814ed3b7818e8b4dfcc752fcad3e40837abd730061bd3327c5e3fc09d.
May 8 00:31:52.712587 containerd[1506]: time="2025-05-08T00:31:52.712531739Z" level=info msg="StartContainer for \"0b222ae814ed3b7818e8b4dfcc752fcad3e40837abd730061bd3327c5e3fc09d\" returns successfully"