Sep 4 23:47:44.940719 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:03:18 -00 2025
Sep 4 23:47:44.940740 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:47:44.940749 kernel: BIOS-provided physical RAM map:
Sep 4 23:47:44.940754 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 23:47:44.940759 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 23:47:44.940764 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 23:47:44.940769 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Sep 4 23:47:44.940774 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Sep 4 23:47:44.940780 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 4 23:47:44.940785 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 4 23:47:44.940790 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 23:47:44.940794 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 23:47:44.940799 kernel: NX (Execute Disable) protection: active
Sep 4 23:47:44.940804 kernel: APIC: Static calls initialized
Sep 4 23:47:44.940811 kernel: SMBIOS 2.8 present.
Sep 4 23:47:44.940817 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Sep 4 23:47:44.940822 kernel: Hypervisor detected: KVM
Sep 4 23:47:44.940827 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 23:47:44.940832 kernel: kvm-clock: using sched offset of 3171160120 cycles
Sep 4 23:47:44.940837 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 23:47:44.940843 kernel: tsc: Detected 2445.406 MHz processor
Sep 4 23:47:44.940848 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 23:47:44.940854 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 23:47:44.940860 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Sep 4 23:47:44.940865 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 23:47:44.940871 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 23:47:44.940876 kernel: Using GB pages for direct mapping
Sep 4 23:47:44.940881 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:47:44.940886 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Sep 4 23:47:44.940891 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:47:44.940897 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:47:44.940902 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:47:44.940908 kernel: ACPI: FACS 0x000000007CFE0000 000040
Sep 4 23:47:44.940913 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:47:44.940919 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:47:44.940924 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:47:44.940929 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:47:44.940935 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Sep 4 23:47:44.940940 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Sep 4 23:47:44.940945 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Sep 4 23:47:44.940954 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Sep 4 23:47:44.940959 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Sep 4 23:47:44.940965 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Sep 4 23:47:44.940970 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Sep 4 23:47:44.940976 kernel: No NUMA configuration found
Sep 4 23:47:44.940981 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Sep 4 23:47:44.940987 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Sep 4 23:47:44.940993 kernel: Zone ranges:
Sep 4 23:47:44.940999 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 23:47:44.941004 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Sep 4 23:47:44.941010 kernel: Normal empty
Sep 4 23:47:44.941015 kernel: Movable zone start for each node
Sep 4 23:47:44.941021 kernel: Early memory node ranges
Sep 4 23:47:44.941026 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 23:47:44.941032 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Sep 4 23:47:44.941037 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Sep 4 23:47:44.941044 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 23:47:44.941049 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 23:47:44.941054 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 4 23:47:44.941060 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 23:47:44.941065 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 23:47:44.941071 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 23:47:44.941076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 23:47:44.941082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 23:47:44.941087 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 23:47:44.941094 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 23:47:44.941099 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 23:47:44.941105 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 23:47:44.941110 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 23:47:44.941116 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 4 23:47:44.941121 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 23:47:44.941127 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 4 23:47:44.941132 kernel: Booting paravirtualized kernel on KVM
Sep 4 23:47:44.941138 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 23:47:44.941144 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 23:47:44.941150 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 4 23:47:44.941155 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 4 23:47:44.941161 kernel: pcpu-alloc: [0] 0 1
Sep 4 23:47:44.941166 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 4 23:47:44.941173 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:47:44.941179 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:47:44.941187 kernel: random: crng init done
Sep 4 23:47:44.941196 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:47:44.941209 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 23:47:44.941220 kernel: Fallback order for Node 0: 0
Sep 4 23:47:44.941231 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Sep 4 23:47:44.941242 kernel: Policy zone: DMA32
Sep 4 23:47:44.941252 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:47:44.943909 kernel: Memory: 1920008K/2047464K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 127196K reserved, 0K cma-reserved)
Sep 4 23:47:44.943917 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 23:47:44.943923 kernel: ftrace: allocating 37943 entries in 149 pages
Sep 4 23:47:44.943929 kernel: ftrace: allocated 149 pages with 4 groups
Sep 4 23:47:44.943938 kernel: Dynamic Preempt: voluntary
Sep 4 23:47:44.943944 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:47:44.943950 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:47:44.943956 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 23:47:44.943962 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:47:44.943967 kernel: Rude variant of Tasks RCU enabled.
Sep 4 23:47:44.943974 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:47:44.943979 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:47:44.943985 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 23:47:44.943992 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 4 23:47:44.943998 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:47:44.944003 kernel: Console: colour VGA+ 80x25
Sep 4 23:47:44.944009 kernel: printk: console [tty0] enabled
Sep 4 23:47:44.944015 kernel: printk: console [ttyS0] enabled
Sep 4 23:47:44.944020 kernel: ACPI: Core revision 20230628
Sep 4 23:47:44.944026 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 23:47:44.944032 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 23:47:44.944037 kernel: x2apic enabled
Sep 4 23:47:44.944044 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 23:47:44.944050 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 23:47:44.944056 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 4 23:47:44.944061 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Sep 4 23:47:44.944067 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 23:47:44.944073 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 23:47:44.944078 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 23:47:44.944084 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 23:47:44.944095 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 23:47:44.944101 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 23:47:44.944107 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 23:47:44.944113 kernel: active return thunk: retbleed_return_thunk
Sep 4 23:47:44.944120 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 23:47:44.944126 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 23:47:44.944132 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 23:47:44.944138 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 23:47:44.944144 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 23:47:44.944151 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 23:47:44.944157 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 23:47:44.944163 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 23:47:44.944169 kernel: Freeing SMP alternatives memory: 32K
Sep 4 23:47:44.944175 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:47:44.944180 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:47:44.944186 kernel: landlock: Up and running.
Sep 4 23:47:44.944192 kernel: SELinux: Initializing.
Sep 4 23:47:44.944198 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 23:47:44.944205 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 23:47:44.944211 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 23:47:44.944217 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:47:44.944223 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:47:44.944229 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:47:44.944235 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 23:47:44.944241 kernel: ... version: 0
Sep 4 23:47:44.944247 kernel: ... bit width: 48
Sep 4 23:47:44.944269 kernel: ... generic registers: 6
Sep 4 23:47:44.944276 kernel: ... value mask: 0000ffffffffffff
Sep 4 23:47:44.944292 kernel: ... max period: 00007fffffffffff
Sep 4 23:47:44.944305 kernel: ... fixed-purpose events: 0
Sep 4 23:47:44.944311 kernel: ... event mask: 000000000000003f
Sep 4 23:47:44.944317 kernel: signal: max sigframe size: 1776
Sep 4 23:47:44.944322 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:47:44.944328 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:47:44.944334 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:47:44.944340 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 23:47:44.944348 kernel: .... node #0, CPUs: #1
Sep 4 23:47:44.944354 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 23:47:44.944360 kernel: smpboot: Max logical packages: 1
Sep 4 23:47:44.944366 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
Sep 4 23:47:44.944372 kernel: devtmpfs: initialized
Sep 4 23:47:44.944377 kernel: x86/mm: Memory block size: 128MB
Sep 4 23:47:44.944383 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:47:44.944389 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 23:47:44.944395 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:47:44.944402 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:47:44.944408 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:47:44.944414 kernel: audit: type=2000 audit(1757029664.308:1): state=initialized audit_enabled=0 res=1
Sep 4 23:47:44.944420 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:47:44.944426 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 23:47:44.944432 kernel: cpuidle: using governor menu
Sep 4 23:47:44.944437 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:47:44.944443 kernel: dca service started, version 1.12.1
Sep 4 23:47:44.944449 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 4 23:47:44.944457 kernel: PCI: Using configuration type 1 for base access
Sep 4 23:47:44.944478 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 23:47:44.944485 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:47:44.944491 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:47:44.944497 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:47:44.944503 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:47:44.944509 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:47:44.944515 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:47:44.944521 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:47:44.944529 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 23:47:44.944534 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 23:47:44.944540 kernel: ACPI: Interpreter enabled
Sep 4 23:47:44.944546 kernel: ACPI: PM: (supports S0 S5)
Sep 4 23:47:44.944552 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 23:47:44.944558 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 23:47:44.944564 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 23:47:44.944569 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 4 23:47:44.944575 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 23:47:44.944712 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 23:47:44.944787 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 4 23:47:44.944855 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 4 23:47:44.944864 kernel: PCI host bridge to bus 0000:00
Sep 4 23:47:44.944932 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 23:47:44.944997 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 23:47:44.945059 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 23:47:44.945115 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Sep 4 23:47:44.945173 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 4 23:47:44.945230 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 4 23:47:44.946120 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 23:47:44.948069 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 4 23:47:44.948164 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Sep 4 23:47:44.948242 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Sep 4 23:47:44.948344 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Sep 4 23:47:44.948412 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Sep 4 23:47:44.948496 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Sep 4 23:47:44.948562 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 23:47:44.948650 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 4 23:47:44.948723 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Sep 4 23:47:44.948869 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 4 23:47:44.948988 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Sep 4 23:47:44.949072 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 4 23:47:44.949140 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Sep 4 23:47:44.949210 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 4 23:47:44.949519 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Sep 4 23:47:44.949629 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 4 23:47:44.949752 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Sep 4 23:47:44.949907 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 4 23:47:44.949980 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Sep 4 23:47:44.950050 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 4 23:47:44.950120 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Sep 4 23:47:44.950188 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 4 23:47:44.950250 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Sep 4 23:47:44.950986 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Sep 4 23:47:44.951056 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Sep 4 23:47:44.951126 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 4 23:47:44.951191 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 4 23:47:44.952483 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 4 23:47:44.952570 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Sep 4 23:47:44.952636 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Sep 4 23:47:44.952704 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 4 23:47:44.952768 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 4 23:47:44.952849 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Sep 4 23:47:44.952928 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Sep 4 23:47:44.952995 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 4 23:47:44.953064 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Sep 4 23:47:44.953135 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 4 23:47:44.954302 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 4 23:47:44.954386 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 4 23:47:44.954541 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 4 23:47:44.954654 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Sep 4 23:47:44.954726 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 4 23:47:44.954790 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 4 23:47:44.954852 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 4 23:47:44.954926 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Sep 4 23:47:44.954993 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Sep 4 23:47:44.955063 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Sep 4 23:47:44.955129 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 4 23:47:44.955196 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 4 23:47:44.955446 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 4 23:47:44.957349 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Sep 4 23:47:44.957430 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 4 23:47:44.957521 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 4 23:47:44.957589 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 4 23:47:44.957659 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 4 23:47:44.957734 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 4 23:47:44.957803 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Sep 4 23:47:44.957900 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 4 23:47:44.958045 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 4 23:47:44.958122 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 4 23:47:44.958196 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Sep 4 23:47:44.958344 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Sep 4 23:47:44.958417 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Sep 4 23:47:44.958500 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 4 23:47:44.958565 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Sep 4 23:47:44.958626 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 4 23:47:44.958635 kernel: acpiphp: Slot [0] registered
Sep 4 23:47:44.958735 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Sep 4 23:47:44.958804 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Sep 4 23:47:44.958908 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Sep 4 23:47:44.958976 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Sep 4 23:47:44.959041 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 4 23:47:44.959104 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 4 23:47:44.959167 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 4 23:47:44.959176 kernel: acpiphp: Slot [0-2] registered
Sep 4 23:47:44.959239 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 4 23:47:44.959321 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Sep 4 23:47:44.959391 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 4 23:47:44.959400 kernel: acpiphp: Slot [0-3] registered
Sep 4 23:47:44.959462 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 4 23:47:44.959544 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 4 23:47:44.959607 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 4 23:47:44.959616 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 23:47:44.959622 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 23:47:44.959628 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 23:47:44.959637 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 23:47:44.959643 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 4 23:47:44.959649 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 4 23:47:44.959655 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 4 23:47:44.959661 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 4 23:47:44.959668 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 4 23:47:44.959674 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 4 23:47:44.959680 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 4 23:47:44.959685 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 4 23:47:44.959693 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 4 23:47:44.959698 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 4 23:47:44.959704 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 4 23:47:44.959710 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 4 23:47:44.959716 kernel: iommu: Default domain type: Translated
Sep 4 23:47:44.959722 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 23:47:44.959728 kernel: PCI: Using ACPI for IRQ routing
Sep 4 23:47:44.959734 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 23:47:44.959740 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 23:47:44.959747 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Sep 4 23:47:44.959855 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 4 23:47:44.959984 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 4 23:47:44.960111 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 23:47:44.960129 kernel: vgaarb: loaded
Sep 4 23:47:44.960140 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 23:47:44.960151 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 23:47:44.960163 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 23:47:44.960174 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:47:44.960185 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:47:44.960191 kernel: pnp: PnP ACPI init
Sep 4 23:47:44.962010 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 4 23:47:44.962030 kernel: pnp: PnP ACPI: found 5 devices
Sep 4 23:47:44.962043 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 23:47:44.962054 kernel: NET: Registered PF_INET protocol family
Sep 4 23:47:44.962066 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:47:44.962076 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 4 23:47:44.962087 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:47:44.962093 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 23:47:44.962099 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 4 23:47:44.962105 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 4 23:47:44.962111 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 23:47:44.962117 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 23:47:44.962124 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:47:44.962132 kernel: NET: Registered PF_XDP protocol family
Sep 4 23:47:44.962330 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 4 23:47:44.962459 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 4 23:47:44.962555 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 4 23:47:44.962624 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Sep 4 23:47:44.962692 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Sep 4 23:47:44.962756 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Sep 4 23:47:44.962820 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 4 23:47:44.962887 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 4 23:47:44.962949 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 4 23:47:44.963012 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 4 23:47:44.963073 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 4 23:47:44.963134 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 4 23:47:44.963201 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 4 23:47:44.963281 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 4 23:47:44.963348 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 4 23:47:44.963411 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 4 23:47:44.963494 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 4 23:47:44.963560 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 4 23:47:44.963622 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 4 23:47:44.963685 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 4 23:47:44.963746 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 4 23:47:44.963808 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 4 23:47:44.963874 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Sep 4 23:47:44.963948 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 4 23:47:44.964013 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 4 23:47:44.964074 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Sep 4 23:47:44.964136 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 4 23:47:44.964197 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 4 23:47:44.964275 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 4 23:47:44.964341 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Sep 4 23:47:44.964403 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Sep 4 23:47:44.964478 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 4 23:47:44.964546 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 4 23:47:44.964614 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Sep 4 23:47:44.964678 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 4 23:47:44.964757 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 4 23:47:44.964862 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 23:47:44.964948 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 23:47:44.965036 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 23:47:44.965120 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Sep 4 23:47:44.965203 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 4 23:47:44.965304 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 4 23:47:44.965387 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 4 23:47:44.965453 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 4 23:47:44.965534 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 4 23:47:44.965594 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 4 23:47:44.965659 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 4 23:47:44.965717 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 4 23:47:44.965781 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 4 23:47:44.965844 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 4 23:47:44.965907 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 4 23:47:44.965965 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 4 23:47:44.966027 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Sep 4 23:47:44.966084 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 4 23:47:44.966145 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Sep 4 23:47:44.966207 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Sep 4 23:47:44.966281 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 4 23:47:44.966375 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Sep 4 23:47:44.966448 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Sep 4 23:47:44.966521 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 4 23:47:44.966587 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Sep 4 23:47:44.966645 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 4 23:47:44.966751 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 4 23:47:44.966767 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 4 23:47:44.966778 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:47:44.966788 kernel: Initialise system trusted keyrings
Sep 4 23:47:44.966798 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 4 23:47:44.966808 kernel: Key type asymmetric registered
Sep 4 23:47:44.966819 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:47:44.966829 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 23:47:44.966842 kernel: io scheduler mq-deadline registered
Sep 4 23:47:44.966852 kernel: io scheduler kyber registered
Sep 4 23:47:44.966862 kernel: io scheduler bfq registered
Sep 4 23:47:44.966958 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Sep 4 23:47:44.967053 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Sep 4 23:47:44.967147 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Sep 4 23:47:44.967213 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Sep 4 23:47:44.967301 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Sep 4 23:47:44.967534 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Sep 4 23:47:44.967774 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Sep 4 23:47:44.967928 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Sep 4 23:47:44.968018 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Sep 4 23:47:44.968160 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Sep 4 23:47:44.969371 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Sep 4 23:47:44.969459 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Sep 4 23:47:44.969563 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Sep 4 23:47:44.969633 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Sep 4 23:47:44.969706 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Sep 4 23:47:44.969771 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Sep 4 23:47:44.969781 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 4 23:47:44.969846 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Sep 4 23:47:44.969911 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Sep 4 23:47:44.969920 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 23:47:44.969927 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Sep 4 23:47:44.969933 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:47:44.969940 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 23:47:44.969949 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 23:47:44.969956 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 23:47:44.969962 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 23:47:44.969968 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 23:47:44.970088 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 4 23:47:44.970168 kernel: rtc_cmos 00:03: registered as rtc0
Sep 4 23:47:44.970230 kernel: rtc_cmos 00:03: setting system clock to 2025-09-04T23:47:44 UTC (1757029664)
Sep 4 23:47:44.970334 kernel: rtc_cmos 00:03: alarms up to
one day, y3k, 242 bytes nvram, hpet irqs Sep 4 23:47:44.970349 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 4 23:47:44.970357 kernel: NET: Registered PF_INET6 protocol family Sep 4 23:47:44.970363 kernel: Segment Routing with IPv6 Sep 4 23:47:44.970370 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 23:47:44.970376 kernel: NET: Registered PF_PACKET protocol family Sep 4 23:47:44.970382 kernel: Key type dns_resolver registered Sep 4 23:47:44.970389 kernel: IPI shorthand broadcast: enabled Sep 4 23:47:44.970395 kernel: sched_clock: Marking stable (1076007151, 134917996)->(1218702781, -7777634) Sep 4 23:47:44.970405 kernel: registered taskstats version 1 Sep 4 23:47:44.970411 kernel: Loading compiled-in X.509 certificates Sep 4 23:47:44.970417 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: f395d469db1520f53594f6c4948c5f8002e6cc8b' Sep 4 23:47:44.970423 kernel: Key type .fscrypt registered Sep 4 23:47:44.970434 kernel: Key type fscrypt-provisioning registered Sep 4 23:47:44.970446 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 23:47:44.970458 kernel: ima: Allocated hash algorithm: sha1 Sep 4 23:47:44.970482 kernel: ima: No architecture policies found Sep 4 23:47:44.970493 kernel: clk: Disabling unused clocks Sep 4 23:47:44.970508 kernel: Freeing unused kernel image (initmem) memory: 43508K Sep 4 23:47:44.970520 kernel: Write protecting the kernel read-only data: 38912k Sep 4 23:47:44.970532 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K Sep 4 23:47:44.970540 kernel: Run /init as init process Sep 4 23:47:44.970547 kernel: with arguments: Sep 4 23:47:44.970555 kernel: /init Sep 4 23:47:44.970561 kernel: with environment: Sep 4 23:47:44.970567 kernel: HOME=/ Sep 4 23:47:44.970573 kernel: TERM=linux Sep 4 23:47:44.970581 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 23:47:44.970588 systemd[1]: Successfully made /usr/ read-only. 
Sep 4 23:47:44.970598 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:47:44.970609 systemd[1]: Detected virtualization kvm.
Sep 4 23:47:44.970621 systemd[1]: Detected architecture x86-64.
Sep 4 23:47:44.970634 systemd[1]: Running in initrd.
Sep 4 23:47:44.970645 systemd[1]: No hostname configured, using default hostname.
Sep 4 23:47:44.970656 systemd[1]: Hostname set to .
Sep 4 23:47:44.970663 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:47:44.970670 systemd[1]: Queued start job for default target initrd.target.
Sep 4 23:47:44.970677 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:47:44.970684 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:47:44.970692 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 23:47:44.970702 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:47:44.970715 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 23:47:44.970731 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 23:47:44.970741 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 23:47:44.970749 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 23:47:44.970755 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:47:44.970762 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:47:44.970769 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:47:44.970776 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:47:44.970784 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:47:44.970790 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:47:44.970797 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:47:44.970804 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:47:44.970811 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 23:47:44.970817 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 23:47:44.970824 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:47:44.970831 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:47:44.970837 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:47:44.970845 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:47:44.970852 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 23:47:44.970859 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:47:44.970866 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 23:47:44.970872 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 23:47:44.970879 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:47:44.970886 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:47:44.970892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:47:44.970923 systemd-journald[187]: Collecting audit messages is disabled.
Sep 4 23:47:44.970944 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 23:47:44.970951 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:47:44.970960 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 23:47:44.970967 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 23:47:44.970975 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:47:44.970988 systemd-journald[187]: Journal started
Sep 4 23:47:44.971017 systemd-journald[187]: Runtime Journal (/run/log/journal/b3e50a01f90644059849e3d0aacb1fbf) is 4.8M, max 38.3M, 33.5M free.
Sep 4 23:47:44.936137 systemd-modules-load[188]: Inserted module 'overlay'
Sep 4 23:47:45.003445 kernel: Bridge firewalling registered
Sep 4 23:47:44.972112 systemd-modules-load[188]: Inserted module 'br_netfilter'
Sep 4 23:47:45.013295 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:47:45.013875 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:47:45.014607 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:47:45.015420 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:47:45.021397 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:47:45.024360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:47:45.025226 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:47:45.029701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:47:45.035168 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:47:45.037486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:47:45.041424 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 23:47:45.042479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:47:45.044786 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:47:45.051390 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:47:45.060825 dracut-cmdline[221]: dracut-dracut-053
Sep 4 23:47:45.065056 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:47:45.074356 systemd-resolved[224]: Positive Trust Anchors:
Sep 4 23:47:45.074934 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:47:45.075403 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:47:45.083116 systemd-resolved[224]: Defaulting to hostname 'linux'.
Sep 4 23:47:45.084043 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:47:45.084849 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:47:45.119293 kernel: SCSI subsystem initialized
Sep 4 23:47:45.126285 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 23:47:45.138281 kernel: iscsi: registered transport (tcp)
Sep 4 23:47:45.154292 kernel: iscsi: registered transport (qla4xxx)
Sep 4 23:47:45.154334 kernel: QLogic iSCSI HBA Driver
Sep 4 23:47:45.180199 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:47:45.189390 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 23:47:45.208541 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 23:47:45.208583 kernel: device-mapper: uevent: version 1.0.3
Sep 4 23:47:45.208593 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 23:47:45.246324 kernel: raid6: avx2x4 gen() 28896 MB/s
Sep 4 23:47:45.263291 kernel: raid6: avx2x2 gen() 31527 MB/s
Sep 4 23:47:45.280427 kernel: raid6: avx2x1 gen() 22111 MB/s
Sep 4 23:47:45.280486 kernel: raid6: using algorithm avx2x2 gen() 31527 MB/s
Sep 4 23:47:45.298481 kernel: raid6: .... xor() 31116 MB/s, rmw enabled
Sep 4 23:47:45.298504 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 23:47:45.316298 kernel: xor: automatically using best checksumming function avx
Sep 4 23:47:45.425308 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 23:47:45.435048 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:47:45.440410 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:47:45.453717 systemd-udevd[407]: Using default interface naming scheme 'v255'.
Sep 4 23:47:45.457067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:47:45.464408 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
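The `raid6:` lines above record the kernel benchmarking its candidate Galois-field syndrome generators and keeping the fastest one (the real selection loop lives in the kernel's `lib/raid6/algos.c`). A minimal Python sketch of that pick, using the throughput figures from this log as the benchmark results:

```python
# Hypothetical re-creation of the raid6 algorithm pick. The gen() throughputs
# (MB/s) are the values reported in the boot log above; the selection is
# simply "keep the candidate with the highest measured throughput".
gen_results = {
    "avx2x4": 28896,
    "avx2x2": 31527,
    "avx2x1": 22111,
}

# Choose the key whose measured throughput is largest.
best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
# → raid6: using algorithm avx2x2 gen() 31527 MB/s
```

This matches the log's own conclusion ("raid6: using algorithm avx2x2 gen() 31527 MB/s"): avx2x4 processes more stripes per iteration but benchmarked slower on this vCPU, so the two-stripe AVX2 variant wins.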
Sep 4 23:47:45.475060 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Sep 4 23:47:45.495942 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:47:45.501391 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:47:45.541685 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:47:45.550421 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 23:47:45.564736 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:47:45.571667 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:47:45.572900 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:47:45.575170 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:47:45.583433 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 23:47:45.598546 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:47:45.630014 kernel: ACPI: bus type USB registered
Sep 4 23:47:45.630070 kernel: usbcore: registered new interface driver usbfs
Sep 4 23:47:45.631435 kernel: usbcore: registered new interface driver hub
Sep 4 23:47:45.632755 kernel: usbcore: registered new device driver usb
Sep 4 23:47:45.648296 kernel: scsi host0: Virtio SCSI HBA
Sep 4 23:47:45.654297 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 4 23:47:45.654497 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Sep 4 23:47:45.656547 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Sep 4 23:47:45.656661 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 23:47:45.658828 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 4 23:47:45.660550 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Sep 4 23:47:45.662308 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Sep 4 23:47:45.666623 kernel: hub 1-0:1.0: USB hub found
Sep 4 23:47:45.666765 kernel: hub 1-0:1.0: 4 ports detected
Sep 4 23:47:45.668641 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Sep 4 23:47:45.673287 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Sep 4 23:47:45.680285 kernel: hub 2-0:1.0: USB hub found
Sep 4 23:47:45.680686 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:47:45.681559 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:47:45.683217 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:47:45.684924 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:47:45.685152 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:47:45.688240 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:47:45.716503 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:47:45.718344 kernel: libata version 3.00 loaded.
Sep 4 23:47:45.722512 kernel: hub 2-0:1.0: 4 ports detected
Sep 4 23:47:45.753965 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 23:47:45.754014 kernel: AES CTR mode by8 optimization enabled
Sep 4 23:47:45.754024 kernel: ahci 0000:00:1f.2: version 3.0
Sep 4 23:47:45.754152 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 4 23:47:45.756289 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 4 23:47:45.756444 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 4 23:47:45.761312 kernel: scsi host1: ahci
Sep 4 23:47:45.761452 kernel: scsi host2: ahci
Sep 4 23:47:45.762600 kernel: scsi host3: ahci
Sep 4 23:47:45.764269 kernel: scsi host4: ahci
Sep 4 23:47:45.764375 kernel: scsi host5: ahci
Sep 4 23:47:45.765285 kernel: scsi host6: ahci
Sep 4 23:47:45.765393 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 49
Sep 4 23:47:45.765403 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 49
Sep 4 23:47:45.765415 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 49
Sep 4 23:47:45.765422 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 49
Sep 4 23:47:45.765429 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 49
Sep 4 23:47:45.765436 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 49
Sep 4 23:47:45.769289 kernel: sd 0:0:0:0: Power-on or device reset occurred
Sep 4 23:47:45.770276 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Sep 4 23:47:45.770435 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 4 23:47:45.770592 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Sep 4 23:47:45.770718 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 4 23:47:45.775280 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 23:47:45.775307 kernel: GPT:17805311 != 80003071
Sep 4 23:47:45.775327 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 23:47:45.775342 kernel: GPT:17805311 != 80003071
Sep 4 23:47:45.775354 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 23:47:45.775369 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:47:45.775382 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 4 23:47:45.814490 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:47:45.820446 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:47:45.836040 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:47:45.928308 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Sep 4 23:47:46.067288 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 23:47:46.078268 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 4 23:47:46.078320 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 4 23:47:46.082804 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 4 23:47:46.082878 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 4 23:47:46.082910 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 4 23:47:46.087200 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Sep 4 23:47:46.087248 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 4 23:47:46.089651 kernel: ata1.00: applying bridge limits
Sep 4 23:47:46.089696 kernel: ata1.00: configured for UDMA/100
Sep 4 23:47:46.091447 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 4 23:47:46.130704 kernel: usbcore: registered new interface driver usbhid
Sep 4 23:47:46.130745 kernel: usbhid: USB HID core driver
Sep 4 23:47:46.138271 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Sep 4 23:47:46.142285 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Sep 4 23:47:46.166059 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 4 23:47:46.166209 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 23:47:46.177300 kernel: BTRFS: device fsid 185ffa67-4184-4488-b7c8-7c0711a63b2d devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (466)
Sep 4 23:47:46.182320 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (454)
Sep 4 23:47:46.182386 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Sep 4 23:47:46.199739 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Sep 4 23:47:46.214744 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Sep 4 23:47:46.229715 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Sep 4 23:47:46.230772 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Sep 4 23:47:46.246251 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 4 23:47:46.254429 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 23:47:46.261490 disk-uuid[579]: Primary Header is updated.
Sep 4 23:47:46.261490 disk-uuid[579]: Secondary Entries is updated.
Sep 4 23:47:46.261490 disk-uuid[579]: Secondary Header is updated.
Sep 4 23:47:46.273325 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:47:47.289327 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:47:47.291743 disk-uuid[580]: The operation has completed successfully.
Sep 4 23:47:47.338942 systemd[1]: disk-uuid.service: Deactivated successfully.
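The `GPT:17805311 != 80003071` warnings above are the kernel's consistency check tripping: the primary GPT header records where the backup (alternate) header should be, and that must be the disk's last sector. Here the image was evidently written for a smaller disk and later grown to 80003072 sectors, leaving the recorded backup-header LBA at the old disk end; disk-uuid's "Secondary Header is updated" message later in the log is the repair. A sketch of the arithmetic, using the values from this log:

```python
# Sketch of the check behind "GPT:Alternate GPT header not at the end of the
# disk." The field names follow the UEFI GPT header (AlternateLBA); all
# numeric values are taken from the boot log above.
disk_sectors = 80003072          # [sda] 80003072 512-byte logical blocks
last_lba = disk_sectors - 1      # where the backup header must live
alternate_lba = 17805311         # AlternateLBA recorded by the primary header

mismatch = alternate_lba != last_lba
if mismatch:
    # Same shape as the kernel's complaint:
    print(f"GPT:{alternate_lba} != {last_lba}")
    print("GPT:Alternate GPT header not at the end of the disk.")
```

17805311 corresponds to a disk of 17805312 sectors (about 8.5 GiB), consistent with a base image grown to the ~41 GB volume seen here; moving the backup header to the true last LBA (as `sgdisk -e` or GNU Parted would) clears the warning.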
Sep 4 23:47:47.339041 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 23:47:47.372348 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 23:47:47.374824 sh[596]: Success
Sep 4 23:47:47.386289 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 4 23:47:47.431515 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 23:47:47.448720 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 23:47:47.449449 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 23:47:47.468202 kernel: BTRFS info (device dm-0): first mount of filesystem 185ffa67-4184-4488-b7c8-7c0711a63b2d
Sep 4 23:47:47.468233 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:47:47.468247 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 23:47:47.471922 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 23:47:47.471942 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 23:47:47.481286 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 23:47:47.482838 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 23:47:47.483817 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 23:47:47.500538 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 23:47:47.503438 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 23:47:47.522204 kernel: BTRFS info (device sda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:47:47.522241 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:47:47.522280 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:47:47.526647 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 4 23:47:47.526674 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:47:47.535294 kernel: BTRFS info (device sda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:47:47.537460 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 23:47:47.545368 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 23:47:47.603650 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:47:47.612371 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:47:47.615553 ignition[702]: Ignition 2.20.0
Sep 4 23:47:47.615563 ignition[702]: Stage: fetch-offline
Sep 4 23:47:47.615586 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:47:47.615593 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 4 23:47:47.615652 ignition[702]: parsed url from cmdline: ""
Sep 4 23:47:47.615654 ignition[702]: no config URL provided
Sep 4 23:47:47.615657 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:47:47.615664 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:47:47.615667 ignition[702]: failed to fetch config: resource requires networking
Sep 4 23:47:47.615802 ignition[702]: Ignition finished successfully
Sep 4 23:47:47.619883 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:47:47.630568 systemd-networkd[779]: lo: Link UP
Sep 4 23:47:47.630576 systemd-networkd[779]: lo: Gained carrier
Sep 4 23:47:47.632105 systemd-networkd[779]: Enumeration completed
Sep 4 23:47:47.632236 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:47:47.632690 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:47:47.632694 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:47:47.633498 systemd[1]: Reached target network.target - Network.
Sep 4 23:47:47.634064 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:47:47.634068 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:47:47.634601 systemd-networkd[779]: eth0: Link UP
Sep 4 23:47:47.634604 systemd-networkd[779]: eth0: Gained carrier
Sep 4 23:47:47.634611 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:47:47.638667 systemd-networkd[779]: eth1: Link UP
Sep 4 23:47:47.638672 systemd-networkd[779]: eth1: Gained carrier
Sep 4 23:47:47.638680 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:47:47.642381 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 23:47:47.653518 ignition[784]: Ignition 2.20.0
Sep 4 23:47:47.653531 ignition[784]: Stage: fetch
Sep 4 23:47:47.653714 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:47:47.653726 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 4 23:47:47.653817 ignition[784]: parsed url from cmdline: ""
Sep 4 23:47:47.653822 ignition[784]: no config URL provided
Sep 4 23:47:47.653828 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:47:47.653837 ignition[784]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:47:47.653860 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Sep 4 23:47:47.654184 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Sep 4 23:47:47.665305 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 4 23:47:47.700301 systemd-networkd[779]: eth0: DHCPv4 address 46.62.204.39/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 4 23:47:47.855195 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Sep 4 23:47:47.861555 ignition[784]: GET result: OK
Sep 4 23:47:47.861639 ignition[784]: parsing config with SHA512: 6066d1f2082697ca7faa800017754a495d1b1a07a18b327e4aa881c0cd98dd635d753b6cc0ee24f57f98469820c5dc19cf819f2646cc4e04c9ab0f0922ec8fcb
Sep 4 23:47:47.866410 unknown[784]: fetched base config from "system"
Sep 4 23:47:47.866424 unknown[784]: fetched base config from "system"
Sep 4 23:47:47.867282 ignition[784]: fetch: fetch complete
Sep 4 23:47:47.866433 unknown[784]: fetched user config from "hetzner"
Sep 4 23:47:47.867292 ignition[784]: fetch: fetch passed
Sep 4 23:47:47.868966 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 23:47:47.867352 ignition[784]: Ignition finished successfully
Sep 4 23:47:47.875461 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
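Once the fetch succeeds, Ignition logs "parsing config with SHA512: …" — a digest of the userdata bytes it pulled from the metadata service, useful for confirming which config a machine actually booted with. A minimal sketch of the equivalent digest computation (the payload here is a hypothetical stand-in, not the real Hetzner userdata from this boot):

```python
# Sketch: compute the same kind of SHA512 digest Ignition logs for a fetched
# config. `userdata` is a made-up example payload; on a real host you would
# hash the exact bytes returned by the metadata endpoint.
import hashlib

userdata = b'{"ignition": {"version": "3.4.0"}}'  # hypothetical payload
digest = hashlib.sha512(userdata).hexdigest()
print(f"parsing config with SHA512: {digest}")
```

Comparing such a digest against the one in the journal is a quick way to verify that the config a machine received matches the one you expected to serve.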
Sep 4 23:47:47.891665 ignition[792]: Ignition 2.20.0
Sep 4 23:47:47.891681 ignition[792]: Stage: kargs
Sep 4 23:47:47.891920 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:47:47.891934 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 4 23:47:47.895005 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 23:47:47.893298 ignition[792]: kargs: kargs passed
Sep 4 23:47:47.893353 ignition[792]: Ignition finished successfully
Sep 4 23:47:47.904465 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 23:47:47.917182 ignition[798]: Ignition 2.20.0
Sep 4 23:47:47.917982 ignition[798]: Stage: disks
Sep 4 23:47:47.918237 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:47:47.918274 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 4 23:47:47.919745 ignition[798]: disks: disks passed
Sep 4 23:47:47.922525 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 23:47:47.919794 ignition[798]: Ignition finished successfully
Sep 4 23:47:47.925965 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 23:47:47.926715 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 23:47:47.927251 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:47:47.929028 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:47:47.929699 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:47:47.940419 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 23:47:47.954230 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 4 23:47:47.956693 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 23:47:47.961355 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 23:47:48.025309 kernel: EXT4-fs (sda9): mounted filesystem 86dd2c20-900e-43ec-8fda-e9f0f484a013 r/w with ordered data mode. Quota mode: none.
Sep 4 23:47:48.025853 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 23:47:48.026605 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:47:48.031313 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:47:48.034326 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 23:47:48.035996 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 4 23:47:48.038785 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 23:47:48.039520 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:47:48.045636 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (815)
Sep 4 23:47:48.045654 kernel: BTRFS info (device sda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:47:48.046796 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 23:47:48.051377 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:47:48.051395 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:47:48.058551 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 4 23:47:48.058586 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:47:48.060684 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 23:47:48.068909 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:47:48.096354 coreos-metadata[817]: Sep 04 23:47:48.096 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Sep 4 23:47:48.097411 coreos-metadata[817]: Sep 04 23:47:48.097 INFO Fetch successful
Sep 4 23:47:48.098365 coreos-metadata[817]: Sep 04 23:47:48.098 INFO wrote hostname ci-4230-2-2-n-de0727ed16 to /sysroot/etc/hostname
Sep 4 23:47:48.100653 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 23:47:48.102793 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 23:47:48.105465 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Sep 4 23:47:48.109453 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 23:47:48.113000 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 23:47:48.173866 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 23:47:48.183359 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 23:47:48.185836 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 23:47:48.193368 kernel: BTRFS info (device sda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:47:48.207817 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 23:47:48.210454 ignition[932]: INFO : Ignition 2.20.0
Sep 4 23:47:48.210454 ignition[932]: INFO : Stage: mount
Sep 4 23:47:48.211571 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:47:48.211571 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 4 23:47:48.211571 ignition[932]: INFO : mount: mount passed
Sep 4 23:47:48.211571 ignition[932]: INFO : Ignition finished successfully
Sep 4 23:47:48.212854 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 23:47:48.220365 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 23:47:48.464945 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 23:47:48.470450 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:47:48.480282 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (944)
Sep 4 23:47:48.480313 kernel: BTRFS info (device sda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:47:48.482529 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:47:48.485046 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:47:48.490740 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 4 23:47:48.490766 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:47:48.492984 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:47:48.514228 ignition[960]: INFO : Ignition 2.20.0
Sep 4 23:47:48.514228 ignition[960]: INFO : Stage: files
Sep 4 23:47:48.515613 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:47:48.515613 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 4 23:47:48.515613 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 23:47:48.518036 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 23:47:48.518036 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 23:47:48.520788 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 23:47:48.521685 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 23:47:48.521685 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 23:47:48.521167 unknown[960]: wrote ssh authorized keys file for user: core
Sep 4 23:47:48.524112 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 23:47:48.524112 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 4 23:47:48.783610 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 23:47:49.220343 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 23:47:49.220343 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:47:49.222125 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 4 23:47:49.447508 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 23:47:49.460462 systemd-networkd[779]: eth1: Gained IPv6LL
Sep 4 23:47:49.484599 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:47:49.484599 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 23:47:49.487392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 23:47:49.487392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:47:49.487392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:47:49.487392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:47:49.487392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:47:49.487392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:47:49.487392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:47:49.487392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:47:49.487392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:47:49.487392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:47:49.487392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:47:49.487392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:47:49.487392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 4 23:47:49.652529 systemd-networkd[779]: eth0: Gained IPv6LL
Sep 4 23:47:49.855197 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 23:47:50.034855 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:47:50.034855 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 23:47:50.037335 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:47:50.037335 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:47:50.037335 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 23:47:50.037335 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 4 23:47:50.037335 ignition[960]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 4 23:47:50.037335 ignition[960]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 4 23:47:50.037335 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 4 23:47:50.037335 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 23:47:50.037335 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 23:47:50.037335 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:47:50.037335 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:47:50.037335 ignition[960]: INFO : files: files passed
Sep 4 23:47:50.037335 ignition[960]: INFO : Ignition finished successfully
Sep 4 23:47:50.039668 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 23:47:50.046973 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
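The files stage above writes a unit named prepare-helm.service and enables it via preset, but the log never shows the unit's body. As a purely hypothetical sketch (every directive below is an assumption, inferred only from the helm tarball written to /opt earlier in this stage), such a unit commonly looks like:

```ini
# Hypothetical reconstruction; the actual prepare-helm.service contents
# are not present in this log.
[Unit]
Description=Unpack helm to /opt/bin
ConditionPathExists=!/opt/bin/helm

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/bin/tar xzf /opt/helm-v3.17.0-linux-amd64.tar.gz -C /opt/bin --strip-components=1

[Install]
WantedBy=multi-user.target
```

The "setting preset to enabled" entries in the log correspond to the [Install] section being honored at first boot.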
Sep 4 23:47:50.053449 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 23:47:50.054588 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 23:47:50.054681 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 23:47:50.065871 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:47:50.065871 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:47:50.068571 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:47:50.070031 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:47:50.071544 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 23:47:50.078435 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 23:47:50.106802 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 23:47:50.106900 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 23:47:50.108845 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 23:47:50.110688 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 23:47:50.111376 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 23:47:50.121382 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 23:47:50.130973 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:47:50.135388 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 23:47:50.145069 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:47:50.146293 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:47:50.146858 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 23:47:50.147333 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 23:47:50.147429 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:47:50.148068 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 23:47:50.148658 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 23:47:50.149770 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 23:47:50.150721 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:47:50.151740 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 23:47:50.152944 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 23:47:50.153963 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:47:50.154945 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 23:47:50.156053 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 23:47:50.157187 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 23:47:50.158296 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 23:47:50.158378 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:47:50.160138 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:47:50.161049 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:47:50.162163 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 23:47:50.162251 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:47:50.163269 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 23:47:50.163387 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:47:50.164742 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 23:47:50.164884 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:47:50.166158 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 23:47:50.166292 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 23:47:50.167363 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 4 23:47:50.167502 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 23:47:50.185511 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 23:47:50.188440 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 23:47:50.188944 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 23:47:50.189102 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:47:50.190348 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 23:47:50.191376 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:47:50.199146 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 23:47:50.199218 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 23:47:50.207325 ignition[1014]: INFO : Ignition 2.20.0
Sep 4 23:47:50.207325 ignition[1014]: INFO : Stage: umount
Sep 4 23:47:50.207325 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:47:50.207325 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 4 23:47:50.214980 ignition[1014]: INFO : umount: umount passed
Sep 4 23:47:50.214980 ignition[1014]: INFO : Ignition finished successfully
Sep 4 23:47:50.210594 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 23:47:50.210707 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 23:47:50.216917 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 23:47:50.217575 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 23:47:50.217638 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 23:47:50.219424 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 23:47:50.219461 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 23:47:50.220479 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 23:47:50.220525 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 23:47:50.221473 systemd[1]: Stopped target network.target - Network.
Sep 4 23:47:50.222451 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 23:47:50.222507 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:47:50.223568 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 23:47:50.224506 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 23:47:50.228345 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:47:50.229451 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 23:47:50.229884 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 23:47:50.230774 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 23:47:50.230804 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:47:50.231628 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 23:47:50.231653 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:47:50.232447 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 23:47:50.232480 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 23:47:50.233335 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 23:47:50.233366 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 23:47:50.234345 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 23:47:50.235160 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 23:47:50.236732 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 23:47:50.236795 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 23:47:50.237775 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 23:47:50.237828 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 23:47:50.240821 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 23:47:50.240898 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 23:47:50.243334 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 23:47:50.243532 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 23:47:50.243563 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:47:50.245946 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:47:50.246124 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 23:47:50.246199 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 23:47:50.248114 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 23:47:50.248370 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 23:47:50.248405 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:47:50.263331 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 23:47:50.263752 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 23:47:50.263791 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:47:50.264349 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:47:50.264382 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:47:50.265327 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 23:47:50.265358 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:47:50.266205 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:47:50.268212 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:47:50.273886 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 23:47:50.273964 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 23:47:50.279546 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 23:47:50.279645 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:47:50.280800 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 23:47:50.280841 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:47:50.281723 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 23:47:50.281748 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:47:50.282725 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 23:47:50.282758 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:47:50.284364 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 23:47:50.284401 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:47:50.285435 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:47:50.285475 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:47:50.291431 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 23:47:50.291956 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 23:47:50.292005 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:47:50.294723 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 23:47:50.295335 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:47:50.296001 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 23:47:50.296043 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:47:50.297594 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:47:50.297655 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:47:50.300325 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 23:47:50.300391 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 23:47:50.301656 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 23:47:50.307393 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 23:47:50.313921 systemd[1]: Switching root.
Sep 4 23:47:50.348982 systemd-journald[187]: Journal stopped
Sep 4 23:47:51.217557 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Sep 4 23:47:51.217609 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 23:47:51.217620 kernel: SELinux: policy capability open_perms=1 Sep 4 23:47:51.217628 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 23:47:51.217637 kernel: SELinux: policy capability always_check_network=0 Sep 4 23:47:51.217646 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 23:47:51.217654 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 23:47:51.217662 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 23:47:51.217669 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 23:47:51.217677 kernel: audit: type=1403 audit(1757029670.488:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 23:47:51.217686 systemd[1]: Successfully loaded SELinux policy in 41.605ms. Sep 4 23:47:51.217701 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.091ms. Sep 4 23:47:51.217712 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 23:47:51.217721 systemd[1]: Detected virtualization kvm. Sep 4 23:47:51.217729 systemd[1]: Detected architecture x86-64. Sep 4 23:47:51.217737 systemd[1]: Detected first boot. Sep 4 23:47:51.217746 systemd[1]: Hostname set to . Sep 4 23:47:51.217760 systemd[1]: Initializing machine ID from VM UUID. Sep 4 23:47:51.217769 zram_generator::config[1062]: No configuration found. 
Sep 4 23:47:51.217778 kernel: Guest personality initialized and is inactive Sep 4 23:47:51.217787 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 4 23:47:51.217795 kernel: Initialized host personality Sep 4 23:47:51.217802 kernel: NET: Registered PF_VSOCK protocol family Sep 4 23:47:51.217810 systemd[1]: Populated /etc with preset unit settings. Sep 4 23:47:51.217818 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 4 23:47:51.217827 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 23:47:51.217836 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 23:47:51.217848 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 23:47:51.217856 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 23:47:51.217866 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 23:47:51.217875 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 23:47:51.217883 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 23:47:51.217892 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 23:47:51.217900 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 23:47:51.217909 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 23:47:51.217918 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 23:47:51.217926 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:47:51.217935 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:47:51.217945 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Sep 4 23:47:51.217953 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 23:47:51.217962 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 23:47:51.217970 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 23:47:51.217979 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 23:47:51.217987 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:47:51.217997 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 23:47:51.218005 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 23:47:51.218013 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 23:47:51.218022 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 23:47:51.218031 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:47:51.218039 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 23:47:51.218048 systemd[1]: Reached target slices.target - Slice Units. Sep 4 23:47:51.218056 systemd[1]: Reached target swap.target - Swaps. Sep 4 23:47:51.218065 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 23:47:51.218074 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 23:47:51.218083 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 4 23:47:51.218095 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:47:51.218105 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 23:47:51.218113 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:47:51.218121 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Sep 4 23:47:51.218131 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 23:47:51.218139 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 23:47:51.218148 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 23:47:51.218157 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:47:51.218165 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 23:47:51.218174 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 23:47:51.218182 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 23:47:51.218191 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 23:47:51.218201 systemd[1]: Reached target machines.target - Containers. Sep 4 23:47:51.218210 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 23:47:51.218218 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:47:51.218227 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 23:47:51.218235 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 23:47:51.218244 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:47:51.218252 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 23:47:51.220414 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:47:51.220426 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 23:47:51.220439 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 4 23:47:51.220448 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 23:47:51.220457 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 23:47:51.220466 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 23:47:51.220474 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 23:47:51.220483 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 23:47:51.220513 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:47:51.220522 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:47:51.220533 kernel: loop: module loaded
Sep 4 23:47:51.220542 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:47:51.220551 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 23:47:51.220559 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 23:47:51.220568 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 23:47:51.220577 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:47:51.220585 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 23:47:51.220594 systemd[1]: Stopped verity-setup.service.
Sep 4 23:47:51.220604 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:47:51.220612 kernel: ACPI: bus type drm_connector registered
Sep 4 23:47:51.220621 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 23:47:51.220630 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 23:47:51.220638 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 23:47:51.220649 kernel: fuse: init (API version 7.39)
Sep 4 23:47:51.220657 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 23:47:51.220683 systemd-journald[1147]: Collecting audit messages is disabled.
Sep 4 23:47:51.220703 systemd-journald[1147]: Journal started
Sep 4 23:47:51.220723 systemd-journald[1147]: Runtime Journal (/run/log/journal/b3e50a01f90644059849e3d0aacb1fbf) is 4.8M, max 38.3M, 33.5M free.
Sep 4 23:47:51.229250 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 23:47:51.229326 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 23:47:51.229344 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 23:47:50.966452 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 23:47:50.976743 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 4 23:47:50.977096 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 23:47:51.232330 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:47:51.234034 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:47:51.234723 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 23:47:51.234839 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 23:47:51.235596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:47:51.235705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:47:51.236399 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:47:51.236584 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:47:51.237209 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:47:51.237442 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:47:51.238114 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 23:47:51.238218 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 23:47:51.239070 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:47:51.239175 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:47:51.239900 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:47:51.240647 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:47:51.241383 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 23:47:51.247230 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 23:47:51.253522 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 23:47:51.257167 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 23:47:51.257951 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 23:47:51.257979 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:47:51.259319 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 23:47:51.266849 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 23:47:51.269160 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 23:47:51.270692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:47:51.275553 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 23:47:51.279061 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 23:47:51.279908 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:47:51.280807 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 23:47:51.281345 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:47:51.286395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:47:51.288144 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 23:47:51.290540 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:47:51.294288 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 23:47:51.295026 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:47:51.295898 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 23:47:51.296795 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 23:47:51.297723 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 23:47:51.298746 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 23:47:51.307245 kernel: loop0: detected capacity change from 0 to 8
Sep 4 23:47:51.309224 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 23:47:51.311589 systemd-journald[1147]: Time spent on flushing to /var/log/journal/b3e50a01f90644059849e3d0aacb1fbf is 59.461ms for 1149 entries.
Sep 4 23:47:51.311589 systemd-journald[1147]: System Journal (/var/log/journal/b3e50a01f90644059849e3d0aacb1fbf) is 8M, max 584.8M, 576.8M free.
Sep 4 23:47:51.380543 systemd-journald[1147]: Received client request to flush runtime journal.
Sep 4 23:47:51.380581 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 23:47:51.380602 kernel: loop1: detected capacity change from 0 to 224512
Sep 4 23:47:51.318573 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 23:47:51.328585 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 23:47:51.343732 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:47:51.352176 udevadm[1198]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 4 23:47:51.355075 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Sep 4 23:47:51.355092 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Sep 4 23:47:51.361682 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:47:51.377556 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 23:47:51.383651 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 23:47:51.402194 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 23:47:51.421318 kernel: loop2: detected capacity change from 0 to 147912
Sep 4 23:47:51.433215 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 23:47:51.440771 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:47:51.460798 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Sep 4 23:47:51.460816 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Sep 4 23:47:51.464610 kernel: loop3: detected capacity change from 0 to 138176
Sep 4 23:47:51.467927 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:47:51.514442 kernel: loop4: detected capacity change from 0 to 8
Sep 4 23:47:51.514927 kernel: loop5: detected capacity change from 0 to 224512
Sep 4 23:47:51.536289 kernel: loop6: detected capacity change from 0 to 147912
Sep 4 23:47:51.558115 kernel: loop7: detected capacity change from 0 to 138176
Sep 4 23:47:51.573753 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Sep 4 23:47:51.575213 (sd-merge)[1215]: Merged extensions into '/usr'.
Sep 4 23:47:51.581162 systemd[1]: Reload requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 23:47:51.581277 systemd[1]: Reloading...
Sep 4 23:47:51.642279 zram_generator::config[1239]: No configuration found.
Sep 4 23:47:51.761923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:47:51.810562 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 23:47:51.817073 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 23:47:51.817372 systemd[1]: Reloading finished in 235 ms.
Sep 4 23:47:51.846795 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 23:47:51.848627 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 23:47:51.865432 systemd[1]: Starting ensure-sysext.service...
Sep 4 23:47:51.869469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:47:51.891213 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 23:47:51.891689 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 23:47:51.892313 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 23:47:51.892573 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Sep 4 23:47:51.892665 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Sep 4 23:47:51.895080 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)...
Sep 4 23:47:51.895098 systemd[1]: Reloading...
Sep 4 23:47:51.895411 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:47:51.895415 systemd-tmpfiles[1287]: Skipping /boot
Sep 4 23:47:51.902426 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:47:51.902517 systemd-tmpfiles[1287]: Skipping /boot
Sep 4 23:47:51.952275 zram_generator::config[1313]: No configuration found.
Sep 4 23:47:52.035366 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:47:52.088139 systemd[1]: Reloading finished in 192 ms.
Sep 4 23:47:52.104384 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 23:47:52.111793 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:47:52.122482 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:47:52.126305 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 23:47:52.129470 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 23:47:52.133615 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:47:52.137451 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:47:52.142471 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 23:47:52.146376 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:47:52.146562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:47:52.153351 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:47:52.157468 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:47:52.160457 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:47:52.162379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:47:52.162505 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:47:52.165456 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 23:47:52.166089 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:47:52.167123 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:47:52.168317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:47:52.169866 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 23:47:52.181049 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:47:52.182083 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:47:52.185520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:47:52.186024 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:47:52.186147 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:47:52.187974 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 23:47:52.188842 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:47:52.190705 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:47:52.190898 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:47:52.192133 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:47:52.192274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:47:52.197820 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:47:52.199288 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 23:47:52.203375 systemd-udevd[1369]: Using default interface naming scheme 'v255'.
Sep 4 23:47:52.206341 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:47:52.206606 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:47:52.211471 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:47:52.215081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:47:52.224644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:47:52.225185 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:47:52.226058 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:47:52.226171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:47:52.227792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:47:52.228190 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:47:52.229958 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 23:47:52.231689 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:47:52.231923 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:47:52.238105 augenrules[1403]: No rules
Sep 4 23:47:52.237963 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 23:47:52.240247 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:47:52.240429 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:47:52.241170 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:47:52.241669 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:47:52.250045 systemd[1]: Finished ensure-sysext.service.
Sep 4 23:47:52.250705 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:47:52.250841 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:47:52.258395 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:47:52.258445 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:47:52.265406 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 23:47:52.267157 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 23:47:52.268405 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:47:52.268523 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:47:52.278392 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:47:52.317612 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 23:47:52.318145 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 23:47:52.337139 systemd-resolved[1368]: Positive Trust Anchors:
Sep 4 23:47:52.337599 systemd-resolved[1368]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:47:52.337689 systemd-resolved[1368]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:47:52.345892 systemd-resolved[1368]: Using system hostname 'ci-4230-2-2-n-de0727ed16'.
Sep 4 23:47:52.347149 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:47:52.347936 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:47:52.370877 systemd-networkd[1423]: lo: Link UP
Sep 4 23:47:52.370884 systemd-networkd[1423]: lo: Gained carrier
Sep 4 23:47:52.372931 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 23:47:52.384371 systemd-networkd[1423]: Enumeration completed
Sep 4 23:47:52.384465 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:47:52.385030 systemd[1]: Reached target network.target - Network.
Sep 4 23:47:52.391566 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 23:47:52.399425 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 23:47:52.416971 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 23:47:52.418248 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:47:52.419285 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:47:52.420542 systemd-networkd[1423]: eth0: Link UP
Sep 4 23:47:52.420549 systemd-networkd[1423]: eth0: Gained carrier
Sep 4 23:47:52.420565 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:47:52.437125 systemd-networkd[1423]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:47:52.437132 systemd-networkd[1423]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:47:52.438393 systemd-networkd[1423]: eth1: Link UP
Sep 4 23:47:52.438399 systemd-networkd[1423]: eth1: Gained carrier
Sep 4 23:47:52.438410 systemd-networkd[1423]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:47:52.448278 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1431)
Sep 4 23:47:52.448327 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 4 23:47:52.457393 systemd-networkd[1423]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 4 23:47:52.457916 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection.
Sep 4 23:47:52.471430 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 23:47:52.478356 systemd-networkd[1423]: eth0: DHCPv4 address 46.62.204.39/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 4 23:47:52.479565 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection.
Sep 4 23:47:52.481559 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection.
Sep 4 23:47:52.495293 kernel: ACPI: button: Power Button [PWRF]
Sep 4 23:47:52.498022 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 4 23:47:52.507427 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 23:47:52.510402 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Sep 4 23:47:52.510473 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:47:52.510559 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:47:52.512558 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:47:52.514914 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:47:52.518401 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:47:52.518899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:47:52.518929 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:47:52.518949 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:47:52.518959 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:47:52.535063 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 23:47:52.535292 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 4 23:47:52.541879 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 4 23:47:52.542031 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 4 23:47:52.542974 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:47:52.544238 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:47:52.547331 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 4 23:47:52.548679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:47:52.548806 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:47:52.549981 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:47:52.550167 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:47:52.552933 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:47:52.553383 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:47:52.569273 kernel: EDAC MC: Ver: 3.0.0
Sep 4 23:47:52.578419 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:47:52.588730 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Sep 4 23:47:52.588782 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Sep 4 23:47:52.593288 kernel: Console: switching to colour dummy device 80x25
Sep 4 23:47:52.593332 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 4 23:47:52.593344 kernel: [drm] features: -context_init
Sep 4 23:47:52.594676 kernel: [drm] number of scanouts: 1
Sep 4 23:47:52.595279 kernel: [drm] number of cap sets: 0
Sep 4 23:47:52.597293 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Sep 4 23:47:52.601802 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 4 23:47:52.601838 kernel: Console: switching to colour frame buffer device 160x50
Sep 4 23:47:52.608286 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 4 23:47:52.622654 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:47:52.623974 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:47:52.626573 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:47:52.634474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:47:52.638138 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:47:52.638341 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:47:52.641907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:47:52.684648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:47:52.758210 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 23:47:52.764424 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 23:47:52.773283 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:47:52.800426 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 23:47:52.801147 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:47:52.801296 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:47:52.801506 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 23:47:52.801599 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 23:47:52.801828 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 23:47:52.801977 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 23:47:52.802049 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 23:47:52.802135 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 23:47:52.802168 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:47:52.802227 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:47:52.804330 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 23:47:52.805698 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 23:47:52.808135 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 23:47:52.810915 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 23:47:52.811047 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 23:47:52.813450 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 23:47:52.815061 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 23:47:52.817080 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 23:47:52.820519 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 23:47:52.822518 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:47:52.824033 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:47:52.825689 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:47:52.825841 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:47:52.826338 lvm[1488]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:47:52.833421 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 23:47:52.844573 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 23:47:52.854482 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 23:47:52.859991 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 23:47:52.869442 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 23:47:52.870869 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 23:47:52.876276 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 23:47:52.881455 jq[1492]: false
Sep 4 23:47:52.878788 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 23:47:52.885428 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Sep 4 23:47:52.890024 dbus-daemon[1491]: [system] SELinux support is enabled
Sep 4 23:47:52.893414 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 23:47:52.897155 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 23:47:52.901835 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 23:47:52.906012 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 23:47:52.906457 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 23:47:52.907393 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 23:47:52.912306 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 23:47:52.913206 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 23:47:52.919437 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 23:47:52.926602 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 23:47:52.928680 extend-filesystems[1495]: Found loop4
Sep 4 23:47:52.928680 extend-filesystems[1495]: Found loop5
Sep 4 23:47:52.928680 extend-filesystems[1495]: Found loop6
Sep 4 23:47:52.928680 extend-filesystems[1495]: Found loop7
Sep 4 23:47:52.928680 extend-filesystems[1495]: Found sda
Sep 4 23:47:52.928680 extend-filesystems[1495]: Found sda1
Sep 4 23:47:52.928680 extend-filesystems[1495]: Found sda2
Sep 4 23:47:52.928680 extend-filesystems[1495]: Found sda3
Sep 4 23:47:52.928680 extend-filesystems[1495]: Found usr
Sep 4 23:47:52.928680 extend-filesystems[1495]: Found sda4
Sep 4 23:47:52.928680 extend-filesystems[1495]: Found sda6
Sep 4 23:47:52.928680 extend-filesystems[1495]: Found sda7
Sep 4 23:47:52.928680 extend-filesystems[1495]: Found sda9
Sep 4 23:47:52.928680 extend-filesystems[1495]: Checking size of /dev/sda9
Sep 4 23:47:52.926759 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 23:47:52.960467 coreos-metadata[1490]: Sep 04 23:47:52.949 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Sep 4 23:47:52.960467 coreos-metadata[1490]: Sep 04 23:47:52.949 INFO Fetch successful
Sep 4 23:47:52.960467 coreos-metadata[1490]: Sep 04 23:47:52.949 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Sep 4 23:47:52.960467 coreos-metadata[1490]: Sep 04 23:47:52.949 INFO Fetch successful
Sep 4 23:47:52.934454 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 23:47:52.934635 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 23:47:52.945767 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 23:47:52.945817 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 23:47:52.963947 jq[1506]: true
Sep 4 23:47:52.952127 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 23:47:52.952183 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 23:47:52.970815 extend-filesystems[1495]: Resized partition /dev/sda9
Sep 4 23:47:52.974571 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 23:47:52.974754 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 23:47:52.975811 jq[1524]: true
Sep 4 23:47:52.981203 extend-filesystems[1534]: resize2fs 1.47.1 (20-May-2024)
Sep 4 23:47:52.991235 tar[1509]: linux-amd64/LICENSE
Sep 4 23:47:52.991235 tar[1509]: linux-amd64/helm
Sep 4 23:47:52.996048 (ntainerd)[1526]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 23:47:52.997289 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Sep 4 23:47:53.008241 update_engine[1504]: I20250904 23:47:53.007941 1504 main.cc:92] Flatcar Update Engine starting
Sep 4 23:47:53.013462 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 23:47:53.018654 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 23:47:53.023241 update_engine[1504]: I20250904 23:47:53.020707 1504 update_check_scheduler.cc:74] Next update check in 6m44s
Sep 4 23:47:53.071104 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 23:47:53.073924 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 23:47:53.112974 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1434)
Sep 4 23:47:53.118625 bash[1559]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 23:47:53.122647 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 23:47:53.131406 systemd[1]: Starting sshkeys.service...
Sep 4 23:47:53.152282 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 23:47:53.159424 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 23:47:53.172997 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Sep 4 23:47:53.170032 systemd-logind[1502]: New seat seat0.
Sep 4 23:47:53.201043 systemd-logind[1502]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 4 23:47:53.201061 systemd-logind[1502]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 23:47:53.208432 extend-filesystems[1534]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 4 23:47:53.208432 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 5
Sep 4 23:47:53.208432 extend-filesystems[1534]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Sep 4 23:47:53.213478 extend-filesystems[1495]: Resized filesystem in /dev/sda9
Sep 4 23:47:53.213478 extend-filesystems[1495]: Found sr0
Sep 4 23:47:53.229116 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 23:47:53.231673 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 23:47:53.231846 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 23:47:53.251890 coreos-metadata[1567]: Sep 04 23:47:53.250 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Sep 4 23:47:53.253878 coreos-metadata[1567]: Sep 04 23:47:53.253 INFO Fetch successful
Sep 4 23:47:53.256960 unknown[1567]: wrote ssh authorized keys file for user: core
Sep 4 23:47:53.294054 update-ssh-keys[1572]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 23:47:53.297519 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 23:47:53.309597 systemd[1]: Finished sshkeys.service.
Sep 4 23:47:53.351240 containerd[1526]: time="2025-09-04T23:47:53.351132515Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Sep 4 23:47:53.392056 containerd[1526]: time="2025-09-04T23:47:53.391971702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:47:53.396275 containerd[1526]: time="2025-09-04T23:47:53.395398938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:47:53.396275 containerd[1526]: time="2025-09-04T23:47:53.395421762Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 23:47:53.396275 containerd[1526]: time="2025-09-04T23:47:53.395434746Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 23:47:53.396275 containerd[1526]: time="2025-09-04T23:47:53.395565862Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 23:47:53.396275 containerd[1526]: time="2025-09-04T23:47:53.395580750Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 23:47:53.396275 containerd[1526]: time="2025-09-04T23:47:53.395628419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:47:53.396275 containerd[1526]: time="2025-09-04T23:47:53.395638828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:47:53.396275 containerd[1526]: time="2025-09-04T23:47:53.395785884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:47:53.396275 containerd[1526]: time="2025-09-04T23:47:53.395797636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 23:47:53.396275 containerd[1526]: time="2025-09-04T23:47:53.395807815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:47:53.396275 containerd[1526]: time="2025-09-04T23:47:53.395819337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 23:47:53.396466 containerd[1526]: time="2025-09-04T23:47:53.395891702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:47:53.396466 containerd[1526]: time="2025-09-04T23:47:53.396039911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:47:53.396466 containerd[1526]: time="2025-09-04T23:47:53.396127034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:47:53.396466 containerd[1526]: time="2025-09-04T23:47:53.396137223Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 23:47:53.396466 containerd[1526]: time="2025-09-04T23:47:53.396193168Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 23:47:53.396466 containerd[1526]: time="2025-09-04T23:47:53.396229125Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 23:47:53.401794 sshd_keygen[1535]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 23:47:53.402276 containerd[1526]: time="2025-09-04T23:47:53.402020615Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 23:47:53.402276 containerd[1526]: time="2025-09-04T23:47:53.402058687Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 23:47:53.402276 containerd[1526]: time="2025-09-04T23:47:53.402072643Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 23:47:53.402276 containerd[1526]: time="2025-09-04T23:47:53.402085777Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 23:47:53.402276 containerd[1526]: time="2025-09-04T23:47:53.402098251Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 23:47:53.402276 containerd[1526]: time="2025-09-04T23:47:53.402184312Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 23:47:53.402775 containerd[1526]: time="2025-09-04T23:47:53.402565847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 23:47:53.402775 containerd[1526]: time="2025-09-04T23:47:53.402650696Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 23:47:53.402775 containerd[1526]: time="2025-09-04T23:47:53.402664102Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 23:47:53.402775 containerd[1526]: time="2025-09-04T23:47:53.402677116Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 23:47:53.402775 containerd[1526]: time="2025-09-04T23:47:53.402687445Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 23:47:53.402775 containerd[1526]: time="2025-09-04T23:47:53.402696933Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 23:47:53.402775 containerd[1526]: time="2025-09-04T23:47:53.402705419Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 23:47:53.402775 containerd[1526]: time="2025-09-04T23:47:53.402715428Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 23:47:53.402775 containerd[1526]: time="2025-09-04T23:47:53.402725146Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 23:47:53.402775 containerd[1526]: time="2025-09-04T23:47:53.402735195Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 23:47:53.402775 containerd[1526]: time="2025-09-04T23:47:53.402748981Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.402757056Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403142659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403157276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403166885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403176462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403185600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403195248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403205096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403224442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403234461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403246464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403274566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403285847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403294534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403371 containerd[1526]: time="2025-09-04T23:47:53.403305394Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 23:47:53.403609 containerd[1526]: time="2025-09-04T23:47:53.403321244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403609 containerd[1526]: time="2025-09-04T23:47:53.403330702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.403609 containerd[1526]: time="2025-09-04T23:47:53.403339087Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 23:47:53.404279 containerd[1526]: time="2025-09-04T23:47:53.403856929Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 23:47:53.404279 containerd[1526]: time="2025-09-04T23:47:53.403879470Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 23:47:53.404279 containerd[1526]: time="2025-09-04T23:47:53.403888478Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 23:47:53.404279 containerd[1526]: time="2025-09-04T23:47:53.403946216Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 23:47:53.404279 containerd[1526]: time="2025-09-04T23:47:53.403957136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.404279 containerd[1526]: time="2025-09-04T23:47:53.403967365Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 23:47:53.404279 containerd[1526]: time="2025-09-04T23:47:53.403975210Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 23:47:53.404279 containerd[1526]: time="2025-09-04T23:47:53.403983586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 23:47:53.404414 containerd[1526]: time="2025-09-04T23:47:53.404187809Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 23:47:53.404414 containerd[1526]: time="2025-09-04T23:47:53.404223656Z" level=info msg="Connect containerd service"
Sep 4 23:47:53.404414 containerd[1526]: time="2025-09-04T23:47:53.404243273Z" level=info msg="using legacy CRI server"
Sep 4 23:47:53.404414 containerd[1526]: time="2025-09-04T23:47:53.404248022Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 23:47:53.404793 containerd[1526]: time="2025-09-04T23:47:53.404779058Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 23:47:53.407818 containerd[1526]: time="2025-09-04T23:47:53.405546356Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 23:47:53.407818 containerd[1526]: time="2025-09-04T23:47:53.405733657Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 23:47:53.407818 containerd[1526]: time="2025-09-04T23:47:53.405766279Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 23:47:53.407818 containerd[1526]: time="2025-09-04T23:47:53.405792358Z" level=info msg="Start subscribing containerd event"
Sep 4 23:47:53.407818 containerd[1526]: time="2025-09-04T23:47:53.405818877Z" level=info msg="Start recovering state"
Sep 4 23:47:53.407818 containerd[1526]: time="2025-09-04T23:47:53.405860475Z" level=info msg="Start event monitor"
Sep 4 23:47:53.407818 containerd[1526]: time="2025-09-04T23:47:53.405868119Z" level=info msg="Start snapshots syncer"
Sep 4 23:47:53.407818 containerd[1526]: time="2025-09-04T23:47:53.405874552Z" level=info msg="Start cni network conf syncer for default"
Sep 4 23:47:53.407818 containerd[1526]: time="2025-09-04T23:47:53.405880733Z" level=info msg="Start streaming server"
Sep 4 23:47:53.405983 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 23:47:53.410295 containerd[1526]: time="2025-09-04T23:47:53.410246890Z" level=info msg="containerd successfully booted in 0.059679s"
Sep 4 23:47:53.416974 locksmithd[1542]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 23:47:53.431547 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 23:47:53.442952 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 23:47:53.450336 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 23:47:53.450630 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 23:47:53.461886 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 23:47:53.470621 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 23:47:53.482834 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 23:47:53.487525 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 23:47:53.487935 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 23:47:53.641646 tar[1509]: linux-amd64/README.md
Sep 4 23:47:53.653380 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 23:47:53.748591 systemd-networkd[1423]: eth0: Gained IPv6LL
Sep 4 23:47:53.749438 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection.
Sep 4 23:47:53.750967 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 23:47:53.753546 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 23:47:53.765616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:47:53.769578 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 23:47:53.796611 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 23:47:53.940415 systemd-networkd[1423]: eth1: Gained IPv6LL
Sep 4 23:47:53.941179 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection.
Sep 4 23:47:54.921910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:47:54.926048 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 23:47:54.932839 systemd[1]: Startup finished in 1.226s (kernel) + 5.768s (initrd) + 4.484s (userspace) = 11.479s.
Sep 4 23:47:54.934787 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:47:55.665395 kubelet[1621]: E0904 23:47:55.665332 1621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:47:55.667724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:47:55.667862 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:47:55.668157 systemd[1]: kubelet.service: Consumed 1.149s CPU time, 265.5M memory peak.
Sep 4 23:48:03.992248 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 23:48:04.000590 systemd[1]: Started sshd@0-46.62.204.39:22-139.178.68.195:53430.service - OpenSSH per-connection server daemon (139.178.68.195:53430).
Sep 4 23:48:04.993058 sshd[1633]: Accepted publickey for core from 139.178.68.195 port 53430 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ
Sep 4 23:48:04.995162 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:05.008226 systemd-logind[1502]: New session 1 of user core.
Sep 4 23:48:05.009551 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 23:48:05.021932 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 23:48:05.034033 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 23:48:05.040849 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 23:48:05.045208 (systemd)[1637]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 23:48:05.047908 systemd-logind[1502]: New session c1 of user core.
Sep 4 23:48:05.182312 systemd[1637]: Queued start job for default target default.target.
Sep 4 23:48:05.189028 systemd[1637]: Created slice app.slice - User Application Slice.
Sep 4 23:48:05.189052 systemd[1637]: Reached target paths.target - Paths.
Sep 4 23:48:05.189084 systemd[1637]: Reached target timers.target - Timers.
Sep 4 23:48:05.190135 systemd[1637]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 23:48:05.200294 systemd[1637]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 23:48:05.200418 systemd[1637]: Reached target sockets.target - Sockets.
Sep 4 23:48:05.200451 systemd[1637]: Reached target basic.target - Basic System.
Sep 4 23:48:05.200477 systemd[1637]: Reached target default.target - Main User Target.
Sep 4 23:48:05.200494 systemd[1637]: Startup finished in 144ms.
Sep 4 23:48:05.200839 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 23:48:05.202817 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 23:48:05.918332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:48:05.923610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:48:05.933375 systemd[1]: Started sshd@1-46.62.204.39:22-139.178.68.195:53432.service - OpenSSH per-connection server daemon (139.178.68.195:53432).
Sep 4 23:48:06.019961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:48:06.024640 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:48:06.073898 kubelet[1658]: E0904 23:48:06.073713 1658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:48:06.077526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:48:06.077693 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:48:06.078015 systemd[1]: kubelet.service: Consumed 138ms CPU time, 113.4M memory peak.
Sep 4 23:48:07.040509 sshd[1651]: Accepted publickey for core from 139.178.68.195 port 53432 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ
Sep 4 23:48:07.041894 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:07.046771 systemd-logind[1502]: New session 2 of user core.
Sep 4 23:48:07.056392 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 23:48:07.800958 sshd[1665]: Connection closed by 139.178.68.195 port 53432
Sep 4 23:48:07.801522 sshd-session[1651]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:07.804092 systemd[1]: sshd@1-46.62.204.39:22-139.178.68.195:53432.service: Deactivated successfully.
Sep 4 23:48:07.805632 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 23:48:07.806765 systemd-logind[1502]: Session 2 logged out. Waiting for processes to exit.
Sep 4 23:48:07.807794 systemd-logind[1502]: Removed session 2.
Sep 4 23:48:07.955474 systemd[1]: Started sshd@2-46.62.204.39:22-139.178.68.195:53444.service - OpenSSH per-connection server daemon (139.178.68.195:53444).
Sep 4 23:48:08.942334 sshd[1671]: Accepted publickey for core from 139.178.68.195 port 53444 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:48:08.944573 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:08.950004 systemd-logind[1502]: New session 3 of user core. Sep 4 23:48:08.959428 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 23:48:09.622146 sshd[1673]: Connection closed by 139.178.68.195 port 53444 Sep 4 23:48:09.622949 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:09.625745 systemd[1]: sshd@2-46.62.204.39:22-139.178.68.195:53444.service: Deactivated successfully. Sep 4 23:48:09.627484 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 23:48:09.628776 systemd-logind[1502]: Session 3 logged out. Waiting for processes to exit. Sep 4 23:48:09.629960 systemd-logind[1502]: Removed session 3. Sep 4 23:48:09.799546 systemd[1]: Started sshd@3-46.62.204.39:22-139.178.68.195:48086.service - OpenSSH per-connection server daemon (139.178.68.195:48086). Sep 4 23:48:10.786359 sshd[1679]: Accepted publickey for core from 139.178.68.195 port 48086 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:48:10.787635 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:10.792288 systemd-logind[1502]: New session 4 of user core. Sep 4 23:48:10.801428 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 23:48:11.473490 sshd[1681]: Connection closed by 139.178.68.195 port 48086 Sep 4 23:48:11.474125 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:11.477607 systemd[1]: sshd@3-46.62.204.39:22-139.178.68.195:48086.service: Deactivated successfully. Sep 4 23:48:11.479809 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 23:48:11.481518 systemd-logind[1502]: Session 4 logged out. 
Waiting for processes to exit. Sep 4 23:48:11.483183 systemd-logind[1502]: Removed session 4. Sep 4 23:48:11.651485 systemd[1]: Started sshd@4-46.62.204.39:22-139.178.68.195:48102.service - OpenSSH per-connection server daemon (139.178.68.195:48102). Sep 4 23:48:12.640991 sshd[1687]: Accepted publickey for core from 139.178.68.195 port 48102 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:48:12.643079 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:12.648392 systemd-logind[1502]: New session 5 of user core. Sep 4 23:48:12.658555 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 23:48:13.183012 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 23:48:13.183479 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:48:13.206511 sudo[1690]: pam_unix(sudo:session): session closed for user root Sep 4 23:48:13.367207 sshd[1689]: Connection closed by 139.178.68.195 port 48102 Sep 4 23:48:13.368187 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:13.371997 systemd-logind[1502]: Session 5 logged out. Waiting for processes to exit. Sep 4 23:48:13.372832 systemd[1]: sshd@4-46.62.204.39:22-139.178.68.195:48102.service: Deactivated successfully. Sep 4 23:48:13.374719 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 23:48:13.375849 systemd-logind[1502]: Removed session 5. Sep 4 23:48:13.579535 systemd[1]: Started sshd@5-46.62.204.39:22-139.178.68.195:48112.service - OpenSSH per-connection server daemon (139.178.68.195:48112). 
Sep 4 23:48:14.679000 sshd[1696]: Accepted publickey for core from 139.178.68.195 port 48112 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:48:14.680538 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:14.685802 systemd-logind[1502]: New session 6 of user core. Sep 4 23:48:14.700460 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 23:48:15.259101 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 23:48:15.259390 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:48:15.262707 sudo[1700]: pam_unix(sudo:session): session closed for user root Sep 4 23:48:15.267107 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 23:48:15.267372 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:48:15.287751 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:48:15.325776 augenrules[1722]: No rules Sep 4 23:48:15.327807 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:48:15.328126 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:48:15.330544 sudo[1699]: pam_unix(sudo:session): session closed for user root Sep 4 23:48:15.509623 sshd[1698]: Connection closed by 139.178.68.195 port 48112 Sep 4 23:48:15.510666 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:15.513377 systemd[1]: sshd@5-46.62.204.39:22-139.178.68.195:48112.service: Deactivated successfully. Sep 4 23:48:15.514793 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 23:48:15.516101 systemd-logind[1502]: Session 6 logged out. Waiting for processes to exit. Sep 4 23:48:15.517543 systemd-logind[1502]: Removed session 6. 
Sep 4 23:48:15.669704 systemd[1]: Started sshd@6-46.62.204.39:22-139.178.68.195:48124.service - OpenSSH per-connection server daemon (139.178.68.195:48124). Sep 4 23:48:16.328423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 23:48:16.333637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:48:16.429040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:16.431741 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:48:16.462958 kubelet[1741]: E0904 23:48:16.462902 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:48:16.465826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:48:16.465947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:48:16.466384 systemd[1]: kubelet.service: Consumed 125ms CPU time, 112.5M memory peak. Sep 4 23:48:16.667085 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 48124 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:48:16.668198 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:16.673972 systemd-logind[1502]: New session 7 of user core. Sep 4 23:48:16.683440 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 4 23:48:17.191925 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 23:48:17.192242 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:48:17.491598 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 23:48:17.491651 (dockerd)[1766]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 23:48:17.741199 dockerd[1766]: time="2025-09-04T23:48:17.741119644Z" level=info msg="Starting up" Sep 4 23:48:17.833335 dockerd[1766]: time="2025-09-04T23:48:17.833233427Z" level=info msg="Loading containers: start." Sep 4 23:48:17.978310 kernel: Initializing XFRM netlink socket Sep 4 23:48:18.003801 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Sep 4 23:48:18.061081 systemd-networkd[1423]: docker0: Link UP Sep 4 23:48:18.080747 dockerd[1766]: time="2025-09-04T23:48:18.080688921Z" level=info msg="Loading containers: done." Sep 4 23:48:18.094475 dockerd[1766]: time="2025-09-04T23:48:18.094358312Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 23:48:18.094658 dockerd[1766]: time="2025-09-04T23:48:18.094471074Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 4 23:48:18.094658 dockerd[1766]: time="2025-09-04T23:48:18.094605055Z" level=info msg="Daemon has completed initialization" Sep 4 23:48:18.126993 dockerd[1766]: time="2025-09-04T23:48:18.126944124Z" level=info msg="API listen on /run/docker.sock" Sep 4 23:48:18.127072 systemd[1]: Started docker.service - Docker Application Container Engine. 
Sep 4 23:48:19.043598 systemd-timesyncd[1417]: Contacted time server 185.252.140.126:123 (2.flatcar.pool.ntp.org). Sep 4 23:48:19.043657 systemd-timesyncd[1417]: Initial clock synchronization to Thu 2025-09-04 23:48:19.043052 UTC. Sep 4 23:48:19.043696 systemd-resolved[1368]: Clock change detected. Flushing caches. Sep 4 23:48:20.018770 containerd[1526]: time="2025-09-04T23:48:20.018723129Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 4 23:48:20.593206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3606337867.mount: Deactivated successfully. Sep 4 23:48:21.533778 containerd[1526]: time="2025-09-04T23:48:21.533725402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:21.534926 containerd[1526]: time="2025-09-04T23:48:21.534898883Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800781" Sep 4 23:48:21.535284 containerd[1526]: time="2025-09-04T23:48:21.535246735Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:21.538334 containerd[1526]: time="2025-09-04T23:48:21.538301553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:21.539509 containerd[1526]: time="2025-09-04T23:48:21.539362713Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 1.520600571s" Sep 4 
23:48:21.539509 containerd[1526]: time="2025-09-04T23:48:21.539410583Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 4 23:48:21.540596 containerd[1526]: time="2025-09-04T23:48:21.540566951Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 4 23:48:22.651199 containerd[1526]: time="2025-09-04T23:48:22.651134057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:22.652239 containerd[1526]: time="2025-09-04T23:48:22.652119125Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784150" Sep 4 23:48:22.653027 containerd[1526]: time="2025-09-04T23:48:22.652932911Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:22.655876 containerd[1526]: time="2025-09-04T23:48:22.655056883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:22.655876 containerd[1526]: time="2025-09-04T23:48:22.655766374Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.115169887s" Sep 4 23:48:22.655876 containerd[1526]: time="2025-09-04T23:48:22.655787152Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 4 23:48:22.656296 containerd[1526]: time="2025-09-04T23:48:22.656200968Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 4 23:48:23.630873 containerd[1526]: time="2025-09-04T23:48:23.630809629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:23.631993 containerd[1526]: time="2025-09-04T23:48:23.631952953Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175058" Sep 4 23:48:23.632359 containerd[1526]: time="2025-09-04T23:48:23.632306526Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:23.634751 containerd[1526]: time="2025-09-04T23:48:23.634711215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:23.636517 containerd[1526]: time="2025-09-04T23:48:23.635811729Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 979.438287ms" Sep 4 23:48:23.636517 containerd[1526]: time="2025-09-04T23:48:23.635842907Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 4 23:48:23.636850 containerd[1526]: 
time="2025-09-04T23:48:23.636815571Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 4 23:48:24.612066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2721716421.mount: Deactivated successfully. Sep 4 23:48:24.930804 containerd[1526]: time="2025-09-04T23:48:24.930751072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:24.931660 containerd[1526]: time="2025-09-04T23:48:24.931618098Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897198" Sep 4 23:48:24.932388 containerd[1526]: time="2025-09-04T23:48:24.932340071Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:24.934830 containerd[1526]: time="2025-09-04T23:48:24.933988392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:24.934830 containerd[1526]: time="2025-09-04T23:48:24.934518917Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.297674431s" Sep 4 23:48:24.934830 containerd[1526]: time="2025-09-04T23:48:24.934539946Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 4 23:48:24.935050 containerd[1526]: time="2025-09-04T23:48:24.935036407Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" 
Sep 4 23:48:25.420939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262542262.mount: Deactivated successfully. Sep 4 23:48:26.335138 containerd[1526]: time="2025-09-04T23:48:26.335084861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:26.336168 containerd[1526]: time="2025-09-04T23:48:26.336094173Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Sep 4 23:48:26.337028 containerd[1526]: time="2025-09-04T23:48:26.336910484Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:26.339243 containerd[1526]: time="2025-09-04T23:48:26.339201439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:26.340853 containerd[1526]: time="2025-09-04T23:48:26.340172530Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.405063907s" Sep 4 23:48:26.340853 containerd[1526]: time="2025-09-04T23:48:26.340202877Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 4 23:48:26.341147 containerd[1526]: time="2025-09-04T23:48:26.341107514Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 23:48:26.774480 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1844353468.mount: Deactivated successfully. Sep 4 23:48:26.779244 containerd[1526]: time="2025-09-04T23:48:26.779207436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:26.780255 containerd[1526]: time="2025-09-04T23:48:26.780175180Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Sep 4 23:48:26.781033 containerd[1526]: time="2025-09-04T23:48:26.780659909Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:26.782733 containerd[1526]: time="2025-09-04T23:48:26.782689434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:26.784484 containerd[1526]: time="2025-09-04T23:48:26.783681244Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 442.544837ms" Sep 4 23:48:26.784484 containerd[1526]: time="2025-09-04T23:48:26.783709317Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 23:48:26.784484 containerd[1526]: time="2025-09-04T23:48:26.784382509Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 4 23:48:27.344351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 23:48:27.351164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 4 23:48:27.357401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295751914.mount: Deactivated successfully. Sep 4 23:48:27.490177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:27.490596 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:48:27.554246 kubelet[2099]: E0904 23:48:27.554159 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:48:27.556751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:48:27.556922 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:48:27.557467 systemd[1]: kubelet.service: Consumed 135ms CPU time, 108.4M memory peak. 
Sep 4 23:48:28.907186 containerd[1526]: time="2025-09-04T23:48:28.907122774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:28.908717 containerd[1526]: time="2025-09-04T23:48:28.908664213Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682132" Sep 4 23:48:28.909094 containerd[1526]: time="2025-09-04T23:48:28.909047322Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:28.913030 containerd[1526]: time="2025-09-04T23:48:28.912505647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:28.914737 containerd[1526]: time="2025-09-04T23:48:28.913701459Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.129296467s" Sep 4 23:48:28.914737 containerd[1526]: time="2025-09-04T23:48:28.913736215Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 4 23:48:31.451266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:31.451499 systemd[1]: kubelet.service: Consumed 135ms CPU time, 108.4M memory peak. Sep 4 23:48:31.467600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:48:31.508651 systemd[1]: Reload requested from client PID 2176 ('systemctl') (unit session-7.scope)... 
Sep 4 23:48:31.508668 systemd[1]: Reloading... Sep 4 23:48:31.629051 zram_generator::config[2219]: No configuration found. Sep 4 23:48:31.748801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:48:31.844610 systemd[1]: Reloading finished in 335 ms. Sep 4 23:48:31.887782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:31.892434 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:48:31.894042 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:48:31.894359 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:48:31.894755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:31.894801 systemd[1]: kubelet.service: Consumed 112ms CPU time, 99.4M memory peak. Sep 4 23:48:31.901463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:48:31.991655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:32.000733 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:48:32.036964 kubelet[2278]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:48:32.036964 kubelet[2278]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 4 23:48:32.036964 kubelet[2278]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:48:32.037393 kubelet[2278]: I0904 23:48:32.037021 2278 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:48:32.217356 kubelet[2278]: I0904 23:48:32.217312 2278 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:48:32.217356 kubelet[2278]: I0904 23:48:32.217342 2278 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:48:32.217592 kubelet[2278]: I0904 23:48:32.217579 2278 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:48:32.269988 kubelet[2278]: E0904 23:48:32.268764 2278 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://46.62.204.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.62.204.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:48:32.269988 kubelet[2278]: I0904 23:48:32.269898 2278 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:48:32.281412 kubelet[2278]: E0904 23:48:32.281362 2278 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:48:32.281412 kubelet[2278]: I0904 23:48:32.281412 2278 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 4 23:48:32.286301 kubelet[2278]: I0904 23:48:32.286271 2278 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 23:48:32.289575 kubelet[2278]: I0904 23:48:32.289524 2278 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:48:32.289760 kubelet[2278]: I0904 23:48:32.289558 2278 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-2-n-de0727ed16","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":nul
l,"CgroupVersion":2} Sep 4 23:48:32.291949 kubelet[2278]: I0904 23:48:32.291933 2278 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:48:32.291995 kubelet[2278]: I0904 23:48:32.291953 2278 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:48:32.293127 kubelet[2278]: I0904 23:48:32.293104 2278 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:48:32.296968 kubelet[2278]: I0904 23:48:32.296942 2278 kubelet.go:446] "Attempting to sync node with API server" Sep 4 23:48:32.296968 kubelet[2278]: I0904 23:48:32.296969 2278 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:48:32.297100 kubelet[2278]: I0904 23:48:32.296985 2278 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:48:32.297100 kubelet[2278]: I0904 23:48:32.296995 2278 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:48:32.308535 kubelet[2278]: I0904 23:48:32.307940 2278 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:48:32.311995 kubelet[2278]: I0904 23:48:32.311969 2278 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:48:32.312601 kubelet[2278]: W0904 23:48:32.312570 2278 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 4 23:48:32.313074 kubelet[2278]: I0904 23:48:32.313049 2278 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:48:32.313116 kubelet[2278]: I0904 23:48:32.313083 2278 server.go:1287] "Started kubelet" Sep 4 23:48:32.313233 kubelet[2278]: W0904 23:48:32.313193 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.62.204.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.62.204.39:6443: connect: connection refused Sep 4 23:48:32.313269 kubelet[2278]: E0904 23:48:32.313239 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.62.204.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.204.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:48:32.313328 kubelet[2278]: W0904 23:48:32.313295 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.62.204.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-n-de0727ed16&limit=500&resourceVersion=0": dial tcp 46.62.204.39:6443: connect: connection refused Sep 4 23:48:32.314643 kubelet[2278]: E0904 23:48:32.313327 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.62.204.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-n-de0727ed16&limit=500&resourceVersion=0\": dial tcp 46.62.204.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:48:32.315754 kubelet[2278]: I0904 23:48:32.315742 2278 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:48:32.315924 kubelet[2278]: I0904 23:48:32.315891 2278 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:48:32.317470 kubelet[2278]: I0904 
23:48:32.316725 2278 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:48:32.321525 kubelet[2278]: I0904 23:48:32.321509 2278 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:48:32.321658 kubelet[2278]: I0904 23:48:32.321607 2278 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:48:32.322189 kubelet[2278]: I0904 23:48:32.321806 2278 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:48:32.323576 kubelet[2278]: I0904 23:48:32.323563 2278 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:48:32.324319 kubelet[2278]: E0904 23:48:32.324300 2278 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-n-de0727ed16\" not found" Sep 4 23:48:32.332027 kubelet[2278]: I0904 23:48:32.331996 2278 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:48:32.332178 kubelet[2278]: I0904 23:48:32.332161 2278 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:48:32.332448 kubelet[2278]: I0904 23:48:32.332423 2278 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:48:32.332532 kubelet[2278]: I0904 23:48:32.332513 2278 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:48:32.332674 kubelet[2278]: E0904 23:48:32.332640 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.204.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-n-de0727ed16?timeout=10s\": dial tcp 46.62.204.39:6443: connect: connection refused" interval="200ms" Sep 4 23:48:32.333119 kubelet[2278]: W0904 23:48:32.333085 
2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://46.62.204.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.62.204.39:6443: connect: connection refused Sep 4 23:48:32.333160 kubelet[2278]: E0904 23:48:32.333130 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.62.204.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.204.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:48:32.335398 kubelet[2278]: E0904 23:48:32.333171 2278 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.62.204.39:6443/api/v1/namespaces/default/events\": dial tcp 46.62.204.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-2-n-de0727ed16.18623936d63a17e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-2-n-de0727ed16,UID:ci-4230-2-2-n-de0727ed16,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-2-n-de0727ed16,},FirstTimestamp:2025-09-04 23:48:32.313063399 +0000 UTC m=+0.309301613,LastTimestamp:2025-09-04 23:48:32.313063399 +0000 UTC m=+0.309301613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-2-n-de0727ed16,}" Sep 4 23:48:32.339580 kubelet[2278]: I0904 23:48:32.338908 2278 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:48:32.346102 kubelet[2278]: I0904 23:48:32.346057 2278 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:48:32.347212 kubelet[2278]: I0904 23:48:32.347180 2278 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 23:48:32.347212 kubelet[2278]: I0904 23:48:32.347202 2278 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:48:32.347280 kubelet[2278]: I0904 23:48:32.347217 2278 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 23:48:32.347280 kubelet[2278]: I0904 23:48:32.347223 2278 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:48:32.347280 kubelet[2278]: E0904 23:48:32.347260 2278 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:48:32.347474 kubelet[2278]: E0904 23:48:32.347450 2278 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:48:32.353961 kubelet[2278]: W0904 23:48:32.353830 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://46.62.204.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 46.62.204.39:6443: connect: connection refused Sep 4 23:48:32.353961 kubelet[2278]: E0904 23:48:32.353868 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://46.62.204.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.204.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:48:32.370667 kubelet[2278]: I0904 23:48:32.370616 2278 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:48:32.370667 kubelet[2278]: I0904 23:48:32.370642 2278 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:48:32.370667 kubelet[2278]: I0904 23:48:32.370655 2278 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:48:32.373064 kubelet[2278]: 
I0904 23:48:32.372841 2278 policy_none.go:49] "None policy: Start" Sep 4 23:48:32.373064 kubelet[2278]: I0904 23:48:32.372856 2278 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:48:32.373064 kubelet[2278]: I0904 23:48:32.372865 2278 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:48:32.378157 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 23:48:32.387675 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 23:48:32.391028 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 23:48:32.401265 kubelet[2278]: I0904 23:48:32.400758 2278 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:48:32.401265 kubelet[2278]: I0904 23:48:32.400910 2278 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:48:32.401265 kubelet[2278]: I0904 23:48:32.400922 2278 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:48:32.401265 kubelet[2278]: I0904 23:48:32.401128 2278 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:48:32.403936 kubelet[2278]: E0904 23:48:32.403903 2278 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:48:32.403992 kubelet[2278]: E0904 23:48:32.403976 2278 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-2-n-de0727ed16\" not found" Sep 4 23:48:32.467240 systemd[1]: Created slice kubepods-burstable-pod8cbee3914a56c0915a68f1dac9656bc2.slice - libcontainer container kubepods-burstable-pod8cbee3914a56c0915a68f1dac9656bc2.slice. 
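The `Created slice kubepods-burstable-pod….slice` entries above show the kubelet's systemd cgroup driver creating one slice per QoS class and per pod. A sketch of how those unit names appear to be derived, under the assumption that dashes in a pod UID are escaped to underscores (the static-pod UIDs in this log are dash-free config hashes, so that escaping is not actually exercised here):

```python
# Hedged sketch of the slice naming visible in this log. The "-" -> "_"
# escaping for pod UIDs and the guaranteed-QoS layout are assumptions based
# on the kubelet's systemd cgroup driver, not confirmed by these log lines.
def pod_slice(qos: str, uid: str) -> str:
    """Build the systemd slice name for a pod, given its QoS class and UID."""
    # guaranteed pods hang directly off kubepods.slice; burstable/besteffort
    # get an intermediate QoS slice, matching the slices created above
    base = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{base}-pod{uid.replace('-', '_')}.slice"
```

For the apiserver static pod in this log, `pod_slice("burstable", "8cbee3914a56c0915a68f1dac9656bc2")` reproduces the slice name systemd reports.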
Sep 4 23:48:32.479909 kubelet[2278]: E0904 23:48:32.479885 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-de0727ed16\" not found" node="ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.483294 systemd[1]: Created slice kubepods-burstable-podeb2a8aab8e63fe65cbd547abb55213d2.slice - libcontainer container kubepods-burstable-podeb2a8aab8e63fe65cbd547abb55213d2.slice. Sep 4 23:48:32.485536 kubelet[2278]: E0904 23:48:32.485247 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-de0727ed16\" not found" node="ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.487511 systemd[1]: Created slice kubepods-burstable-podae75a67538596f2525ee56079db8fc3d.slice - libcontainer container kubepods-burstable-podae75a67538596f2525ee56079db8fc3d.slice. Sep 4 23:48:32.489235 kubelet[2278]: E0904 23:48:32.489212 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-de0727ed16\" not found" node="ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.502802 kubelet[2278]: I0904 23:48:32.502756 2278 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.503174 kubelet[2278]: E0904 23:48:32.503145 2278 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.204.39:6443/api/v1/nodes\": dial tcp 46.62.204.39:6443: connect: connection refused" node="ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.533344 kubelet[2278]: E0904 23:48:32.533230 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.204.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-n-de0727ed16?timeout=10s\": dial tcp 46.62.204.39:6443: connect: connection refused" interval="400ms" Sep 4 23:48:32.633866 kubelet[2278]: I0904 23:48:32.633816 2278 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eb2a8aab8e63fe65cbd547abb55213d2-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-2-n-de0727ed16\" (UID: \"eb2a8aab8e63fe65cbd547abb55213d2\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.633866 kubelet[2278]: I0904 23:48:32.633850 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb2a8aab8e63fe65cbd547abb55213d2-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-2-n-de0727ed16\" (UID: \"eb2a8aab8e63fe65cbd547abb55213d2\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.633866 kubelet[2278]: I0904 23:48:32.633870 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae75a67538596f2525ee56079db8fc3d-kubeconfig\") pod \"kube-scheduler-ci-4230-2-2-n-de0727ed16\" (UID: \"ae75a67538596f2525ee56079db8fc3d\") " pod="kube-system/kube-scheduler-ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.634066 kubelet[2278]: I0904 23:48:32.633888 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb2a8aab8e63fe65cbd547abb55213d2-ca-certs\") pod \"kube-controller-manager-ci-4230-2-2-n-de0727ed16\" (UID: \"eb2a8aab8e63fe65cbd547abb55213d2\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.634066 kubelet[2278]: I0904 23:48:32.633905 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb2a8aab8e63fe65cbd547abb55213d2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-2-n-de0727ed16\" (UID: 
\"eb2a8aab8e63fe65cbd547abb55213d2\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.634066 kubelet[2278]: I0904 23:48:32.633921 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8cbee3914a56c0915a68f1dac9656bc2-ca-certs\") pod \"kube-apiserver-ci-4230-2-2-n-de0727ed16\" (UID: \"8cbee3914a56c0915a68f1dac9656bc2\") " pod="kube-system/kube-apiserver-ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.634066 kubelet[2278]: I0904 23:48:32.633937 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8cbee3914a56c0915a68f1dac9656bc2-k8s-certs\") pod \"kube-apiserver-ci-4230-2-2-n-de0727ed16\" (UID: \"8cbee3914a56c0915a68f1dac9656bc2\") " pod="kube-system/kube-apiserver-ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.634066 kubelet[2278]: I0904 23:48:32.633957 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8cbee3914a56c0915a68f1dac9656bc2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-2-n-de0727ed16\" (UID: \"8cbee3914a56c0915a68f1dac9656bc2\") " pod="kube-system/kube-apiserver-ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.634276 kubelet[2278]: I0904 23:48:32.633974 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb2a8aab8e63fe65cbd547abb55213d2-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-2-n-de0727ed16\" (UID: \"eb2a8aab8e63fe65cbd547abb55213d2\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.705600 kubelet[2278]: I0904 23:48:32.705568 2278 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.705887 kubelet[2278]: E0904 
23:48:32.705848 2278 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.204.39:6443/api/v1/nodes\": dial tcp 46.62.204.39:6443: connect: connection refused" node="ci-4230-2-2-n-de0727ed16" Sep 4 23:48:32.782917 containerd[1526]: time="2025-09-04T23:48:32.782859976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-2-n-de0727ed16,Uid:8cbee3914a56c0915a68f1dac9656bc2,Namespace:kube-system,Attempt:0,}" Sep 4 23:48:32.786267 containerd[1526]: time="2025-09-04T23:48:32.786181283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-2-n-de0727ed16,Uid:eb2a8aab8e63fe65cbd547abb55213d2,Namespace:kube-system,Attempt:0,}" Sep 4 23:48:32.790460 containerd[1526]: time="2025-09-04T23:48:32.790414080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-2-n-de0727ed16,Uid:ae75a67538596f2525ee56079db8fc3d,Namespace:kube-system,Attempt:0,}" Sep 4 23:48:32.934740 kubelet[2278]: E0904 23:48:32.934678 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.204.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-n-de0727ed16?timeout=10s\": dial tcp 46.62.204.39:6443: connect: connection refused" interval="800ms" Sep 4 23:48:33.108529 kubelet[2278]: I0904 23:48:33.108263 2278 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-n-de0727ed16" Sep 4 23:48:33.109127 kubelet[2278]: E0904 23:48:33.108639 2278 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.204.39:6443/api/v1/nodes\": dial tcp 46.62.204.39:6443: connect: connection refused" node="ci-4230-2-2-n-de0727ed16" Sep 4 23:48:33.217448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021427548.mount: Deactivated successfully. 
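The `Failed to ensure lease exists, will retry` entries report a retry interval that doubles on each failure: `200ms`, then `400ms`, then `800ms` above, then `1.6s` further down. A sketch reproducing that observed sequence as exponential backoff with factor 2; the cap value is an assumption, since this log never runs long enough to hit one:

```python
# Hedged sketch: the node-lease controller's retry interval in this log
# doubles per consecutive failure (200ms -> 400ms -> 800ms -> 1.6s).
# The 7s cap is an assumed upper bound, not observed in these entries.
def lease_retry_ms(initial_ms: int, failures: int, cap_ms: int = 7000) -> int:
    """Interval before the next lease-ensure attempt after `failures` failures."""
    return min(initial_ms * 2 ** failures, cap_ms)

observed = [lease_retry_ms(200, n) for n in range(4)]  # matches the log: 200, 400, 800, 1600
```

The retries stop once the static kube-apiserver pod comes up and the lease POST succeeds.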
Sep 4 23:48:33.224505 containerd[1526]: time="2025-09-04T23:48:33.223502024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:48:33.225500 containerd[1526]: time="2025-09-04T23:48:33.225453663Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:48:33.227458 containerd[1526]: time="2025-09-04T23:48:33.227406706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Sep 4 23:48:33.228165 containerd[1526]: time="2025-09-04T23:48:33.228096549Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:48:33.230160 containerd[1526]: time="2025-09-04T23:48:33.230119502Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:48:33.231966 containerd[1526]: time="2025-09-04T23:48:33.231818579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:48:33.231966 containerd[1526]: time="2025-09-04T23:48:33.231900332Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:48:33.234958 containerd[1526]: time="2025-09-04T23:48:33.234914594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:48:33.238028 
containerd[1526]: time="2025-09-04T23:48:33.236600706Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 453.633799ms" Sep 4 23:48:33.238566 containerd[1526]: time="2025-09-04T23:48:33.238534973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 448.05563ms" Sep 4 23:48:33.240997 containerd[1526]: time="2025-09-04T23:48:33.240964458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 454.714877ms" Sep 4 23:48:33.275696 kubelet[2278]: W0904 23:48:33.275303 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.62.204.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-n-de0727ed16&limit=500&resourceVersion=0": dial tcp 46.62.204.39:6443: connect: connection refused Sep 4 23:48:33.275696 kubelet[2278]: E0904 23:48:33.275407 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.62.204.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-n-de0727ed16&limit=500&resourceVersion=0\": dial tcp 46.62.204.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:48:33.317408 kubelet[2278]: 
W0904 23:48:33.317302 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.62.204.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.62.204.39:6443: connect: connection refused Sep 4 23:48:33.317408 kubelet[2278]: E0904 23:48:33.317362 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.62.204.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.204.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:48:33.361112 containerd[1526]: time="2025-09-04T23:48:33.359547703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:48:33.361112 containerd[1526]: time="2025-09-04T23:48:33.359629096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:48:33.361112 containerd[1526]: time="2025-09-04T23:48:33.359648723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:33.361112 containerd[1526]: time="2025-09-04T23:48:33.360024548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:33.362713 containerd[1526]: time="2025-09-04T23:48:33.362485332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:48:33.365262 containerd[1526]: time="2025-09-04T23:48:33.365145360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:48:33.365262 containerd[1526]: time="2025-09-04T23:48:33.365176599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:33.367662 containerd[1526]: time="2025-09-04T23:48:33.365518810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:33.368575 containerd[1526]: time="2025-09-04T23:48:33.367952844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:48:33.368575 containerd[1526]: time="2025-09-04T23:48:33.368024859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:48:33.368575 containerd[1526]: time="2025-09-04T23:48:33.368048704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:33.368575 containerd[1526]: time="2025-09-04T23:48:33.368132501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:33.389476 systemd[1]: Started cri-containerd-aeebfd954ef40b0e33ef736923c151b738ec9fc14f661760831a3feacf75140e.scope - libcontainer container aeebfd954ef40b0e33ef736923c151b738ec9fc14f661760831a3feacf75140e. Sep 4 23:48:33.395693 systemd[1]: Started cri-containerd-6126154b2d160e970fc0778a51ebad2ab805a86b5be35b61a37ce43039b837f6.scope - libcontainer container 6126154b2d160e970fc0778a51ebad2ab805a86b5be35b61a37ce43039b837f6. Sep 4 23:48:33.402118 systemd[1]: Started cri-containerd-c1b4afe6de5ff48fe00039d1ab92e6b54356ca34812c98936c74b19b48a79a99.scope - libcontainer container c1b4afe6de5ff48fe00039d1ab92e6b54356ca34812c98936c74b19b48a79a99. 
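Each `Started cri-containerd-….scope` line above is systemd creating a transient scope unit named after the 64-hex-character container ID. A sketch of that mapping, assuming the `cri-containerd-` prefix seen in this log is fixed for containerd's systemd cgroup integration:

```python
# Hedged sketch: derive the transient systemd scope unit for a container, as
# seen in the "Started cri-containerd-<id>.scope" entries above. The prefix
# is taken from this log; treating it as invariant is an assumption.
def container_scope(container_id: str) -> str:
    """Map a full 64-hex containerd container ID to its systemd scope name."""
    if len(container_id) != 64 or any(c not in "0123456789abcdef" for c in container_id):
        raise ValueError(f"not a full lowercase-hex container ID: {container_id!r}")
    return f"cri-containerd-{container_id}.scope"
```

This is handy for correlating kubelet/containerd entries with `systemd[1]` lines: the scope name embeds the same ID that later appears in the `StartContainer` messages.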
Sep 4 23:48:33.448231 containerd[1526]: time="2025-09-04T23:48:33.448200437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-2-n-de0727ed16,Uid:8cbee3914a56c0915a68f1dac9656bc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1b4afe6de5ff48fe00039d1ab92e6b54356ca34812c98936c74b19b48a79a99\"" Sep 4 23:48:33.457350 containerd[1526]: time="2025-09-04T23:48:33.456659608Z" level=info msg="CreateContainer within sandbox \"c1b4afe6de5ff48fe00039d1ab92e6b54356ca34812c98936c74b19b48a79a99\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 23:48:33.473278 containerd[1526]: time="2025-09-04T23:48:33.473204572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-2-n-de0727ed16,Uid:eb2a8aab8e63fe65cbd547abb55213d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6126154b2d160e970fc0778a51ebad2ab805a86b5be35b61a37ce43039b837f6\"" Sep 4 23:48:33.479031 containerd[1526]: time="2025-09-04T23:48:33.478948142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-2-n-de0727ed16,Uid:ae75a67538596f2525ee56079db8fc3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeebfd954ef40b0e33ef736923c151b738ec9fc14f661760831a3feacf75140e\"" Sep 4 23:48:33.483427 containerd[1526]: time="2025-09-04T23:48:33.483292047Z" level=info msg="CreateContainer within sandbox \"6126154b2d160e970fc0778a51ebad2ab805a86b5be35b61a37ce43039b837f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 23:48:33.486863 containerd[1526]: time="2025-09-04T23:48:33.486842865Z" level=info msg="CreateContainer within sandbox \"aeebfd954ef40b0e33ef736923c151b738ec9fc14f661760831a3feacf75140e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 23:48:33.490996 containerd[1526]: time="2025-09-04T23:48:33.490947742Z" level=info msg="CreateContainer within sandbox 
\"c1b4afe6de5ff48fe00039d1ab92e6b54356ca34812c98936c74b19b48a79a99\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7dc60a34abd2c6f0312049224fb01dfa0e91a45d17449b9b57b2cf48d380e253\"" Sep 4 23:48:33.491566 containerd[1526]: time="2025-09-04T23:48:33.491533831Z" level=info msg="StartContainer for \"7dc60a34abd2c6f0312049224fb01dfa0e91a45d17449b9b57b2cf48d380e253\"" Sep 4 23:48:33.504669 containerd[1526]: time="2025-09-04T23:48:33.504589211Z" level=info msg="CreateContainer within sandbox \"6126154b2d160e970fc0778a51ebad2ab805a86b5be35b61a37ce43039b837f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"824041afd08a458be4d02bab8897c4da815c07846010005ffd85aad11c0dd55f\"" Sep 4 23:48:33.505584 containerd[1526]: time="2025-09-04T23:48:33.505450085Z" level=info msg="StartContainer for \"824041afd08a458be4d02bab8897c4da815c07846010005ffd85aad11c0dd55f\"" Sep 4 23:48:33.507580 containerd[1526]: time="2025-09-04T23:48:33.507501512Z" level=info msg="CreateContainer within sandbox \"aeebfd954ef40b0e33ef736923c151b738ec9fc14f661760831a3feacf75140e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"896ada36bdae8a79273f035e3978150b7fe5b976671d5e79d085df4977973030\"" Sep 4 23:48:33.508061 containerd[1526]: time="2025-09-04T23:48:33.507868220Z" level=info msg="StartContainer for \"896ada36bdae8a79273f035e3978150b7fe5b976671d5e79d085df4977973030\"" Sep 4 23:48:33.523950 systemd[1]: Started cri-containerd-7dc60a34abd2c6f0312049224fb01dfa0e91a45d17449b9b57b2cf48d380e253.scope - libcontainer container 7dc60a34abd2c6f0312049224fb01dfa0e91a45d17449b9b57b2cf48d380e253. Sep 4 23:48:33.546132 systemd[1]: Started cri-containerd-824041afd08a458be4d02bab8897c4da815c07846010005ffd85aad11c0dd55f.scope - libcontainer container 824041afd08a458be4d02bab8897c4da815c07846010005ffd85aad11c0dd55f. 
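The pause-image pull messages earlier in this log report Go-style durations such as `453.633799ms` and `448.05563ms`. A sketch for normalizing those to seconds when comparing timings across entries; only the unit suffixes actually present in this log are handled, so full Go duration syntax is out of scope:

```python
# Hedged sketch: convert the Go-style duration strings from the pull
# messages in this log ("453.633799ms", "1.6s") to seconds. Suffixes other
# than "ms" and "s" (e.g. "us", "m") are deliberately not supported here.
def go_duration_to_seconds(d: str) -> float:
    """Parse a duration string with an ms or s suffix into float seconds."""
    # check "ms" before "s", since "453.633799ms" also ends with "s"
    for suffix, scale in (("ms", 1e-3), ("s", 1.0)):
        if d.endswith(suffix):
            return float(d[: -len(suffix)]) * scale
    raise ValueError(f"unsupported duration: {d!r}")
```

With this, the three pull durations above (~448-455 ms each) can be summed or compared directly, e.g. to confirm all three sandbox images resolved in well under a second.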
Sep 4 23:48:33.560166 systemd[1]: Started cri-containerd-896ada36bdae8a79273f035e3978150b7fe5b976671d5e79d085df4977973030.scope - libcontainer container 896ada36bdae8a79273f035e3978150b7fe5b976671d5e79d085df4977973030. Sep 4 23:48:33.583851 containerd[1526]: time="2025-09-04T23:48:33.583806541Z" level=info msg="StartContainer for \"7dc60a34abd2c6f0312049224fb01dfa0e91a45d17449b9b57b2cf48d380e253\" returns successfully" Sep 4 23:48:33.612772 containerd[1526]: time="2025-09-04T23:48:33.612055922Z" level=info msg="StartContainer for \"824041afd08a458be4d02bab8897c4da815c07846010005ffd85aad11c0dd55f\" returns successfully" Sep 4 23:48:33.631210 kubelet[2278]: W0904 23:48:33.631108 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://46.62.204.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 46.62.204.39:6443: connect: connection refused Sep 4 23:48:33.631435 kubelet[2278]: E0904 23:48:33.631378 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://46.62.204.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.204.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:48:33.646220 containerd[1526]: time="2025-09-04T23:48:33.646165602Z" level=info msg="StartContainer for \"896ada36bdae8a79273f035e3978150b7fe5b976671d5e79d085df4977973030\" returns successfully" Sep 4 23:48:33.735857 kubelet[2278]: E0904 23:48:33.735817 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.204.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-n-de0727ed16?timeout=10s\": dial tcp 46.62.204.39:6443: connect: connection refused" interval="1.6s" Sep 4 23:48:33.796306 kubelet[2278]: W0904 23:48:33.796241 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.CSIDriver: Get "https://46.62.204.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.62.204.39:6443: connect: connection refused
Sep 4 23:48:33.796528 kubelet[2278]: E0904 23:48:33.796496 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.62.204.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.204.39:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:48:33.910432 kubelet[2278]: I0904 23:48:33.910381 2278 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:34.379125 kubelet[2278]: E0904 23:48:34.378851 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-de0727ed16\" not found" node="ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:34.381124 kubelet[2278]: E0904 23:48:34.381099 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-de0727ed16\" not found" node="ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:34.384601 kubelet[2278]: E0904 23:48:34.384566 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-de0727ed16\" not found" node="ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:35.161648 kubelet[2278]: I0904 23:48:35.160617 2278 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:35.161648 kubelet[2278]: E0904 23:48:35.160648 2278 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230-2-2-n-de0727ed16\": node \"ci-4230-2-2-n-de0727ed16\" not found"
Sep 4 23:48:35.177325 kubelet[2278]: E0904 23:48:35.177275 2278 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-n-de0727ed16\" not found"
Sep 4 23:48:35.277641 kubelet[2278]: E0904 23:48:35.277585 2278 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-n-de0727ed16\" not found"
Sep 4 23:48:35.378612 kubelet[2278]: E0904 23:48:35.378528 2278 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-n-de0727ed16\" not found"
Sep 4 23:48:35.387533 kubelet[2278]: E0904 23:48:35.387246 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-de0727ed16\" not found" node="ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:35.387533 kubelet[2278]: E0904 23:48:35.387297 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-de0727ed16\" not found" node="ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:35.480045 kubelet[2278]: E0904 23:48:35.479635 2278 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-n-de0727ed16\" not found"
Sep 4 23:48:35.580603 kubelet[2278]: E0904 23:48:35.580541 2278 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-n-de0727ed16\" not found"
Sep 4 23:48:35.681420 kubelet[2278]: E0904 23:48:35.681358 2278 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-n-de0727ed16\" not found"
Sep 4 23:48:35.782565 kubelet[2278]: E0904 23:48:35.782448 2278 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-n-de0727ed16\" not found"
Sep 4 23:48:35.883303 kubelet[2278]: E0904 23:48:35.883248 2278 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-n-de0727ed16\" not found"
Sep 4 23:48:35.925155 kubelet[2278]: I0904 23:48:35.924952 2278 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:35.930731 kubelet[2278]: E0904 23:48:35.930684 2278 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-2-n-de0727ed16\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:35.930731 kubelet[2278]: I0904 23:48:35.930713 2278 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:35.932232 kubelet[2278]: E0904 23:48:35.932194 2278 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-2-n-de0727ed16\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:35.932232 kubelet[2278]: I0904 23:48:35.932220 2278 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:35.933602 kubelet[2278]: E0904 23:48:35.933566 2278 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-2-n-de0727ed16\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:36.302350 kubelet[2278]: I0904 23:48:36.302304 2278 apiserver.go:52] "Watching apiserver"
Sep 4 23:48:36.333132 kubelet[2278]: I0904 23:48:36.333066 2278 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 4 23:48:36.600921 kubelet[2278]: I0904 23:48:36.600656 2278 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:36.953118 systemd[1]: Reload requested from client PID 2552 ('systemctl') (unit session-7.scope)...
Sep 4 23:48:36.953146 systemd[1]: Reloading...
Sep 4 23:48:37.059039 zram_generator::config[2594]: No configuration found.
Sep 4 23:48:37.148763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:48:37.256350 systemd[1]: Reloading finished in 302 ms.
Sep 4 23:48:37.282329 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:48:37.307167 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 23:48:37.307470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:48:37.307541 systemd[1]: kubelet.service: Consumed 623ms CPU time, 126.5M memory peak.
Sep 4 23:48:37.314257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:48:37.415719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:48:37.418859 (kubelet)[2648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 23:48:37.488302 kubelet[2648]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:48:37.488302 kubelet[2648]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 23:48:37.488302 kubelet[2648]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:48:37.488781 kubelet[2648]: I0904 23:48:37.488383 2648 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 23:48:37.494394 kubelet[2648]: I0904 23:48:37.494375 2648 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 4 23:48:37.496053 kubelet[2648]: I0904 23:48:37.494491 2648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 23:48:37.496053 kubelet[2648]: I0904 23:48:37.494907 2648 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 4 23:48:37.498728 kubelet[2648]: I0904 23:48:37.498713 2648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 4 23:48:37.500639 kubelet[2648]: I0904 23:48:37.500621 2648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 23:48:37.505711 kubelet[2648]: E0904 23:48:37.505673 2648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 4 23:48:37.505711 kubelet[2648]: I0904 23:48:37.505704 2648 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 4 23:48:37.508763 kubelet[2648]: I0904 23:48:37.508684 2648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 23:48:37.511139 kubelet[2648]: I0904 23:48:37.508821 2648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 23:48:37.511139 kubelet[2648]: I0904 23:48:37.508841 2648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-2-n-de0727ed16","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 23:48:37.511139 kubelet[2648]: I0904 23:48:37.508972 2648 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 23:48:37.511139 kubelet[2648]: I0904 23:48:37.508978 2648 container_manager_linux.go:304] "Creating device plugin manager"
Sep 4 23:48:37.511494 kubelet[2648]: I0904 23:48:37.509027 2648 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:48:37.511494 kubelet[2648]: I0904 23:48:37.509117 2648 kubelet.go:446] "Attempting to sync node with API server"
Sep 4 23:48:37.511494 kubelet[2648]: I0904 23:48:37.509131 2648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 23:48:37.511494 kubelet[2648]: I0904 23:48:37.509144 2648 kubelet.go:352] "Adding apiserver pod source"
Sep 4 23:48:37.511494 kubelet[2648]: I0904 23:48:37.509153 2648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 23:48:37.512467 kubelet[2648]: I0904 23:48:37.512445 2648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 4 23:48:37.513235 kubelet[2648]: I0904 23:48:37.513221 2648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 23:48:37.520618 kubelet[2648]: I0904 23:48:37.520604 2648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 23:48:37.520707 kubelet[2648]: I0904 23:48:37.520699 2648 server.go:1287] "Started kubelet"
Sep 4 23:48:37.528966 kubelet[2648]: I0904 23:48:37.528946 2648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 23:48:37.539328 kubelet[2648]: I0904 23:48:37.539295 2648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 23:48:37.540205 kubelet[2648]: I0904 23:48:37.540185 2648 server.go:479] "Adding debug handlers to kubelet server"
Sep 4 23:48:37.540470 kubelet[2648]: I0904 23:48:37.540459 2648 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 23:48:37.541414 kubelet[2648]: I0904 23:48:37.540970 2648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 23:48:37.541414 kubelet[2648]: I0904 23:48:37.541181 2648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 23:48:37.541845 kubelet[2648]: I0904 23:48:37.541776 2648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 23:48:37.542954 kubelet[2648]: I0904 23:48:37.542855 2648 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 23:48:37.543035 kubelet[2648]: I0904 23:48:37.543026 2648 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 23:48:37.544798 kubelet[2648]: I0904 23:48:37.544779 2648 factory.go:221] Registration of the systemd container factory successfully
Sep 4 23:48:37.544885 kubelet[2648]: I0904 23:48:37.544859 2648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 23:48:37.546168 kubelet[2648]: E0904 23:48:37.545998 2648 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 23:48:37.546463 kubelet[2648]: I0904 23:48:37.546449 2648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 23:48:37.547737 kubelet[2648]: I0904 23:48:37.547727 2648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 23:48:37.547831 kubelet[2648]: I0904 23:48:37.547823 2648 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 4 23:48:37.547990 kubelet[2648]: I0904 23:48:37.547959 2648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 23:48:37.548236 kubelet[2648]: I0904 23:48:37.547969 2648 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 4 23:48:37.548236 kubelet[2648]: E0904 23:48:37.548113 2648 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 23:48:37.551549 kubelet[2648]: I0904 23:48:37.551500 2648 factory.go:221] Registration of the containerd container factory successfully
Sep 4 23:48:37.588330 kubelet[2648]: I0904 23:48:37.588310 2648 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 23:48:37.588330 kubelet[2648]: I0904 23:48:37.588325 2648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 23:48:37.588330 kubelet[2648]: I0904 23:48:37.588340 2648 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:48:37.588516 kubelet[2648]: I0904 23:48:37.588461 2648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 23:48:37.588516 kubelet[2648]: I0904 23:48:37.588470 2648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 23:48:37.588516 kubelet[2648]: I0904 23:48:37.588484 2648 policy_none.go:49] "None policy: Start"
Sep 4 23:48:37.588516 kubelet[2648]: I0904 23:48:37.588491 2648 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 23:48:37.588516 kubelet[2648]: I0904 23:48:37.588499 2648 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 23:48:37.588644 kubelet[2648]: I0904 23:48:37.588567 2648 state_mem.go:75] "Updated machine memory state"
Sep 4 23:48:37.592110 kubelet[2648]: I0904 23:48:37.592085 2648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 23:48:37.592214 kubelet[2648]: I0904 23:48:37.592199 2648 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 23:48:37.592245 kubelet[2648]: I0904 23:48:37.592214 2648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 23:48:37.592507 kubelet[2648]: I0904 23:48:37.592485 2648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 23:48:37.595029 kubelet[2648]: E0904 23:48:37.594786 2648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 23:48:37.648769 kubelet[2648]: I0904 23:48:37.648716 2648 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.650320 kubelet[2648]: I0904 23:48:37.650017 2648 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.650320 kubelet[2648]: I0904 23:48:37.650135 2648 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.656460 kubelet[2648]: E0904 23:48:37.656442 2648 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-2-n-de0727ed16\" already exists" pod="kube-system/kube-scheduler-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.699720 kubelet[2648]: I0904 23:48:37.699690 2648 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.708396 kubelet[2648]: I0904 23:48:37.708365 2648 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.708553 kubelet[2648]: I0904 23:48:37.708448 2648 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.745774 kubelet[2648]: I0904 23:48:37.745720 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8cbee3914a56c0915a68f1dac9656bc2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-2-n-de0727ed16\" (UID: \"8cbee3914a56c0915a68f1dac9656bc2\") " pod="kube-system/kube-apiserver-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.745774 kubelet[2648]: I0904 23:48:37.745766 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb2a8aab8e63fe65cbd547abb55213d2-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-2-n-de0727ed16\" (UID: \"eb2a8aab8e63fe65cbd547abb55213d2\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.745952 kubelet[2648]: I0904 23:48:37.745794 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8cbee3914a56c0915a68f1dac9656bc2-ca-certs\") pod \"kube-apiserver-ci-4230-2-2-n-de0727ed16\" (UID: \"8cbee3914a56c0915a68f1dac9656bc2\") " pod="kube-system/kube-apiserver-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.745952 kubelet[2648]: I0904 23:48:37.745814 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8cbee3914a56c0915a68f1dac9656bc2-k8s-certs\") pod \"kube-apiserver-ci-4230-2-2-n-de0727ed16\" (UID: \"8cbee3914a56c0915a68f1dac9656bc2\") " pod="kube-system/kube-apiserver-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.745952 kubelet[2648]: I0904 23:48:37.745838 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eb2a8aab8e63fe65cbd547abb55213d2-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-2-n-de0727ed16\" (UID: \"eb2a8aab8e63fe65cbd547abb55213d2\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.745952 kubelet[2648]: I0904 23:48:37.745857 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb2a8aab8e63fe65cbd547abb55213d2-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-2-n-de0727ed16\" (UID: \"eb2a8aab8e63fe65cbd547abb55213d2\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.745952 kubelet[2648]: I0904 23:48:37.745879 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb2a8aab8e63fe65cbd547abb55213d2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-2-n-de0727ed16\" (UID: \"eb2a8aab8e63fe65cbd547abb55213d2\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.746146 kubelet[2648]: I0904 23:48:37.745898 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae75a67538596f2525ee56079db8fc3d-kubeconfig\") pod \"kube-scheduler-ci-4230-2-2-n-de0727ed16\" (UID: \"ae75a67538596f2525ee56079db8fc3d\") " pod="kube-system/kube-scheduler-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.746146 kubelet[2648]: I0904 23:48:37.745927 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb2a8aab8e63fe65cbd547abb55213d2-ca-certs\") pod \"kube-controller-manager-ci-4230-2-2-n-de0727ed16\" (UID: \"eb2a8aab8e63fe65cbd547abb55213d2\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:37.957553 sudo[2679]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 4 23:48:37.957872 sudo[2679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 4 23:48:38.483224 sudo[2679]: pam_unix(sudo:session): session closed for user root
Sep 4 23:48:38.510972 kubelet[2648]: I0904 23:48:38.510926 2648 apiserver.go:52] "Watching apiserver"
Sep 4 23:48:38.543399 kubelet[2648]: I0904 23:48:38.543336 2648 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 4 23:48:38.571064 kubelet[2648]: I0904 23:48:38.571016 2648 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:38.571956 kubelet[2648]: I0904 23:48:38.571547 2648 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:38.583580 kubelet[2648]: E0904 23:48:38.583419 2648 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-2-n-de0727ed16\" already exists" pod="kube-system/kube-scheduler-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:38.583747 kubelet[2648]: E0904 23:48:38.583375 2648 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-2-n-de0727ed16\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-2-n-de0727ed16"
Sep 4 23:48:38.595284 kubelet[2648]: I0904 23:48:38.595087 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-2-n-de0727ed16" podStartSLOduration=1.595049723 podStartE2EDuration="1.595049723s" podCreationTimestamp="2025-09-04 23:48:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:48:38.594771912 +0000 UTC m=+1.166799579" watchObservedRunningTime="2025-09-04 23:48:38.595049723 +0000 UTC m=+1.167077390"
Sep 4 23:48:38.612657 kubelet[2648]: I0904 23:48:38.612530 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-2-n-de0727ed16" podStartSLOduration=1.612516445 podStartE2EDuration="1.612516445s" podCreationTimestamp="2025-09-04 23:48:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:48:38.604791169 +0000 UTC m=+1.176818836" watchObservedRunningTime="2025-09-04 23:48:38.612516445 +0000 UTC m=+1.184544112"
Sep 4 23:48:38.623088 kubelet[2648]: I0904 23:48:38.622897 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-2-n-de0727ed16" podStartSLOduration=2.622883444 podStartE2EDuration="2.622883444s" podCreationTimestamp="2025-09-04 23:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:48:38.613503886 +0000 UTC m=+1.185531554" watchObservedRunningTime="2025-09-04 23:48:38.622883444 +0000 UTC m=+1.194911111"
Sep 4 23:48:39.239979 update_engine[1504]: I20250904 23:48:39.239720 1504 update_attempter.cc:509] Updating boot flags...
Sep 4 23:48:39.285784 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2709)
Sep 4 23:48:39.384848 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2709)
Sep 4 23:48:39.842996 sudo[1749]: pam_unix(sudo:session): session closed for user root
Sep 4 23:48:40.002991 sshd[1748]: Connection closed by 139.178.68.195 port 48124
Sep 4 23:48:40.005581 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:40.011973 systemd[1]: sshd@6-46.62.204.39:22-139.178.68.195:48124.service: Deactivated successfully.
Sep 4 23:48:40.015939 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 23:48:40.016507 systemd[1]: session-7.scope: Consumed 3.868s CPU time, 211.5M memory peak.
Sep 4 23:48:40.018953 systemd-logind[1502]: Session 7 logged out. Waiting for processes to exit.
Sep 4 23:48:40.020747 systemd-logind[1502]: Removed session 7.
Sep 4 23:48:42.384202 kubelet[2648]: I0904 23:48:42.384164 2648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 23:48:42.384536 containerd[1526]: time="2025-09-04T23:48:42.384477947Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 23:48:42.384761 kubelet[2648]: I0904 23:48:42.384636 2648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 23:48:43.196780 systemd[1]: Created slice kubepods-besteffort-pod8e844381_acc8_4cc5_a46f_4ae066bd89b6.slice - libcontainer container kubepods-besteffort-pod8e844381_acc8_4cc5_a46f_4ae066bd89b6.slice.
Sep 4 23:48:43.217559 systemd[1]: Created slice kubepods-burstable-pod4c9e833a_5c7d_4c35_8178_81620790e118.slice - libcontainer container kubepods-burstable-pod4c9e833a_5c7d_4c35_8178_81620790e118.slice.
Sep 4 23:48:43.279981 kubelet[2648]: I0904 23:48:43.279918 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e844381-acc8-4cc5-a46f-4ae066bd89b6-xtables-lock\") pod \"kube-proxy-48s5g\" (UID: \"8e844381-acc8-4cc5-a46f-4ae066bd89b6\") " pod="kube-system/kube-proxy-48s5g"
Sep 4 23:48:43.279981 kubelet[2648]: I0904 23:48:43.279984 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-host-proc-sys-kernel\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.280162 kubelet[2648]: I0904 23:48:43.280044 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-host-proc-sys-net\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.280162 kubelet[2648]: I0904 23:48:43.280076 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e844381-acc8-4cc5-a46f-4ae066bd89b6-lib-modules\") pod \"kube-proxy-48s5g\" (UID: \"8e844381-acc8-4cc5-a46f-4ae066bd89b6\") " pod="kube-system/kube-proxy-48s5g"
Sep 4 23:48:43.280162 kubelet[2648]: I0904 23:48:43.280102 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-bpf-maps\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.280162 kubelet[2648]: I0904 23:48:43.280128 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-etc-cni-netd\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.280162 kubelet[2648]: I0904 23:48:43.280155 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e844381-acc8-4cc5-a46f-4ae066bd89b6-kube-proxy\") pod \"kube-proxy-48s5g\" (UID: \"8e844381-acc8-4cc5-a46f-4ae066bd89b6\") " pod="kube-system/kube-proxy-48s5g"
Sep 4 23:48:43.280281 kubelet[2648]: I0904 23:48:43.280180 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-cni-path\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.280281 kubelet[2648]: I0904 23:48:43.280204 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-xtables-lock\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.280281 kubelet[2648]: I0904 23:48:43.280234 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj98t\" (UniqueName: \"kubernetes.io/projected/4c9e833a-5c7d-4c35-8178-81620790e118-kube-api-access-qj98t\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.280281 kubelet[2648]: I0904 23:48:43.280260 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-lib-modules\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.280398 kubelet[2648]: I0904 23:48:43.280285 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-hostproc\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.280398 kubelet[2648]: I0904 23:48:43.280307 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c9e833a-5c7d-4c35-8178-81620790e118-clustermesh-secrets\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.280398 kubelet[2648]: I0904 23:48:43.280326 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hgsf\" (UniqueName: \"kubernetes.io/projected/8e844381-acc8-4cc5-a46f-4ae066bd89b6-kube-api-access-4hgsf\") pod \"kube-proxy-48s5g\" (UID: \"8e844381-acc8-4cc5-a46f-4ae066bd89b6\") " pod="kube-system/kube-proxy-48s5g"
Sep 4 23:48:43.280398 kubelet[2648]: I0904 23:48:43.280351 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-cilium-run\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.280398 kubelet[2648]: I0904 23:48:43.280371 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-cilium-cgroup\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.280398 kubelet[2648]: I0904 23:48:43.280391 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c9e833a-5c7d-4c35-8178-81620790e118-cilium-config-path\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.280636 kubelet[2648]: I0904 23:48:43.280412 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c9e833a-5c7d-4c35-8178-81620790e118-hubble-tls\") pod \"cilium-jp5bn\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " pod="kube-system/cilium-jp5bn"
Sep 4 23:48:43.435836 systemd[1]: Created slice kubepods-besteffort-podc0195b7e_6bfc_4d23_8193_e077f68c3ea6.slice - libcontainer container kubepods-besteffort-podc0195b7e_6bfc_4d23_8193_e077f68c3ea6.slice.
Sep 4 23:48:43.482668 kubelet[2648]: I0904 23:48:43.482529 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b2kj\" (UniqueName: \"kubernetes.io/projected/c0195b7e-6bfc-4d23-8193-e077f68c3ea6-kube-api-access-6b2kj\") pod \"cilium-operator-6c4d7847fc-bkjcm\" (UID: \"c0195b7e-6bfc-4d23-8193-e077f68c3ea6\") " pod="kube-system/cilium-operator-6c4d7847fc-bkjcm"
Sep 4 23:48:43.483295 kubelet[2648]: I0904 23:48:43.483136 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0195b7e-6bfc-4d23-8193-e077f68c3ea6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bkjcm\" (UID: \"c0195b7e-6bfc-4d23-8193-e077f68c3ea6\") " pod="kube-system/cilium-operator-6c4d7847fc-bkjcm"
Sep 4 23:48:43.511414 containerd[1526]: time="2025-09-04T23:48:43.511344537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-48s5g,Uid:8e844381-acc8-4cc5-a46f-4ae066bd89b6,Namespace:kube-system,Attempt:0,}"
Sep 4 23:48:43.529293 containerd[1526]: time="2025-09-04T23:48:43.529248618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jp5bn,Uid:4c9e833a-5c7d-4c35-8178-81620790e118,Namespace:kube-system,Attempt:0,}"
Sep 4 23:48:43.539283 containerd[1526]: time="2025-09-04T23:48:43.539146829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:48:43.539623 containerd[1526]: time="2025-09-04T23:48:43.539553962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:48:43.539842 containerd[1526]: time="2025-09-04T23:48:43.539736474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:48:43.540646 containerd[1526]: time="2025-09-04T23:48:43.540524231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:48:43.558550 containerd[1526]: time="2025-09-04T23:48:43.558478958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:48:43.558850 containerd[1526]: time="2025-09-04T23:48:43.558653195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:48:43.559555 containerd[1526]: time="2025-09-04T23:48:43.558696306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:48:43.559555 containerd[1526]: time="2025-09-04T23:48:43.559330104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:48:43.564265 systemd[1]: Started cri-containerd-c39a22f6a6c0b6aa2ffd81870d8f89b9969b0b503cf825bd9630b1d11298945e.scope - libcontainer container c39a22f6a6c0b6aa2ffd81870d8f89b9969b0b503cf825bd9630b1d11298945e.
Sep 4 23:48:43.589570 systemd[1]: Started cri-containerd-dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192.scope - libcontainer container dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192.
Sep 4 23:48:43.617232 containerd[1526]: time="2025-09-04T23:48:43.617184547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-48s5g,Uid:8e844381-acc8-4cc5-a46f-4ae066bd89b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c39a22f6a6c0b6aa2ffd81870d8f89b9969b0b503cf825bd9630b1d11298945e\""
Sep 4 23:48:43.622774 containerd[1526]: time="2025-09-04T23:48:43.622705259Z" level=info msg="CreateContainer within sandbox \"c39a22f6a6c0b6aa2ffd81870d8f89b9969b0b503cf825bd9630b1d11298945e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 23:48:43.627624 containerd[1526]: time="2025-09-04T23:48:43.627582755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jp5bn,Uid:4c9e833a-5c7d-4c35-8178-81620790e118,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\""
Sep 4 23:48:43.629779 containerd[1526]: time="2025-09-04T23:48:43.629746662Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 4 23:48:43.642075 containerd[1526]: time="2025-09-04T23:48:43.642025507Z" level=info msg="CreateContainer within sandbox \"c39a22f6a6c0b6aa2ffd81870d8f89b9969b0b503cf825bd9630b1d11298945e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a870b441cdd53c10c50e9a022659263103ec90700e1e4f7b75e3c9139db6ccd7\""
Sep 4 23:48:43.645120 containerd[1526]: time="2025-09-04T23:48:43.643764487Z" level=info msg="StartContainer for \"a870b441cdd53c10c50e9a022659263103ec90700e1e4f7b75e3c9139db6ccd7\""
Sep 4 23:48:43.669188 systemd[1]: Started cri-containerd-a870b441cdd53c10c50e9a022659263103ec90700e1e4f7b75e3c9139db6ccd7.scope - libcontainer container a870b441cdd53c10c50e9a022659263103ec90700e1e4f7b75e3c9139db6ccd7.
Sep 4 23:48:43.699866 containerd[1526]: time="2025-09-04T23:48:43.699813154Z" level=info msg="StartContainer for \"a870b441cdd53c10c50e9a022659263103ec90700e1e4f7b75e3c9139db6ccd7\" returns successfully"
Sep 4 23:48:43.743508 containerd[1526]: time="2025-09-04T23:48:43.743392870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bkjcm,Uid:c0195b7e-6bfc-4d23-8193-e077f68c3ea6,Namespace:kube-system,Attempt:0,}"
Sep 4 23:48:43.764605 containerd[1526]: time="2025-09-04T23:48:43.764223891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:48:43.764605 containerd[1526]: time="2025-09-04T23:48:43.764568176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:48:43.764862 containerd[1526]: time="2025-09-04T23:48:43.764602681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:48:43.764862 containerd[1526]: time="2025-09-04T23:48:43.764755478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:48:43.780171 systemd[1]: Started cri-containerd-a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0.scope - libcontainer container a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0.
Sep 4 23:48:43.821354 containerd[1526]: time="2025-09-04T23:48:43.821235975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bkjcm,Uid:c0195b7e-6bfc-4d23-8193-e077f68c3ea6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\""
Sep 4 23:48:47.401595 kubelet[2648]: I0904 23:48:47.401166 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-48s5g" podStartSLOduration=4.399966879 podStartE2EDuration="4.399966879s" podCreationTimestamp="2025-09-04 23:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:48:44.596502769 +0000 UTC m=+7.168530436" watchObservedRunningTime="2025-09-04 23:48:47.399966879 +0000 UTC m=+9.971994546"
Sep 4 23:48:47.827139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916951589.mount: Deactivated successfully.
Sep 4 23:48:49.298160 containerd[1526]: time="2025-09-04T23:48:49.298107386Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:48:49.299874 containerd[1526]: time="2025-09-04T23:48:49.299828774Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 4 23:48:49.300167 containerd[1526]: time="2025-09-04T23:48:49.300052113Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:48:49.302075 containerd[1526]: time="2025-09-04T23:48:49.302049237Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.672272638s"
Sep 4 23:48:49.302451 containerd[1526]: time="2025-09-04T23:48:49.302078472Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 4 23:48:49.303848 containerd[1526]: time="2025-09-04T23:48:49.303681147Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 23:48:49.305312 containerd[1526]: time="2025-09-04T23:48:49.305224481Z" level=info msg="CreateContainer within sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:48:49.397277 containerd[1526]: time="2025-09-04T23:48:49.397225703Z" level=info msg="CreateContainer within sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc\""
Sep 4 23:48:49.397735 containerd[1526]: time="2025-09-04T23:48:49.397715772Z" level=info msg="StartContainer for \"39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc\""
Sep 4 23:48:49.490214 systemd[1]: Started cri-containerd-39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc.scope - libcontainer container 39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc.
Sep 4 23:48:49.518624 containerd[1526]: time="2025-09-04T23:48:49.518581277Z" level=info msg="StartContainer for \"39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc\" returns successfully"
Sep 4 23:48:49.530917 systemd[1]: cri-containerd-39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc.scope: Deactivated successfully.
Sep 4 23:48:49.599176 containerd[1526]: time="2025-09-04T23:48:49.591207308Z" level=info msg="shim disconnected" id=39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc namespace=k8s.io
Sep 4 23:48:49.599176 containerd[1526]: time="2025-09-04T23:48:49.598879634Z" level=warning msg="cleaning up after shim disconnected" id=39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc namespace=k8s.io
Sep 4 23:48:49.599176 containerd[1526]: time="2025-09-04T23:48:49.598896236Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:50.391412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc-rootfs.mount: Deactivated successfully.
Sep 4 23:48:50.612590 containerd[1526]: time="2025-09-04T23:48:50.612212036Z" level=info msg="CreateContainer within sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:48:50.628959 containerd[1526]: time="2025-09-04T23:48:50.627466430Z" level=info msg="CreateContainer within sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b\""
Sep 4 23:48:50.628959 containerd[1526]: time="2025-09-04T23:48:50.628177764Z" level=info msg="StartContainer for \"ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b\""
Sep 4 23:48:50.629618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3463155101.mount: Deactivated successfully.
Sep 4 23:48:50.667203 systemd[1]: Started cri-containerd-ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b.scope - libcontainer container ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b.
Sep 4 23:48:50.693090 containerd[1526]: time="2025-09-04T23:48:50.692989682Z" level=info msg="StartContainer for \"ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b\" returns successfully"
Sep 4 23:48:50.705835 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:48:50.706273 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:48:50.706463 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:48:50.713588 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:48:50.714340 systemd[1]: cri-containerd-ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b.scope: Deactivated successfully.
Sep 4 23:48:50.735651 containerd[1526]: time="2025-09-04T23:48:50.735591836Z" level=info msg="shim disconnected" id=ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b namespace=k8s.io
Sep 4 23:48:50.736099 containerd[1526]: time="2025-09-04T23:48:50.736048753Z" level=warning msg="cleaning up after shim disconnected" id=ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b namespace=k8s.io
Sep 4 23:48:50.736099 containerd[1526]: time="2025-09-04T23:48:50.736062408Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:50.741146 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:48:51.393766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b-rootfs.mount: Deactivated successfully.
Sep 4 23:48:51.620617 containerd[1526]: time="2025-09-04T23:48:51.620393659Z" level=info msg="CreateContainer within sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:48:51.646337 containerd[1526]: time="2025-09-04T23:48:51.646147981Z" level=info msg="CreateContainer within sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5\""
Sep 4 23:48:51.651262 containerd[1526]: time="2025-09-04T23:48:51.648644793Z" level=info msg="StartContainer for \"f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5\""
Sep 4 23:48:51.708310 systemd[1]: Started cri-containerd-f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5.scope - libcontainer container f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5.
Sep 4 23:48:51.748072 containerd[1526]: time="2025-09-04T23:48:51.747992880Z" level=info msg="StartContainer for \"f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5\" returns successfully"
Sep 4 23:48:51.755120 systemd[1]: cri-containerd-f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5.scope: Deactivated successfully.
Sep 4 23:48:51.777264 containerd[1526]: time="2025-09-04T23:48:51.777204445Z" level=info msg="shim disconnected" id=f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5 namespace=k8s.io
Sep 4 23:48:51.777264 containerd[1526]: time="2025-09-04T23:48:51.777249249Z" level=warning msg="cleaning up after shim disconnected" id=f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5 namespace=k8s.io
Sep 4 23:48:51.777264 containerd[1526]: time="2025-09-04T23:48:51.777256783Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:52.391762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5-rootfs.mount: Deactivated successfully.
Sep 4 23:48:52.566996 containerd[1526]: time="2025-09-04T23:48:52.566947117Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:48:52.567880 containerd[1526]: time="2025-09-04T23:48:52.567710677Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 4 23:48:52.568809 containerd[1526]: time="2025-09-04T23:48:52.568680916Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:48:52.569658 containerd[1526]: time="2025-09-04T23:48:52.569627900Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.265921165s"
Sep 4 23:48:52.569700 containerd[1526]: time="2025-09-04T23:48:52.569659249Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 4 23:48:52.572048 containerd[1526]: time="2025-09-04T23:48:52.571285683Z" level=info msg="CreateContainer within sandbox \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 4 23:48:52.588503 containerd[1526]: time="2025-09-04T23:48:52.588419082Z" level=info msg="CreateContainer within sandbox \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\""
Sep 4 23:48:52.594575 containerd[1526]: time="2025-09-04T23:48:52.593882258Z" level=info msg="StartContainer for \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\""
Sep 4 23:48:52.623120 systemd[1]: Started cri-containerd-0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e.scope - libcontainer container 0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e.
Sep 4 23:48:52.627092 containerd[1526]: time="2025-09-04T23:48:52.627048009Z" level=info msg="CreateContainer within sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:48:52.649834 containerd[1526]: time="2025-09-04T23:48:52.649743180Z" level=info msg="CreateContainer within sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826\""
Sep 4 23:48:52.651385 containerd[1526]: time="2025-09-04T23:48:52.651342815Z" level=info msg="StartContainer for \"78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826\""
Sep 4 23:48:52.652238 containerd[1526]: time="2025-09-04T23:48:52.652216320Z" level=info msg="StartContainer for \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\" returns successfully"
Sep 4 23:48:52.687134 systemd[1]: Started cri-containerd-78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826.scope - libcontainer container 78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826.
Sep 4 23:48:52.716426 systemd[1]: cri-containerd-78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826.scope: Deactivated successfully.
Sep 4 23:48:52.719334 containerd[1526]: time="2025-09-04T23:48:52.719184043Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c9e833a_5c7d_4c35_8178_81620790e118.slice/cri-containerd-78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826.scope/memory.events\": no such file or directory"
Sep 4 23:48:52.721151 containerd[1526]: time="2025-09-04T23:48:52.721122406Z" level=info msg="StartContainer for \"78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826\" returns successfully"
Sep 4 23:48:52.743063 containerd[1526]: time="2025-09-04T23:48:52.742955931Z" level=info msg="shim disconnected" id=78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826 namespace=k8s.io
Sep 4 23:48:52.743063 containerd[1526]: time="2025-09-04T23:48:52.743020023Z" level=warning msg="cleaning up after shim disconnected" id=78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826 namespace=k8s.io
Sep 4 23:48:52.743063 containerd[1526]: time="2025-09-04T23:48:52.743030042Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:53.391883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2716836487.mount: Deactivated successfully.
Sep 4 23:48:53.631748 containerd[1526]: time="2025-09-04T23:48:53.631709203Z" level=info msg="CreateContainer within sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:48:53.661597 containerd[1526]: time="2025-09-04T23:48:53.661559970Z" level=info msg="CreateContainer within sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\""
Sep 4 23:48:53.662169 containerd[1526]: time="2025-09-04T23:48:53.662138290Z" level=info msg="StartContainer for \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\""
Sep 4 23:48:53.664380 kubelet[2648]: I0904 23:48:53.664236 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bkjcm" podStartSLOduration=1.916235305 podStartE2EDuration="10.664220693s" podCreationTimestamp="2025-09-04 23:48:43 +0000 UTC" firstStartedPulling="2025-09-04 23:48:43.822284991 +0000 UTC m=+6.394312669" lastFinishedPulling="2025-09-04 23:48:52.570270391 +0000 UTC m=+15.142298057" observedRunningTime="2025-09-04 23:48:53.664104014 +0000 UTC m=+16.236131681" watchObservedRunningTime="2025-09-04 23:48:53.664220693 +0000 UTC m=+16.236248360"
Sep 4 23:48:53.706275 systemd[1]: Started cri-containerd-db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd.scope - libcontainer container db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd.
Sep 4 23:48:53.738685 containerd[1526]: time="2025-09-04T23:48:53.738605771Z" level=info msg="StartContainer for \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\" returns successfully"
Sep 4 23:48:53.957912 kubelet[2648]: I0904 23:48:53.956999 2648 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 4 23:48:54.008117 systemd[1]: Created slice kubepods-burstable-pod60c2b503_79f0_4640_8726_5f9056887d04.slice - libcontainer container kubepods-burstable-pod60c2b503_79f0_4640_8726_5f9056887d04.slice.
Sep 4 23:48:54.022078 systemd[1]: Created slice kubepods-burstable-podb59a6832_e9b6_48e6_9b7d_6a6dd960433e.slice - libcontainer container kubepods-burstable-podb59a6832_e9b6_48e6_9b7d_6a6dd960433e.slice.
Sep 4 23:48:54.167175 kubelet[2648]: I0904 23:48:54.167035 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r86br\" (UniqueName: \"kubernetes.io/projected/60c2b503-79f0-4640-8726-5f9056887d04-kube-api-access-r86br\") pod \"coredns-668d6bf9bc-s2cc7\" (UID: \"60c2b503-79f0-4640-8726-5f9056887d04\") " pod="kube-system/coredns-668d6bf9bc-s2cc7"
Sep 4 23:48:54.167175 kubelet[2648]: I0904 23:48:54.167087 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b59a6832-e9b6-48e6-9b7d-6a6dd960433e-config-volume\") pod \"coredns-668d6bf9bc-qwx4q\" (UID: \"b59a6832-e9b6-48e6-9b7d-6a6dd960433e\") " pod="kube-system/coredns-668d6bf9bc-qwx4q"
Sep 4 23:48:54.167175 kubelet[2648]: I0904 23:48:54.167107 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sggkg\" (UniqueName: \"kubernetes.io/projected/b59a6832-e9b6-48e6-9b7d-6a6dd960433e-kube-api-access-sggkg\") pod \"coredns-668d6bf9bc-qwx4q\" (UID: \"b59a6832-e9b6-48e6-9b7d-6a6dd960433e\") " pod="kube-system/coredns-668d6bf9bc-qwx4q"
Sep 4 23:48:54.167175 kubelet[2648]: I0904 23:48:54.167123 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60c2b503-79f0-4640-8726-5f9056887d04-config-volume\") pod \"coredns-668d6bf9bc-s2cc7\" (UID: \"60c2b503-79f0-4640-8726-5f9056887d04\") " pod="kube-system/coredns-668d6bf9bc-s2cc7"
Sep 4 23:48:54.334357 containerd[1526]: time="2025-09-04T23:48:54.334266126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qwx4q,Uid:b59a6832-e9b6-48e6-9b7d-6a6dd960433e,Namespace:kube-system,Attempt:0,}"
Sep 4 23:48:54.335021 containerd[1526]: time="2025-09-04T23:48:54.334966125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s2cc7,Uid:60c2b503-79f0-4640-8726-5f9056887d04,Namespace:kube-system,Attempt:0,}"
Sep 4 23:48:56.792100 systemd-networkd[1423]: cilium_host: Link UP
Sep 4 23:48:56.792255 systemd-networkd[1423]: cilium_net: Link UP
Sep 4 23:48:56.792403 systemd-networkd[1423]: cilium_net: Gained carrier
Sep 4 23:48:56.792559 systemd-networkd[1423]: cilium_host: Gained carrier
Sep 4 23:48:56.884784 systemd-networkd[1423]: cilium_vxlan: Link UP
Sep 4 23:48:56.885636 systemd-networkd[1423]: cilium_vxlan: Gained carrier
Sep 4 23:48:57.086969 systemd-networkd[1423]: cilium_net: Gained IPv6LL
Sep 4 23:48:57.219106 kernel: NET: Registered PF_ALG protocol family
Sep 4 23:48:57.639132 systemd-networkd[1423]: cilium_host: Gained IPv6LL
Sep 4 23:48:57.876902 systemd-networkd[1423]: lxc_health: Link UP
Sep 4 23:48:57.889645 systemd-networkd[1423]: lxc_health: Gained carrier
Sep 4 23:48:58.439840 kernel: eth0: renamed from tmp6d605
Sep 4 23:48:58.434812 systemd-networkd[1423]: lxc0e67e54d6c7e: Link UP
Sep 4 23:48:58.444550 systemd-networkd[1423]: lxc0e67e54d6c7e: Gained carrier
Sep 4 23:48:58.446726 systemd-networkd[1423]: lxc1d72af575ce5: Link UP
Sep 4 23:48:58.454030 kernel: eth0: renamed from tmpcd754
Sep 4 23:48:58.456908 systemd-networkd[1423]: lxc1d72af575ce5: Gained carrier
Sep 4 23:48:58.535086 systemd-networkd[1423]: cilium_vxlan: Gained IPv6LL
Sep 4 23:48:59.550706 kubelet[2648]: I0904 23:48:59.550356 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jp5bn" podStartSLOduration=10.876031768 podStartE2EDuration="16.550329975s" podCreationTimestamp="2025-09-04 23:48:43 +0000 UTC" firstStartedPulling="2025-09-04 23:48:43.629225536 +0000 UTC m=+6.201253202" lastFinishedPulling="2025-09-04 23:48:49.303523742 +0000 UTC m=+11.875551409" observedRunningTime="2025-09-04 23:48:54.660379645 +0000 UTC m=+17.232407512" watchObservedRunningTime="2025-09-04 23:48:59.550329975 +0000 UTC m=+22.122357652"
Sep 4 23:48:59.750299 systemd-networkd[1423]: lxc0e67e54d6c7e: Gained IPv6LL
Sep 4 23:48:59.815173 systemd-networkd[1423]: lxc_health: Gained IPv6LL
Sep 4 23:49:00.391118 systemd-networkd[1423]: lxc1d72af575ce5: Gained IPv6LL
Sep 4 23:49:01.627053 containerd[1526]: time="2025-09-04T23:49:01.626144799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:49:01.627053 containerd[1526]: time="2025-09-04T23:49:01.626182078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:49:01.627053 containerd[1526]: time="2025-09-04T23:49:01.626191336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:49:01.627053 containerd[1526]: time="2025-09-04T23:49:01.626254544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:49:01.654314 systemd[1]: Started cri-containerd-6d60586337b6c35f0298261838c20d67974b21963a112ed76f616e8a07a28beb.scope - libcontainer container 6d60586337b6c35f0298261838c20d67974b21963a112ed76f616e8a07a28beb.
Sep 4 23:49:01.704031 containerd[1526]: time="2025-09-04T23:49:01.703725589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:49:01.704274 containerd[1526]: time="2025-09-04T23:49:01.704126363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:49:01.704384 containerd[1526]: time="2025-09-04T23:49:01.704307895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:49:01.707732 containerd[1526]: time="2025-09-04T23:49:01.706181778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:49:01.730859 containerd[1526]: time="2025-09-04T23:49:01.730825437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s2cc7,Uid:60c2b503-79f0-4640-8726-5f9056887d04,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d60586337b6c35f0298261838c20d67974b21963a112ed76f616e8a07a28beb\""
Sep 4 23:49:01.734734 containerd[1526]: time="2025-09-04T23:49:01.734700735Z" level=info msg="CreateContainer within sandbox \"6d60586337b6c35f0298261838c20d67974b21963a112ed76f616e8a07a28beb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:49:01.750563 systemd[1]: Started cri-containerd-cd754416846987336171f236d134fba33629b6c25591064d316d3125edf959bb.scope - libcontainer container cd754416846987336171f236d134fba33629b6c25591064d316d3125edf959bb.
Sep 4 23:49:01.760659 containerd[1526]: time="2025-09-04T23:49:01.760621726Z" level=info msg="CreateContainer within sandbox \"6d60586337b6c35f0298261838c20d67974b21963a112ed76f616e8a07a28beb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2a14faf5f974056cf2c243dd6939fbc47cc7330b0bfe8bdd77ac30598ac3f25\""
Sep 4 23:49:01.763066 containerd[1526]: time="2025-09-04T23:49:01.762267790Z" level=info msg="StartContainer for \"b2a14faf5f974056cf2c243dd6939fbc47cc7330b0bfe8bdd77ac30598ac3f25\""
Sep 4 23:49:01.800147 systemd[1]: Started cri-containerd-b2a14faf5f974056cf2c243dd6939fbc47cc7330b0bfe8bdd77ac30598ac3f25.scope - libcontainer container b2a14faf5f974056cf2c243dd6939fbc47cc7330b0bfe8bdd77ac30598ac3f25.
Sep 4 23:49:01.812343 containerd[1526]: time="2025-09-04T23:49:01.812307505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qwx4q,Uid:b59a6832-e9b6-48e6-9b7d-6a6dd960433e,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd754416846987336171f236d134fba33629b6c25591064d316d3125edf959bb\""
Sep 4 23:49:01.816928 containerd[1526]: time="2025-09-04T23:49:01.816888749Z" level=info msg="CreateContainer within sandbox \"cd754416846987336171f236d134fba33629b6c25591064d316d3125edf959bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:49:01.839453 containerd[1526]: time="2025-09-04T23:49:01.839401451Z" level=info msg="CreateContainer within sandbox \"cd754416846987336171f236d134fba33629b6c25591064d316d3125edf959bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3025a3225388f8a867a1fc33e5da79304cd4d7d0c1d008dba42255145efa5ef9\""
Sep 4 23:49:01.840503 containerd[1526]: time="2025-09-04T23:49:01.840387675Z" level=info msg="StartContainer for \"3025a3225388f8a867a1fc33e5da79304cd4d7d0c1d008dba42255145efa5ef9\""
Sep 4 23:49:01.843790 containerd[1526]: time="2025-09-04T23:49:01.843599875Z" level=info msg="StartContainer for \"b2a14faf5f974056cf2c243dd6939fbc47cc7330b0bfe8bdd77ac30598ac3f25\" returns successfully"
Sep 4 23:49:01.875131 systemd[1]: Started cri-containerd-3025a3225388f8a867a1fc33e5da79304cd4d7d0c1d008dba42255145efa5ef9.scope - libcontainer container 3025a3225388f8a867a1fc33e5da79304cd4d7d0c1d008dba42255145efa5ef9.
Sep 4 23:49:01.902837 containerd[1526]: time="2025-09-04T23:49:01.902248147Z" level=info msg="StartContainer for \"3025a3225388f8a867a1fc33e5da79304cd4d7d0c1d008dba42255145efa5ef9\" returns successfully"
Sep 4 23:49:02.632747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447268082.mount: Deactivated successfully.
Sep 4 23:49:02.676533 kubelet[2648]: I0904 23:49:02.675861 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s2cc7" podStartSLOduration=19.675842373000002 podStartE2EDuration="19.675842373s" podCreationTimestamp="2025-09-04 23:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:49:02.6634788 +0000 UTC m=+25.235506477" watchObservedRunningTime="2025-09-04 23:49:02.675842373 +0000 UTC m=+25.247870050"
Sep 4 23:50:01.487373 systemd[1]: Started sshd@7-46.62.204.39:22-139.178.68.195:33306.service - OpenSSH per-connection server daemon (139.178.68.195:33306).
Sep 4 23:50:02.606377 sshd[4038]: Accepted publickey for core from 139.178.68.195 port 33306 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ
Sep 4 23:50:02.607393 sshd-session[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:50:02.612905 systemd-logind[1502]: New session 8 of user core.
Sep 4 23:50:02.623181 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 23:50:03.946091 sshd[4040]: Connection closed by 139.178.68.195 port 33306 Sep 4 23:50:03.946622 sshd-session[4038]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:03.950306 systemd[1]: sshd@7-46.62.204.39:22-139.178.68.195:33306.service: Deactivated successfully. Sep 4 23:50:03.952210 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 23:50:03.952984 systemd-logind[1502]: Session 8 logged out. Waiting for processes to exit. Sep 4 23:50:03.954368 systemd-logind[1502]: Removed session 8. Sep 4 23:50:09.135593 systemd[1]: Started sshd@8-46.62.204.39:22-139.178.68.195:33316.service - OpenSSH per-connection server daemon (139.178.68.195:33316). Sep 4 23:50:10.239716 sshd[4052]: Accepted publickey for core from 139.178.68.195 port 33316 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:50:10.241333 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:10.246125 systemd-logind[1502]: New session 9 of user core. Sep 4 23:50:10.252145 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 23:50:11.080017 sshd[4054]: Connection closed by 139.178.68.195 port 33316 Sep 4 23:50:11.080679 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:11.084440 systemd[1]: sshd@8-46.62.204.39:22-139.178.68.195:33316.service: Deactivated successfully. Sep 4 23:50:11.086675 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 23:50:11.087936 systemd-logind[1502]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:50:11.089313 systemd-logind[1502]: Removed session 9. Sep 4 23:50:16.235532 systemd[1]: Started sshd@9-46.62.204.39:22-139.178.68.195:52318.service - OpenSSH per-connection server daemon (139.178.68.195:52318). 
Sep 4 23:50:17.223693 sshd[4068]: Accepted publickey for core from 139.178.68.195 port 52318 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:50:17.225288 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:17.231191 systemd-logind[1502]: New session 10 of user core. Sep 4 23:50:17.242227 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 23:50:17.980417 sshd[4070]: Connection closed by 139.178.68.195 port 52318 Sep 4 23:50:17.981194 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:17.985068 systemd-logind[1502]: Session 10 logged out. Waiting for processes to exit. Sep 4 23:50:17.985850 systemd[1]: sshd@9-46.62.204.39:22-139.178.68.195:52318.service: Deactivated successfully. Sep 4 23:50:17.988505 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 23:50:17.989650 systemd-logind[1502]: Removed session 10. Sep 4 23:50:23.157311 systemd[1]: Started sshd@10-46.62.204.39:22-139.178.68.195:59184.service - OpenSSH per-connection server daemon (139.178.68.195:59184). Sep 4 23:50:24.155468 sshd[4083]: Accepted publickey for core from 139.178.68.195 port 59184 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:50:24.156973 sshd-session[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:24.162950 systemd-logind[1502]: New session 11 of user core. Sep 4 23:50:24.171317 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 23:50:24.938454 sshd[4088]: Connection closed by 139.178.68.195 port 59184 Sep 4 23:50:24.939860 sshd-session[4083]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:24.945971 systemd[1]: sshd@10-46.62.204.39:22-139.178.68.195:59184.service: Deactivated successfully. Sep 4 23:50:24.948439 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 23:50:24.949886 systemd-logind[1502]: Session 11 logged out. 
Waiting for processes to exit. Sep 4 23:50:24.951497 systemd-logind[1502]: Removed session 11. Sep 4 23:50:25.114413 systemd[1]: Started sshd@11-46.62.204.39:22-139.178.68.195:59194.service - OpenSSH per-connection server daemon (139.178.68.195:59194). Sep 4 23:50:26.109039 sshd[4100]: Accepted publickey for core from 139.178.68.195 port 59194 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:50:26.110570 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:26.115717 systemd-logind[1502]: New session 12 of user core. Sep 4 23:50:26.123302 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 23:50:26.924887 sshd[4102]: Connection closed by 139.178.68.195 port 59194 Sep 4 23:50:26.925608 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:26.929554 systemd[1]: sshd@11-46.62.204.39:22-139.178.68.195:59194.service: Deactivated successfully. Sep 4 23:50:26.932146 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 23:50:26.933160 systemd-logind[1502]: Session 12 logged out. Waiting for processes to exit. Sep 4 23:50:26.934941 systemd-logind[1502]: Removed session 12. Sep 4 23:50:27.102940 systemd[1]: Started sshd@12-46.62.204.39:22-139.178.68.195:59196.service - OpenSSH per-connection server daemon (139.178.68.195:59196). Sep 4 23:50:28.097040 sshd[4112]: Accepted publickey for core from 139.178.68.195 port 59196 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:50:28.098428 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:28.102736 systemd-logind[1502]: New session 13 of user core. Sep 4 23:50:28.108159 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 4 23:50:28.850203 sshd[4114]: Connection closed by 139.178.68.195 port 59196 Sep 4 23:50:28.850766 sshd-session[4112]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:28.853343 systemd[1]: sshd@12-46.62.204.39:22-139.178.68.195:59196.service: Deactivated successfully. Sep 4 23:50:28.855414 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 23:50:28.857442 systemd-logind[1502]: Session 13 logged out. Waiting for processes to exit. Sep 4 23:50:28.858815 systemd-logind[1502]: Removed session 13. Sep 4 23:50:34.028420 systemd[1]: Started sshd@13-46.62.204.39:22-139.178.68.195:49920.service - OpenSSH per-connection server daemon (139.178.68.195:49920). Sep 4 23:50:35.021438 sshd[4126]: Accepted publickey for core from 139.178.68.195 port 49920 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:50:35.023090 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:35.028731 systemd-logind[1502]: New session 14 of user core. Sep 4 23:50:35.035461 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 23:50:35.765409 sshd[4128]: Connection closed by 139.178.68.195 port 49920 Sep 4 23:50:35.766119 sshd-session[4126]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:35.769414 systemd[1]: sshd@13-46.62.204.39:22-139.178.68.195:49920.service: Deactivated successfully. Sep 4 23:50:35.771967 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 23:50:35.773495 systemd-logind[1502]: Session 14 logged out. Waiting for processes to exit. Sep 4 23:50:35.774768 systemd-logind[1502]: Removed session 14. Sep 4 23:50:35.980356 systemd[1]: Started sshd@14-46.62.204.39:22-139.178.68.195:49934.service - OpenSSH per-connection server daemon (139.178.68.195:49934). 
Sep 4 23:50:37.079198 sshd[4140]: Accepted publickey for core from 139.178.68.195 port 49934 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:50:37.080615 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:37.085205 systemd-logind[1502]: New session 15 of user core. Sep 4 23:50:37.092293 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 23:50:38.088958 sshd[4142]: Connection closed by 139.178.68.195 port 49934 Sep 4 23:50:38.089811 sshd-session[4140]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:38.099396 systemd[1]: sshd@14-46.62.204.39:22-139.178.68.195:49934.service: Deactivated successfully. Sep 4 23:50:38.101794 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 23:50:38.102846 systemd-logind[1502]: Session 15 logged out. Waiting for processes to exit. Sep 4 23:50:38.103808 systemd-logind[1502]: Removed session 15. Sep 4 23:50:38.243371 systemd[1]: Started sshd@15-46.62.204.39:22-139.178.68.195:49938.service - OpenSSH per-connection server daemon (139.178.68.195:49938). Sep 4 23:50:39.233584 sshd[4154]: Accepted publickey for core from 139.178.68.195 port 49938 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:50:39.234991 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:39.240401 systemd-logind[1502]: New session 16 of user core. Sep 4 23:50:39.246238 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 23:50:40.577656 sshd[4156]: Connection closed by 139.178.68.195 port 49938 Sep 4 23:50:40.579805 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:40.582860 systemd[1]: sshd@15-46.62.204.39:22-139.178.68.195:49938.service: Deactivated successfully. Sep 4 23:50:40.584803 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 23:50:40.586265 systemd-logind[1502]: Session 16 logged out. 
Waiting for processes to exit. Sep 4 23:50:40.587731 systemd-logind[1502]: Removed session 16. Sep 4 23:50:40.758442 systemd[1]: Started sshd@16-46.62.204.39:22-139.178.68.195:41450.service - OpenSSH per-connection server daemon (139.178.68.195:41450). Sep 4 23:50:41.752508 sshd[4173]: Accepted publickey for core from 139.178.68.195 port 41450 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:50:41.753803 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:41.759223 systemd-logind[1502]: New session 17 of user core. Sep 4 23:50:41.764203 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 23:50:42.620330 sshd[4175]: Connection closed by 139.178.68.195 port 41450 Sep 4 23:50:42.620948 sshd-session[4173]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:42.624394 systemd-logind[1502]: Session 17 logged out. Waiting for processes to exit. Sep 4 23:50:42.624612 systemd[1]: sshd@16-46.62.204.39:22-139.178.68.195:41450.service: Deactivated successfully. Sep 4 23:50:42.626492 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 23:50:42.627451 systemd-logind[1502]: Removed session 17. Sep 4 23:50:42.794275 systemd[1]: Started sshd@17-46.62.204.39:22-139.178.68.195:41462.service - OpenSSH per-connection server daemon (139.178.68.195:41462). Sep 4 23:50:43.782584 sshd[4185]: Accepted publickey for core from 139.178.68.195 port 41462 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:50:43.783791 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:43.788218 systemd-logind[1502]: New session 18 of user core. Sep 4 23:50:43.796226 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 4 23:50:44.539265 sshd[4187]: Connection closed by 139.178.68.195 port 41462 Sep 4 23:50:44.539957 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:44.543284 systemd[1]: sshd@17-46.62.204.39:22-139.178.68.195:41462.service: Deactivated successfully. Sep 4 23:50:44.545123 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 23:50:44.545974 systemd-logind[1502]: Session 18 logged out. Waiting for processes to exit. Sep 4 23:50:44.547661 systemd-logind[1502]: Removed session 18. Sep 4 23:50:49.752193 systemd[1]: Started sshd@18-46.62.204.39:22-139.178.68.195:41474.service - OpenSSH per-connection server daemon (139.178.68.195:41474). Sep 4 23:50:50.848607 sshd[4203]: Accepted publickey for core from 139.178.68.195 port 41474 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:50:50.850091 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:50.856921 systemd-logind[1502]: New session 19 of user core. Sep 4 23:50:50.860177 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 23:50:51.667991 sshd[4205]: Connection closed by 139.178.68.195 port 41474 Sep 4 23:50:51.668699 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:51.672343 systemd-logind[1502]: Session 19 logged out. Waiting for processes to exit. Sep 4 23:50:51.673087 systemd[1]: sshd@18-46.62.204.39:22-139.178.68.195:41474.service: Deactivated successfully. Sep 4 23:50:51.675258 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 23:50:51.676662 systemd-logind[1502]: Removed session 19. Sep 4 23:50:56.822284 systemd[1]: Started sshd@19-46.62.204.39:22-139.178.68.195:41578.service - OpenSSH per-connection server daemon (139.178.68.195:41578). 
Sep 4 23:50:57.815584 sshd[4217]: Accepted publickey for core from 139.178.68.195 port 41578 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:50:57.817111 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:57.822396 systemd-logind[1502]: New session 20 of user core. Sep 4 23:50:57.830150 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 23:50:58.573574 sshd[4219]: Connection closed by 139.178.68.195 port 41578 Sep 4 23:50:58.574177 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:58.577949 systemd[1]: sshd@19-46.62.204.39:22-139.178.68.195:41578.service: Deactivated successfully. Sep 4 23:50:58.579608 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 23:50:58.580784 systemd-logind[1502]: Session 20 logged out. Waiting for processes to exit. Sep 4 23:50:58.582134 systemd-logind[1502]: Removed session 20. Sep 4 23:50:58.784246 systemd[1]: Started sshd@20-46.62.204.39:22-139.178.68.195:41588.service - OpenSSH per-connection server daemon (139.178.68.195:41588). Sep 4 23:50:59.886366 sshd[4231]: Accepted publickey for core from 139.178.68.195 port 41588 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:50:59.887753 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:59.893101 systemd-logind[1502]: New session 21 of user core. Sep 4 23:50:59.897169 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 4 23:51:01.770688 kubelet[2648]: I0904 23:51:01.768739 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qwx4q" podStartSLOduration=138.768721754 podStartE2EDuration="2m18.768721754s" podCreationTimestamp="2025-09-04 23:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:49:02.694035487 +0000 UTC m=+25.266063184" watchObservedRunningTime="2025-09-04 23:51:01.768721754 +0000 UTC m=+144.340749431" Sep 4 23:51:01.801271 systemd[1]: run-containerd-runc-k8s.io-db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd-runc.lHK98U.mount: Deactivated successfully. Sep 4 23:51:01.815640 containerd[1526]: time="2025-09-04T23:51:01.815555239Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:51:01.848510 containerd[1526]: time="2025-09-04T23:51:01.848430741Z" level=info msg="StopContainer for \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\" with timeout 30 (s)" Sep 4 23:51:01.848760 containerd[1526]: time="2025-09-04T23:51:01.848729832Z" level=info msg="StopContainer for \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\" with timeout 2 (s)" Sep 4 23:51:01.850393 containerd[1526]: time="2025-09-04T23:51:01.850288129Z" level=info msg="Stop container \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\" with signal terminated" Sep 4 23:51:01.850393 containerd[1526]: time="2025-09-04T23:51:01.850321111Z" level=info msg="Stop container \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\" with signal terminated" Sep 4 23:51:01.860395 systemd-networkd[1423]: lxc_health: Link DOWN Sep 4 23:51:01.860403 systemd-networkd[1423]: 
lxc_health: Lost carrier Sep 4 23:51:01.879943 systemd[1]: cri-containerd-0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e.scope: Deactivated successfully. Sep 4 23:51:01.891136 systemd[1]: cri-containerd-db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd.scope: Deactivated successfully. Sep 4 23:51:01.891370 systemd[1]: cri-containerd-db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd.scope: Consumed 6.321s CPU time, 192.3M memory peak, 72M read from disk, 13.3M written to disk. Sep 4 23:51:01.902259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e-rootfs.mount: Deactivated successfully. Sep 4 23:51:01.911042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd-rootfs.mount: Deactivated successfully. Sep 4 23:51:01.915455 containerd[1526]: time="2025-09-04T23:51:01.914960485Z" level=info msg="shim disconnected" id=0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e namespace=k8s.io Sep 4 23:51:01.915455 containerd[1526]: time="2025-09-04T23:51:01.915287329Z" level=warning msg="cleaning up after shim disconnected" id=0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e namespace=k8s.io Sep 4 23:51:01.915455 containerd[1526]: time="2025-09-04T23:51:01.915303961Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:51:01.917874 containerd[1526]: time="2025-09-04T23:51:01.917659825Z" level=info msg="shim disconnected" id=db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd namespace=k8s.io Sep 4 23:51:01.917874 containerd[1526]: time="2025-09-04T23:51:01.917832059Z" level=warning msg="cleaning up after shim disconnected" id=db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd namespace=k8s.io Sep 4 23:51:01.917874 containerd[1526]: time="2025-09-04T23:51:01.917842449Z" level=info msg="cleaning up dead shim" 
namespace=k8s.io Sep 4 23:51:01.937415 containerd[1526]: time="2025-09-04T23:51:01.937271968Z" level=info msg="StopContainer for \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\" returns successfully" Sep 4 23:51:01.944385 containerd[1526]: time="2025-09-04T23:51:01.944199728Z" level=info msg="StopContainer for \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\" returns successfully" Sep 4 23:51:01.946790 containerd[1526]: time="2025-09-04T23:51:01.946739468Z" level=info msg="StopPodSandbox for \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\"" Sep 4 23:51:01.946966 containerd[1526]: time="2025-09-04T23:51:01.946938843Z" level=info msg="StopPodSandbox for \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\"" Sep 4 23:51:01.948375 containerd[1526]: time="2025-09-04T23:51:01.948302393Z" level=info msg="Container to stop \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:51:01.948621 containerd[1526]: time="2025-09-04T23:51:01.948498532Z" level=info msg="Container to stop \"39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:51:01.948621 containerd[1526]: time="2025-09-04T23:51:01.948530742Z" level=info msg="Container to stop \"78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:51:01.948621 containerd[1526]: time="2025-09-04T23:51:01.948544438Z" level=info msg="Container to stop \"ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:51:01.948621 containerd[1526]: time="2025-09-04T23:51:01.948554406Z" level=info msg="Container to stop \"f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:51:01.948621 containerd[1526]: time="2025-09-04T23:51:01.948564756Z" level=info msg="Container to stop \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:51:01.951358 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0-shm.mount: Deactivated successfully. Sep 4 23:51:01.951493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192-shm.mount: Deactivated successfully. Sep 4 23:51:01.960628 systemd[1]: cri-containerd-a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0.scope: Deactivated successfully. Sep 4 23:51:01.962679 systemd[1]: cri-containerd-dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192.scope: Deactivated successfully. Sep 4 23:51:01.987381 containerd[1526]: time="2025-09-04T23:51:01.987309540Z" level=info msg="shim disconnected" id=a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0 namespace=k8s.io Sep 4 23:51:01.987532 containerd[1526]: time="2025-09-04T23:51:01.987371686Z" level=warning msg="cleaning up after shim disconnected" id=a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0 namespace=k8s.io Sep 4 23:51:01.987532 containerd[1526]: time="2025-09-04T23:51:01.987403888Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:51:01.988032 containerd[1526]: time="2025-09-04T23:51:01.987314149Z" level=info msg="shim disconnected" id=dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192 namespace=k8s.io Sep 4 23:51:01.988032 containerd[1526]: time="2025-09-04T23:51:01.987737946Z" level=warning msg="cleaning up after shim disconnected" id=dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192 namespace=k8s.io Sep 4 23:51:01.988032 containerd[1526]: 
time="2025-09-04T23:51:01.987745870Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:51:02.001860 containerd[1526]: time="2025-09-04T23:51:02.001765329Z" level=info msg="TearDown network for sandbox \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\" successfully" Sep 4 23:51:02.001860 containerd[1526]: time="2025-09-04T23:51:02.001796387Z" level=info msg="StopPodSandbox for \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\" returns successfully" Sep 4 23:51:02.006219 containerd[1526]: time="2025-09-04T23:51:02.006192524Z" level=info msg="TearDown network for sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" successfully" Sep 4 23:51:02.006219 containerd[1526]: time="2025-09-04T23:51:02.006214886Z" level=info msg="StopPodSandbox for \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" returns successfully" Sep 4 23:51:02.061781 kubelet[2648]: I0904 23:51:02.060784 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b2kj\" (UniqueName: \"kubernetes.io/projected/c0195b7e-6bfc-4d23-8193-e077f68c3ea6-kube-api-access-6b2kj\") pod \"c0195b7e-6bfc-4d23-8193-e077f68c3ea6\" (UID: \"c0195b7e-6bfc-4d23-8193-e077f68c3ea6\") " Sep 4 23:51:02.061781 kubelet[2648]: I0904 23:51:02.060824 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-host-proc-sys-kernel\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.061781 kubelet[2648]: I0904 23:51:02.060838 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-cni-path\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.061781 
kubelet[2648]: I0904 23:51:02.060855 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-cilium-cgroup\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.061781 kubelet[2648]: I0904 23:51:02.060870 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-hostproc\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.061781 kubelet[2648]: I0904 23:51:02.060890 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c9e833a-5c7d-4c35-8178-81620790e118-cilium-config-path\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.061993 kubelet[2648]: I0904 23:51:02.060905 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c9e833a-5c7d-4c35-8178-81620790e118-hubble-tls\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.061993 kubelet[2648]: I0904 23:51:02.060920 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-bpf-maps\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.061993 kubelet[2648]: I0904 23:51:02.060933 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-xtables-lock\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" 
(UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.061993 kubelet[2648]: I0904 23:51:02.060947 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj98t\" (UniqueName: \"kubernetes.io/projected/4c9e833a-5c7d-4c35-8178-81620790e118-kube-api-access-qj98t\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.061993 kubelet[2648]: I0904 23:51:02.060958 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-cilium-run\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.061993 kubelet[2648]: I0904 23:51:02.060969 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-host-proc-sys-net\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.062181 kubelet[2648]: I0904 23:51:02.060982 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-etc-cni-netd\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.062181 kubelet[2648]: I0904 23:51:02.060992 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-lib-modules\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.062181 kubelet[2648]: I0904 23:51:02.061028 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/4c9e833a-5c7d-4c35-8178-81620790e118-clustermesh-secrets\") pod \"4c9e833a-5c7d-4c35-8178-81620790e118\" (UID: \"4c9e833a-5c7d-4c35-8178-81620790e118\") " Sep 4 23:51:02.062181 kubelet[2648]: I0904 23:51:02.061051 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0195b7e-6bfc-4d23-8193-e077f68c3ea6-cilium-config-path\") pod \"c0195b7e-6bfc-4d23-8193-e077f68c3ea6\" (UID: \"c0195b7e-6bfc-4d23-8193-e077f68c3ea6\") " Sep 4 23:51:02.073880 kubelet[2648]: I0904 23:51:02.072183 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:51:02.073880 kubelet[2648]: I0904 23:51:02.073415 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:51:02.076292 kubelet[2648]: I0904 23:51:02.075980 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:51:02.076292 kubelet[2648]: I0904 23:51:02.076045 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:51:02.076292 kubelet[2648]: I0904 23:51:02.076067 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:51:02.076292 kubelet[2648]: I0904 23:51:02.076085 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:51:02.076716 kubelet[2648]: I0904 23:51:02.076452 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c9e833a-5c7d-4c35-8178-81620790e118-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:51:02.076716 kubelet[2648]: I0904 23:51:02.076510 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c9e833a-5c7d-4c35-8178-81620790e118-kube-api-access-qj98t" (OuterVolumeSpecName: "kube-api-access-qj98t") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "kube-api-access-qj98t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:51:02.078264 kubelet[2648]: I0904 23:51:02.078210 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9e833a-5c7d-4c35-8178-81620790e118-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 23:51:02.078264 kubelet[2648]: I0904 23:51:02.078257 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0195b7e-6bfc-4d23-8193-e077f68c3ea6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c0195b7e-6bfc-4d23-8193-e077f68c3ea6" (UID: "c0195b7e-6bfc-4d23-8193-e077f68c3ea6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:51:02.078349 kubelet[2648]: I0904 23:51:02.078289 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:51:02.078349 kubelet[2648]: I0904 23:51:02.078306 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:51:02.078349 kubelet[2648]: I0904 23:51:02.078322 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-cni-path" (OuterVolumeSpecName: "cni-path") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:51:02.078349 kubelet[2648]: I0904 23:51:02.078339 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-hostproc" (OuterVolumeSpecName: "hostproc") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:51:02.078802 kubelet[2648]: I0904 23:51:02.078773 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0195b7e-6bfc-4d23-8193-e077f68c3ea6-kube-api-access-6b2kj" (OuterVolumeSpecName: "kube-api-access-6b2kj") pod "c0195b7e-6bfc-4d23-8193-e077f68c3ea6" (UID: "c0195b7e-6bfc-4d23-8193-e077f68c3ea6"). InnerVolumeSpecName "kube-api-access-6b2kj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:51:02.080863 kubelet[2648]: I0904 23:51:02.080827 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c9e833a-5c7d-4c35-8178-81620790e118-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4c9e833a-5c7d-4c35-8178-81620790e118" (UID: "4c9e833a-5c7d-4c35-8178-81620790e118"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:51:02.161531 kubelet[2648]: I0904 23:51:02.161441 2648 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6b2kj\" (UniqueName: \"kubernetes.io/projected/c0195b7e-6bfc-4d23-8193-e077f68c3ea6-kube-api-access-6b2kj\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.161531 kubelet[2648]: I0904 23:51:02.161507 2648 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-host-proc-sys-kernel\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.161531 kubelet[2648]: I0904 23:51:02.161529 2648 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-cni-path\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.161531 kubelet[2648]: I0904 23:51:02.161546 2648 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-cilium-cgroup\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.161938 kubelet[2648]: I0904 23:51:02.161561 2648 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-hostproc\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.161938 kubelet[2648]: I0904 23:51:02.161576 2648 
reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c9e833a-5c7d-4c35-8178-81620790e118-cilium-config-path\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.161938 kubelet[2648]: I0904 23:51:02.161587 2648 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c9e833a-5c7d-4c35-8178-81620790e118-hubble-tls\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.161938 kubelet[2648]: I0904 23:51:02.161607 2648 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-bpf-maps\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.161938 kubelet[2648]: I0904 23:51:02.161622 2648 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-etc-cni-netd\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.161938 kubelet[2648]: I0904 23:51:02.161635 2648 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-xtables-lock\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.161938 kubelet[2648]: I0904 23:51:02.161686 2648 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qj98t\" (UniqueName: \"kubernetes.io/projected/4c9e833a-5c7d-4c35-8178-81620790e118-kube-api-access-qj98t\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.161938 kubelet[2648]: I0904 23:51:02.161703 2648 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-cilium-run\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.162262 kubelet[2648]: I0904 23:51:02.161720 2648 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-host-proc-sys-net\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.162262 kubelet[2648]: I0904 23:51:02.161732 2648 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0195b7e-6bfc-4d23-8193-e077f68c3ea6-cilium-config-path\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.162262 kubelet[2648]: I0904 23:51:02.161748 2648 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c9e833a-5c7d-4c35-8178-81620790e118-lib-modules\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.162262 kubelet[2648]: I0904 23:51:02.161765 2648 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c9e833a-5c7d-4c35-8178-81620790e118-clustermesh-secrets\") on node \"ci-4230-2-2-n-de0727ed16\" DevicePath \"\"" Sep 4 23:51:02.661870 kubelet[2648]: E0904 23:51:02.649993 2648 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:51:02.789462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0-rootfs.mount: Deactivated successfully. Sep 4 23:51:02.789590 systemd[1]: var-lib-kubelet-pods-c0195b7e\x2d6bfc\x2d4d23\x2d8193\x2de077f68c3ea6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6b2kj.mount: Deactivated successfully. Sep 4 23:51:02.789667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192-rootfs.mount: Deactivated successfully. 
Sep 4 23:51:02.789731 systemd[1]: var-lib-kubelet-pods-4c9e833a\x2d5c7d\x2d4c35\x2d8178\x2d81620790e118-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqj98t.mount: Deactivated successfully. Sep 4 23:51:02.789794 systemd[1]: var-lib-kubelet-pods-4c9e833a\x2d5c7d\x2d4c35\x2d8178\x2d81620790e118-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 23:51:02.789851 systemd[1]: var-lib-kubelet-pods-4c9e833a\x2d5c7d\x2d4c35\x2d8178\x2d81620790e118-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 23:51:02.892125 kubelet[2648]: I0904 23:51:02.890189 2648 scope.go:117] "RemoveContainer" containerID="0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e" Sep 4 23:51:02.896584 systemd[1]: Removed slice kubepods-besteffort-podc0195b7e_6bfc_4d23_8193_e077f68c3ea6.slice - libcontainer container kubepods-besteffort-podc0195b7e_6bfc_4d23_8193_e077f68c3ea6.slice. Sep 4 23:51:02.899291 containerd[1526]: time="2025-09-04T23:51:02.899093153Z" level=info msg="RemoveContainer for \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\"" Sep 4 23:51:02.902027 systemd[1]: Removed slice kubepods-burstable-pod4c9e833a_5c7d_4c35_8178_81620790e118.slice - libcontainer container kubepods-burstable-pod4c9e833a_5c7d_4c35_8178_81620790e118.slice. Sep 4 23:51:02.902451 systemd[1]: kubepods-burstable-pod4c9e833a_5c7d_4c35_8178_81620790e118.slice: Consumed 6.401s CPU time, 192.6M memory peak, 72M read from disk, 13.3M written to disk. 
Sep 4 23:51:02.904786 containerd[1526]: time="2025-09-04T23:51:02.904704772Z" level=info msg="RemoveContainer for \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\" returns successfully" Sep 4 23:51:02.907946 kubelet[2648]: I0904 23:51:02.907928 2648 scope.go:117] "RemoveContainer" containerID="0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e" Sep 4 23:51:02.910602 containerd[1526]: time="2025-09-04T23:51:02.910554948Z" level=error msg="ContainerStatus for \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\": not found" Sep 4 23:51:02.916715 kubelet[2648]: E0904 23:51:02.916614 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\": not found" containerID="0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e" Sep 4 23:51:02.933219 kubelet[2648]: I0904 23:51:02.916664 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e"} err="failed to get container status \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d60ce566f5265f9bfed42b1442a62e476e14acd630390a9e2a16e26ec52124e\": not found" Sep 4 23:51:02.933367 kubelet[2648]: I0904 23:51:02.933336 2648 scope.go:117] "RemoveContainer" containerID="db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd" Sep 4 23:51:02.935610 containerd[1526]: time="2025-09-04T23:51:02.935283288Z" level=info msg="RemoveContainer for \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\"" Sep 4 23:51:02.938385 
containerd[1526]: time="2025-09-04T23:51:02.938339488Z" level=info msg="RemoveContainer for \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\" returns successfully" Sep 4 23:51:02.938738 kubelet[2648]: I0904 23:51:02.938612 2648 scope.go:117] "RemoveContainer" containerID="78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826" Sep 4 23:51:02.939583 containerd[1526]: time="2025-09-04T23:51:02.939551534Z" level=info msg="RemoveContainer for \"78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826\"" Sep 4 23:51:02.942182 containerd[1526]: time="2025-09-04T23:51:02.942140847Z" level=info msg="RemoveContainer for \"78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826\" returns successfully" Sep 4 23:51:02.942391 kubelet[2648]: I0904 23:51:02.942304 2648 scope.go:117] "RemoveContainer" containerID="f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5" Sep 4 23:51:02.943270 containerd[1526]: time="2025-09-04T23:51:02.943217158Z" level=info msg="RemoveContainer for \"f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5\"" Sep 4 23:51:02.946214 containerd[1526]: time="2025-09-04T23:51:02.946179470Z" level=info msg="RemoveContainer for \"f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5\" returns successfully" Sep 4 23:51:02.946336 kubelet[2648]: I0904 23:51:02.946308 2648 scope.go:117] "RemoveContainer" containerID="ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b" Sep 4 23:51:02.947290 containerd[1526]: time="2025-09-04T23:51:02.947239151Z" level=info msg="RemoveContainer for \"ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b\"" Sep 4 23:51:02.949757 containerd[1526]: time="2025-09-04T23:51:02.949728917Z" level=info msg="RemoveContainer for \"ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b\" returns successfully" Sep 4 23:51:02.949862 kubelet[2648]: I0904 23:51:02.949840 2648 scope.go:117] "RemoveContainer" 
containerID="39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc" Sep 4 23:51:02.950747 containerd[1526]: time="2025-09-04T23:51:02.950695422Z" level=info msg="RemoveContainer for \"39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc\"" Sep 4 23:51:02.953417 containerd[1526]: time="2025-09-04T23:51:02.953390032Z" level=info msg="RemoveContainer for \"39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc\" returns successfully" Sep 4 23:51:02.953607 kubelet[2648]: I0904 23:51:02.953522 2648 scope.go:117] "RemoveContainer" containerID="db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd" Sep 4 23:51:02.953780 containerd[1526]: time="2025-09-04T23:51:02.953745189Z" level=error msg="ContainerStatus for \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\": not found" Sep 4 23:51:02.953920 kubelet[2648]: E0904 23:51:02.953866 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\": not found" containerID="db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd" Sep 4 23:51:02.953920 kubelet[2648]: I0904 23:51:02.953907 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd"} err="failed to get container status \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"db0abc68a1fea92de20c733145f6cf1b1b273c2b5fd6fdc83941838b26f064cd\": not found" Sep 4 23:51:02.954057 kubelet[2648]: I0904 23:51:02.953927 2648 scope.go:117] "RemoveContainer" 
containerID="78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826" Sep 4 23:51:02.954319 containerd[1526]: time="2025-09-04T23:51:02.954082713Z" level=error msg="ContainerStatus for \"78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826\": not found" Sep 4 23:51:02.954362 kubelet[2648]: E0904 23:51:02.954219 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826\": not found" containerID="78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826" Sep 4 23:51:02.954362 kubelet[2648]: I0904 23:51:02.954242 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826"} err="failed to get container status \"78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826\": rpc error: code = NotFound desc = an error occurred when try to find container \"78a691c4298ed8aed6313b0304c139e109257000ed3fb5be8826208746cfc826\": not found" Sep 4 23:51:02.954362 kubelet[2648]: I0904 23:51:02.954255 2648 scope.go:117] "RemoveContainer" containerID="f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5" Sep 4 23:51:02.954475 containerd[1526]: time="2025-09-04T23:51:02.954429915Z" level=error msg="ContainerStatus for \"f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5\": not found" Sep 4 23:51:02.954613 kubelet[2648]: E0904 23:51:02.954580 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5\": not found" containerID="f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5" Sep 4 23:51:02.954654 kubelet[2648]: I0904 23:51:02.954601 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5"} err="failed to get container status \"f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8661c4ca3976833e41d98a2190971134126b81056b9d49af33fee98db04ffb5\": not found" Sep 4 23:51:02.954654 kubelet[2648]: I0904 23:51:02.954628 2648 scope.go:117] "RemoveContainer" containerID="ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b" Sep 4 23:51:02.954822 containerd[1526]: time="2025-09-04T23:51:02.954783198Z" level=error msg="ContainerStatus for \"ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b\": not found" Sep 4 23:51:02.954971 kubelet[2648]: E0904 23:51:02.954930 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b\": not found" containerID="ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b" Sep 4 23:51:02.954971 kubelet[2648]: I0904 23:51:02.954957 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b"} err="failed to get container status \"ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"ad6061b034d2597d9f0f3bbfa3367737f31d5680709854b25d332b55c00ddc1b\": not found" Sep 4 23:51:02.954971 kubelet[2648]: I0904 23:51:02.954970 2648 scope.go:117] "RemoveContainer" containerID="39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc" Sep 4 23:51:02.955141 containerd[1526]: time="2025-09-04T23:51:02.955122496Z" level=error msg="ContainerStatus for \"39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc\": not found" Sep 4 23:51:02.955235 kubelet[2648]: E0904 23:51:02.955221 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc\": not found" containerID="39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc" Sep 4 23:51:02.955320 kubelet[2648]: I0904 23:51:02.955239 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc"} err="failed to get container status \"39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc\": rpc error: code = NotFound desc = an error occurred when try to find container \"39eb2d04f1ca19640bf5216c052a53a2970b51fc6170a54624b13b9659a5bcfc\": not found" Sep 4 23:51:03.550729 kubelet[2648]: I0904 23:51:03.550687 2648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c9e833a-5c7d-4c35-8178-81620790e118" path="/var/lib/kubelet/pods/4c9e833a-5c7d-4c35-8178-81620790e118/volumes" Sep 4 23:51:03.551200 kubelet[2648]: I0904 23:51:03.551183 2648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0195b7e-6bfc-4d23-8193-e077f68c3ea6" path="/var/lib/kubelet/pods/c0195b7e-6bfc-4d23-8193-e077f68c3ea6/volumes" Sep 4 
23:51:03.911432 sshd[4233]: Connection closed by 139.178.68.195 port 41588 Sep 4 23:51:03.912238 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:03.915099 systemd-logind[1502]: Session 21 logged out. Waiting for processes to exit. Sep 4 23:51:03.915744 systemd[1]: sshd@20-46.62.204.39:22-139.178.68.195:41588.service: Deactivated successfully. Sep 4 23:51:03.917541 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 23:51:03.918756 systemd-logind[1502]: Removed session 21. Sep 4 23:51:04.069234 systemd[1]: Started sshd@21-46.62.204.39:22-139.178.68.195:48654.service - OpenSSH per-connection server daemon (139.178.68.195:48654). Sep 4 23:51:05.067630 sshd[4392]: Accepted publickey for core from 139.178.68.195 port 48654 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ Sep 4 23:51:05.069248 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:05.075873 systemd-logind[1502]: New session 22 of user core. Sep 4 23:51:05.082189 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 23:51:05.975312 kubelet[2648]: I0904 23:51:05.975195 2648 memory_manager.go:355] "RemoveStaleState removing state" podUID="4c9e833a-5c7d-4c35-8178-81620790e118" containerName="cilium-agent" Sep 4 23:51:05.975312 kubelet[2648]: I0904 23:51:05.975262 2648 memory_manager.go:355] "RemoveStaleState removing state" podUID="c0195b7e-6bfc-4d23-8193-e077f68c3ea6" containerName="cilium-operator" Sep 4 23:51:06.047146 systemd[1]: Created slice kubepods-burstable-pod738ad4db_50e2_4ce0_b5f8_cc9a692097c4.slice - libcontainer container kubepods-burstable-pod738ad4db_50e2_4ce0_b5f8_cc9a692097c4.slice. 
Sep 4 23:51:06.089875 kubelet[2648]: I0904 23:51:06.089761 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-lib-modules\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091268 kubelet[2648]: I0904 23:51:06.091226 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-hostproc\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091350 kubelet[2648]: I0904 23:51:06.091275 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5pzw\" (UniqueName: \"kubernetes.io/projected/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-kube-api-access-t5pzw\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091350 kubelet[2648]: I0904 23:51:06.091307 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-cilium-cgroup\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091350 kubelet[2648]: I0904 23:51:06.091329 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-cni-path\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091441 kubelet[2648]: I0904 23:51:06.091350 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-etc-cni-netd\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091441 kubelet[2648]: I0904 23:51:06.091372 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-host-proc-sys-kernel\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091441 kubelet[2648]: I0904 23:51:06.091397 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-cilium-run\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091441 kubelet[2648]: I0904 23:51:06.091417 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-cilium-ipsec-secrets\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091541 kubelet[2648]: I0904 23:51:06.091439 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-hubble-tls\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091541 kubelet[2648]: I0904 23:51:06.091463 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-clustermesh-secrets\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091541 kubelet[2648]: I0904 23:51:06.091487 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-host-proc-sys-net\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091541 kubelet[2648]: I0904 23:51:06.091529 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-bpf-maps\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091613 kubelet[2648]: I0904 23:51:06.091551 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-cilium-config-path\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.091613 kubelet[2648]: I0904 23:51:06.091572 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/738ad4db-50e2-4ce0-b5f8-cc9a692097c4-xtables-lock\") pod \"cilium-54rnm\" (UID: \"738ad4db-50e2-4ce0-b5f8-cc9a692097c4\") " pod="kube-system/cilium-54rnm" Sep 4 23:51:06.107699 sshd[4394]: Connection closed by 139.178.68.195 port 48654 Sep 4 23:51:06.108315 sshd-session[4392]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:06.112806 systemd[1]: sshd@21-46.62.204.39:22-139.178.68.195:48654.service: Deactivated successfully. 
Sep 4 23:51:06.115306 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 23:51:06.116306 systemd-logind[1502]: Session 22 logged out. Waiting for processes to exit. Sep 4 23:51:06.117612 systemd-logind[1502]: Removed session 22. Sep 4 23:51:06.281290 systemd[1]: Started sshd@22-46.62.204.39:22-139.178.68.195:48668.service - OpenSSH per-connection server daemon (139.178.68.195:48668). Sep 4 23:51:06.352125 containerd[1526]: time="2025-09-04T23:51:06.352072137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-54rnm,Uid:738ad4db-50e2-4ce0-b5f8-cc9a692097c4,Namespace:kube-system,Attempt:0,}" Sep 4 23:51:06.375049 containerd[1526]: time="2025-09-04T23:51:06.374942854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:51:06.375348 containerd[1526]: time="2025-09-04T23:51:06.375207091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:51:06.375348 containerd[1526]: time="2025-09-04T23:51:06.375248348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:51:06.375974 containerd[1526]: time="2025-09-04T23:51:06.375914920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:51:06.398242 systemd[1]: Started cri-containerd-ccd56c8ff244b3c2c68bd6ae87351b95fab70c8396ba1d8a0abbac37a7954cf4.scope - libcontainer container ccd56c8ff244b3c2c68bd6ae87351b95fab70c8396ba1d8a0abbac37a7954cf4. 
Sep 4 23:51:06.420087 containerd[1526]: time="2025-09-04T23:51:06.419995495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-54rnm,Uid:738ad4db-50e2-4ce0-b5f8-cc9a692097c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccd56c8ff244b3c2c68bd6ae87351b95fab70c8396ba1d8a0abbac37a7954cf4\""
Sep 4 23:51:06.423996 containerd[1526]: time="2025-09-04T23:51:06.423948849Z" level=info msg="CreateContainer within sandbox \"ccd56c8ff244b3c2c68bd6ae87351b95fab70c8396ba1d8a0abbac37a7954cf4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:51:06.433150 containerd[1526]: time="2025-09-04T23:51:06.433099452Z" level=info msg="CreateContainer within sandbox \"ccd56c8ff244b3c2c68bd6ae87351b95fab70c8396ba1d8a0abbac37a7954cf4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2e3e85a0d1461a45d8b2d1f6607269aa20aaa5877edce6bb7b5ca85f46ebf3d7\""
Sep 4 23:51:06.434903 containerd[1526]: time="2025-09-04T23:51:06.434872872Z" level=info msg="StartContainer for \"2e3e85a0d1461a45d8b2d1f6607269aa20aaa5877edce6bb7b5ca85f46ebf3d7\""
Sep 4 23:51:06.461250 systemd[1]: Started cri-containerd-2e3e85a0d1461a45d8b2d1f6607269aa20aaa5877edce6bb7b5ca85f46ebf3d7.scope - libcontainer container 2e3e85a0d1461a45d8b2d1f6607269aa20aaa5877edce6bb7b5ca85f46ebf3d7.
Sep 4 23:51:06.485397 containerd[1526]: time="2025-09-04T23:51:06.485332767Z" level=info msg="StartContainer for \"2e3e85a0d1461a45d8b2d1f6607269aa20aaa5877edce6bb7b5ca85f46ebf3d7\" returns successfully"
Sep 4 23:51:06.497784 systemd[1]: cri-containerd-2e3e85a0d1461a45d8b2d1f6607269aa20aaa5877edce6bb7b5ca85f46ebf3d7.scope: Deactivated successfully.
Sep 4 23:51:06.498488 systemd[1]: cri-containerd-2e3e85a0d1461a45d8b2d1f6607269aa20aaa5877edce6bb7b5ca85f46ebf3d7.scope: Consumed 21ms CPU time, 9.7M memory peak, 3M read from disk.
Sep 4 23:51:06.541341 containerd[1526]: time="2025-09-04T23:51:06.540782030Z" level=info msg="shim disconnected" id=2e3e85a0d1461a45d8b2d1f6607269aa20aaa5877edce6bb7b5ca85f46ebf3d7 namespace=k8s.io
Sep 4 23:51:06.541341 containerd[1526]: time="2025-09-04T23:51:06.540854237Z" level=warning msg="cleaning up after shim disconnected" id=2e3e85a0d1461a45d8b2d1f6607269aa20aaa5877edce6bb7b5ca85f46ebf3d7 namespace=k8s.io
Sep 4 23:51:06.541341 containerd[1526]: time="2025-09-04T23:51:06.540864176Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:51:06.910791 containerd[1526]: time="2025-09-04T23:51:06.910591127Z" level=info msg="CreateContainer within sandbox \"ccd56c8ff244b3c2c68bd6ae87351b95fab70c8396ba1d8a0abbac37a7954cf4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:51:06.922497 containerd[1526]: time="2025-09-04T23:51:06.922439055Z" level=info msg="CreateContainer within sandbox \"ccd56c8ff244b3c2c68bd6ae87351b95fab70c8396ba1d8a0abbac37a7954cf4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e4c0424c9bbda9506ee705286466a22006267ed9dc7cfd25d0ca8870796d085d\""
Sep 4 23:51:06.923904 containerd[1526]: time="2025-09-04T23:51:06.923864141Z" level=info msg="StartContainer for \"e4c0424c9bbda9506ee705286466a22006267ed9dc7cfd25d0ca8870796d085d\""
Sep 4 23:51:06.954234 systemd[1]: Started cri-containerd-e4c0424c9bbda9506ee705286466a22006267ed9dc7cfd25d0ca8870796d085d.scope - libcontainer container e4c0424c9bbda9506ee705286466a22006267ed9dc7cfd25d0ca8870796d085d.
Sep 4 23:51:06.980300 containerd[1526]: time="2025-09-04T23:51:06.980017377Z" level=info msg="StartContainer for \"e4c0424c9bbda9506ee705286466a22006267ed9dc7cfd25d0ca8870796d085d\" returns successfully"
Sep 4 23:51:06.986378 systemd[1]: cri-containerd-e4c0424c9bbda9506ee705286466a22006267ed9dc7cfd25d0ca8870796d085d.scope: Deactivated successfully.
Sep 4 23:51:06.986842 systemd[1]: cri-containerd-e4c0424c9bbda9506ee705286466a22006267ed9dc7cfd25d0ca8870796d085d.scope: Consumed 17ms CPU time, 7.3M memory peak, 2.1M read from disk.
Sep 4 23:51:07.009409 containerd[1526]: time="2025-09-04T23:51:07.009303170Z" level=info msg="shim disconnected" id=e4c0424c9bbda9506ee705286466a22006267ed9dc7cfd25d0ca8870796d085d namespace=k8s.io
Sep 4 23:51:07.009805 containerd[1526]: time="2025-09-04T23:51:07.009388901Z" level=warning msg="cleaning up after shim disconnected" id=e4c0424c9bbda9506ee705286466a22006267ed9dc7cfd25d0ca8870796d085d namespace=k8s.io
Sep 4 23:51:07.009805 containerd[1526]: time="2025-09-04T23:51:07.009430529Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:51:07.265944 sshd[4410]: Accepted publickey for core from 139.178.68.195 port 48668 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ
Sep 4 23:51:07.267183 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:51:07.271670 systemd-logind[1502]: New session 23 of user core.
Sep 4 23:51:07.274138 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 23:51:07.663103 kubelet[2648]: E0904 23:51:07.662998 2648 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 23:51:07.912383 containerd[1526]: time="2025-09-04T23:51:07.912338874Z" level=info msg="CreateContainer within sandbox \"ccd56c8ff244b3c2c68bd6ae87351b95fab70c8396ba1d8a0abbac37a7954cf4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:51:07.931121 containerd[1526]: time="2025-09-04T23:51:07.930490569Z" level=info msg="CreateContainer within sandbox \"ccd56c8ff244b3c2c68bd6ae87351b95fab70c8396ba1d8a0abbac37a7954cf4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"79f12a482fbe10f4cf1c4db9c88579520e03fe4efb2019a54a5c663a47f31cc5\""
Sep 4 23:51:07.933444 containerd[1526]: time="2025-09-04T23:51:07.933415461Z" level=info msg="StartContainer for \"79f12a482fbe10f4cf1c4db9c88579520e03fe4efb2019a54a5c663a47f31cc5\""
Sep 4 23:51:07.947677 sshd[4575]: Connection closed by 139.178.68.195 port 48668
Sep 4 23:51:07.949359 sshd-session[4410]: pam_unix(sshd:session): session closed for user core
Sep 4 23:51:07.960542 systemd[1]: sshd@22-46.62.204.39:22-139.178.68.195:48668.service: Deactivated successfully.
Sep 4 23:51:07.962841 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 23:51:07.964834 systemd-logind[1502]: Session 23 logged out. Waiting for processes to exit.
Sep 4 23:51:07.966573 systemd-logind[1502]: Removed session 23.
Sep 4 23:51:07.979332 systemd[1]: Started cri-containerd-79f12a482fbe10f4cf1c4db9c88579520e03fe4efb2019a54a5c663a47f31cc5.scope - libcontainer container 79f12a482fbe10f4cf1c4db9c88579520e03fe4efb2019a54a5c663a47f31cc5.
Sep 4 23:51:08.012129 containerd[1526]: time="2025-09-04T23:51:08.011911930Z" level=info msg="StartContainer for \"79f12a482fbe10f4cf1c4db9c88579520e03fe4efb2019a54a5c663a47f31cc5\" returns successfully"
Sep 4 23:51:08.017753 systemd[1]: cri-containerd-79f12a482fbe10f4cf1c4db9c88579520e03fe4efb2019a54a5c663a47f31cc5.scope: Deactivated successfully.
Sep 4 23:51:08.046650 containerd[1526]: time="2025-09-04T23:51:08.046584736Z" level=info msg="shim disconnected" id=79f12a482fbe10f4cf1c4db9c88579520e03fe4efb2019a54a5c663a47f31cc5 namespace=k8s.io
Sep 4 23:51:08.046843 containerd[1526]: time="2025-09-04T23:51:08.046796153Z" level=warning msg="cleaning up after shim disconnected" id=79f12a482fbe10f4cf1c4db9c88579520e03fe4efb2019a54a5c663a47f31cc5 namespace=k8s.io
Sep 4 23:51:08.046843 containerd[1526]: time="2025-09-04T23:51:08.046812664Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:51:08.121338 systemd[1]: Started sshd@23-46.62.204.39:22-139.178.68.195:48672.service - OpenSSH per-connection server daemon (139.178.68.195:48672).
Sep 4 23:51:08.200737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79f12a482fbe10f4cf1c4db9c88579520e03fe4efb2019a54a5c663a47f31cc5-rootfs.mount: Deactivated successfully.
Sep 4 23:51:08.916579 containerd[1526]: time="2025-09-04T23:51:08.916536913Z" level=info msg="CreateContainer within sandbox \"ccd56c8ff244b3c2c68bd6ae87351b95fab70c8396ba1d8a0abbac37a7954cf4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:51:08.932267 containerd[1526]: time="2025-09-04T23:51:08.931740953Z" level=info msg="CreateContainer within sandbox \"ccd56c8ff244b3c2c68bd6ae87351b95fab70c8396ba1d8a0abbac37a7954cf4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8f157b75af5f56755658b53f513aa7dc177a514540cb6cff0a43430af2bc6cbb\""
Sep 4 23:51:08.935869 containerd[1526]: time="2025-09-04T23:51:08.933140710Z" level=info msg="StartContainer for \"8f157b75af5f56755658b53f513aa7dc177a514540cb6cff0a43430af2bc6cbb\""
Sep 4 23:51:08.934823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266196151.mount: Deactivated successfully.
Sep 4 23:51:08.977314 systemd[1]: Started cri-containerd-8f157b75af5f56755658b53f513aa7dc177a514540cb6cff0a43430af2bc6cbb.scope - libcontainer container 8f157b75af5f56755658b53f513aa7dc177a514540cb6cff0a43430af2bc6cbb.
Sep 4 23:51:09.007596 containerd[1526]: time="2025-09-04T23:51:09.007448141Z" level=info msg="StartContainer for \"8f157b75af5f56755658b53f513aa7dc177a514540cb6cff0a43430af2bc6cbb\" returns successfully"
Sep 4 23:51:09.007547 systemd[1]: cri-containerd-8f157b75af5f56755658b53f513aa7dc177a514540cb6cff0a43430af2bc6cbb.scope: Deactivated successfully.
Sep 4 23:51:09.032454 containerd[1526]: time="2025-09-04T23:51:09.032365328Z" level=info msg="shim disconnected" id=8f157b75af5f56755658b53f513aa7dc177a514540cb6cff0a43430af2bc6cbb namespace=k8s.io
Sep 4 23:51:09.032454 containerd[1526]: time="2025-09-04T23:51:09.032442223Z" level=warning msg="cleaning up after shim disconnected" id=8f157b75af5f56755658b53f513aa7dc177a514540cb6cff0a43430af2bc6cbb namespace=k8s.io
Sep 4 23:51:09.032454 containerd[1526]: time="2025-09-04T23:51:09.032450508Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:51:09.107981 sshd[4636]: Accepted publickey for core from 139.178.68.195 port 48672 ssh2: RSA SHA256:bnND9AWytOO0v3AspVqA+MzL3lsiru+ZOzulu4RtUXQ
Sep 4 23:51:09.109447 sshd-session[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:51:09.114472 systemd-logind[1502]: New session 24 of user core.
Sep 4 23:51:09.123150 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 23:51:09.200727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f157b75af5f56755658b53f513aa7dc177a514540cb6cff0a43430af2bc6cbb-rootfs.mount: Deactivated successfully.
Sep 4 23:51:09.922081 containerd[1526]: time="2025-09-04T23:51:09.921339671Z" level=info msg="CreateContainer within sandbox \"ccd56c8ff244b3c2c68bd6ae87351b95fab70c8396ba1d8a0abbac37a7954cf4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:51:09.945373 containerd[1526]: time="2025-09-04T23:51:09.945295252Z" level=info msg="CreateContainer within sandbox \"ccd56c8ff244b3c2c68bd6ae87351b95fab70c8396ba1d8a0abbac37a7954cf4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc2090d5f63145e5f81167578322be7173c05d2b99eaab5f0b49b4cea56e1188\""
Sep 4 23:51:09.947191 containerd[1526]: time="2025-09-04T23:51:09.946476309Z" level=info msg="StartContainer for \"bc2090d5f63145e5f81167578322be7173c05d2b99eaab5f0b49b4cea56e1188\""
Sep 4 23:51:09.973194 systemd[1]: Started cri-containerd-bc2090d5f63145e5f81167578322be7173c05d2b99eaab5f0b49b4cea56e1188.scope - libcontainer container bc2090d5f63145e5f81167578322be7173c05d2b99eaab5f0b49b4cea56e1188.
Sep 4 23:51:10.001489 containerd[1526]: time="2025-09-04T23:51:10.001414798Z" level=info msg="StartContainer for \"bc2090d5f63145e5f81167578322be7173c05d2b99eaab5f0b49b4cea56e1188\" returns successfully"
Sep 4 23:51:10.454482 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 4 23:51:10.887907 kubelet[2648]: I0904 23:51:10.887773 2648 setters.go:602] "Node became not ready" node="ci-4230-2-2-n-de0727ed16" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T23:51:10Z","lastTransitionTime":"2025-09-04T23:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 4 23:51:10.941869 kubelet[2648]: I0904 23:51:10.941791 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-54rnm" podStartSLOduration=5.941768863 podStartE2EDuration="5.941768863s" podCreationTimestamp="2025-09-04 23:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:51:10.939398382 +0000 UTC m=+153.511426090" watchObservedRunningTime="2025-09-04 23:51:10.941768863 +0000 UTC m=+153.513796560"
Sep 4 23:51:13.131827 systemd-networkd[1423]: lxc_health: Link UP
Sep 4 23:51:13.136116 systemd-networkd[1423]: lxc_health: Gained carrier
Sep 4 23:51:14.598284 systemd-networkd[1423]: lxc_health: Gained IPv6LL
Sep 4 23:51:18.481384 systemd[1]: run-containerd-runc-k8s.io-bc2090d5f63145e5f81167578322be7173c05d2b99eaab5f0b49b4cea56e1188-runc.7SE6JT.mount: Deactivated successfully.
Sep 4 23:51:20.657377 systemd[1]: run-containerd-runc-k8s.io-bc2090d5f63145e5f81167578322be7173c05d2b99eaab5f0b49b4cea56e1188-runc.OaI8Vf.mount: Deactivated successfully.
Sep 4 23:51:20.889586 sshd[4692]: Connection closed by 139.178.68.195 port 48672
Sep 4 23:51:20.890578 sshd-session[4636]: pam_unix(sshd:session): session closed for user core
Sep 4 23:51:20.895066 systemd[1]: sshd@23-46.62.204.39:22-139.178.68.195:48672.service: Deactivated successfully.
Sep 4 23:51:20.899195 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 23:51:20.900510 systemd-logind[1502]: Session 24 logged out. Waiting for processes to exit.
Sep 4 23:51:20.901605 systemd-logind[1502]: Removed session 24.
Sep 4 23:51:37.554726 containerd[1526]: time="2025-09-04T23:51:37.554645845Z" level=info msg="StopPodSandbox for \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\""
Sep 4 23:51:37.555129 containerd[1526]: time="2025-09-04T23:51:37.554767364Z" level=info msg="TearDown network for sandbox \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\" successfully"
Sep 4 23:51:37.555129 containerd[1526]: time="2025-09-04T23:51:37.554783585Z" level=info msg="StopPodSandbox for \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\" returns successfully"
Sep 4 23:51:37.555320 containerd[1526]: time="2025-09-04T23:51:37.555213842Z" level=info msg="RemovePodSandbox for \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\""
Sep 4 23:51:37.555320 containerd[1526]: time="2025-09-04T23:51:37.555251403Z" level=info msg="Forcibly stopping sandbox \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\""
Sep 4 23:51:37.555402 containerd[1526]: time="2025-09-04T23:51:37.555306516Z" level=info msg="TearDown network for sandbox \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\" successfully"
Sep 4 23:51:37.562778 containerd[1526]: time="2025-09-04T23:51:37.562721765Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:37.562895 containerd[1526]: time="2025-09-04T23:51:37.562800743Z" level=info msg="RemovePodSandbox \"a7c7fb236cd371e7229c83b53ee07ab279f0bf5ee5e26c1c2dc3663da36663f0\" returns successfully"
Sep 4 23:51:37.563393 containerd[1526]: time="2025-09-04T23:51:37.563268831Z" level=info msg="StopPodSandbox for \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\""
Sep 4 23:51:37.563393 containerd[1526]: time="2025-09-04T23:51:37.563333843Z" level=info msg="TearDown network for sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" successfully"
Sep 4 23:51:37.563393 containerd[1526]: time="2025-09-04T23:51:37.563344564Z" level=info msg="StopPodSandbox for \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" returns successfully"
Sep 4 23:51:37.564872 containerd[1526]: time="2025-09-04T23:51:37.563678851Z" level=info msg="RemovePodSandbox for \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\""
Sep 4 23:51:37.564872 containerd[1526]: time="2025-09-04T23:51:37.563724577Z" level=info msg="Forcibly stopping sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\""
Sep 4 23:51:37.564872 containerd[1526]: time="2025-09-04T23:51:37.563765614Z" level=info msg="TearDown network for sandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" successfully"
Sep 4 23:51:37.568868 containerd[1526]: time="2025-09-04T23:51:37.568833858Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:37.568939 containerd[1526]: time="2025-09-04T23:51:37.568887769Z" level=info msg="RemovePodSandbox \"dd061e4255997a7cca457b80cd958a6cefe67ab55e651d5bfbe4685dde69f192\" returns successfully"
Sep 4 23:51:54.161388 kubelet[2648]: E0904 23:51:54.161115 2648 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53630->10.0.0.2:2379: read: connection timed out"
Sep 4 23:51:54.161161 systemd[1]: cri-containerd-896ada36bdae8a79273f035e3978150b7fe5b976671d5e79d085df4977973030.scope: Deactivated successfully.
Sep 4 23:51:54.163164 systemd[1]: cri-containerd-896ada36bdae8a79273f035e3978150b7fe5b976671d5e79d085df4977973030.scope: Consumed 1.883s CPU time, 31.1M memory peak, 12.9M read from disk.
Sep 4 23:51:54.172812 systemd[1]: cri-containerd-824041afd08a458be4d02bab8897c4da815c07846010005ffd85aad11c0dd55f.scope: Deactivated successfully.
Sep 4 23:51:54.173044 systemd[1]: cri-containerd-824041afd08a458be4d02bab8897c4da815c07846010005ffd85aad11c0dd55f.scope: Consumed 3.397s CPU time, 71.3M memory peak, 24.9M read from disk.
Sep 4 23:51:54.190234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-896ada36bdae8a79273f035e3978150b7fe5b976671d5e79d085df4977973030-rootfs.mount: Deactivated successfully.
Sep 4 23:51:54.197894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-824041afd08a458be4d02bab8897c4da815c07846010005ffd85aad11c0dd55f-rootfs.mount: Deactivated successfully.
Sep 4 23:51:54.210418 containerd[1526]: time="2025-09-04T23:51:54.210339032Z" level=info msg="shim disconnected" id=896ada36bdae8a79273f035e3978150b7fe5b976671d5e79d085df4977973030 namespace=k8s.io
Sep 4 23:51:54.210418 containerd[1526]: time="2025-09-04T23:51:54.210388665Z" level=warning msg="cleaning up after shim disconnected" id=896ada36bdae8a79273f035e3978150b7fe5b976671d5e79d085df4977973030 namespace=k8s.io
Sep 4 23:51:54.210418 containerd[1526]: time="2025-09-04T23:51:54.210397342Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:51:54.211127 containerd[1526]: time="2025-09-04T23:51:54.210579424Z" level=info msg="shim disconnected" id=824041afd08a458be4d02bab8897c4da815c07846010005ffd85aad11c0dd55f namespace=k8s.io
Sep 4 23:51:54.211127 containerd[1526]: time="2025-09-04T23:51:54.211031492Z" level=warning msg="cleaning up after shim disconnected" id=824041afd08a458be4d02bab8897c4da815c07846010005ffd85aad11c0dd55f namespace=k8s.io
Sep 4 23:51:54.211127 containerd[1526]: time="2025-09-04T23:51:54.211043945Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:51:55.008843 kubelet[2648]: I0904 23:51:55.008800 2648 scope.go:117] "RemoveContainer" containerID="824041afd08a458be4d02bab8897c4da815c07846010005ffd85aad11c0dd55f"
Sep 4 23:51:55.011230 kubelet[2648]: I0904 23:51:55.011087 2648 scope.go:117] "RemoveContainer" containerID="896ada36bdae8a79273f035e3978150b7fe5b976671d5e79d085df4977973030"
Sep 4 23:51:55.014414 containerd[1526]: time="2025-09-04T23:51:55.014366488Z" level=info msg="CreateContainer within sandbox \"6126154b2d160e970fc0778a51ebad2ab805a86b5be35b61a37ce43039b837f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 23:51:55.014832 containerd[1526]: time="2025-09-04T23:51:55.014383881Z" level=info msg="CreateContainer within sandbox \"aeebfd954ef40b0e33ef736923c151b738ec9fc14f661760831a3feacf75140e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 23:51:55.031443 containerd[1526]: time="2025-09-04T23:51:55.031192989Z" level=info msg="CreateContainer within sandbox \"aeebfd954ef40b0e33ef736923c151b738ec9fc14f661760831a3feacf75140e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e241cf470403edd4c6b7a004413dc7226ea76a887b6bd6b4449eb34197188ebb\""
Sep 4 23:51:55.031799 containerd[1526]: time="2025-09-04T23:51:55.031734224Z" level=info msg="StartContainer for \"e241cf470403edd4c6b7a004413dc7226ea76a887b6bd6b4449eb34197188ebb\""
Sep 4 23:51:55.033328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260279794.mount: Deactivated successfully.
Sep 4 23:51:55.033691 containerd[1526]: time="2025-09-04T23:51:55.033662613Z" level=info msg="CreateContainer within sandbox \"6126154b2d160e970fc0778a51ebad2ab805a86b5be35b61a37ce43039b837f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cdf61edeb986e7d7591fb0627e67dc7b97c76f7ab6c6fa53d54f1e5577778a64\""
Sep 4 23:51:55.034021 containerd[1526]: time="2025-09-04T23:51:55.033987474Z" level=info msg="StartContainer for \"cdf61edeb986e7d7591fb0627e67dc7b97c76f7ab6c6fa53d54f1e5577778a64\""
Sep 4 23:51:55.063076 systemd[1]: Started cri-containerd-e241cf470403edd4c6b7a004413dc7226ea76a887b6bd6b4449eb34197188ebb.scope - libcontainer container e241cf470403edd4c6b7a004413dc7226ea76a887b6bd6b4449eb34197188ebb.
Sep 4 23:51:55.070134 systemd[1]: Started cri-containerd-cdf61edeb986e7d7591fb0627e67dc7b97c76f7ab6c6fa53d54f1e5577778a64.scope - libcontainer container cdf61edeb986e7d7591fb0627e67dc7b97c76f7ab6c6fa53d54f1e5577778a64.
Sep 4 23:51:55.104605 containerd[1526]: time="2025-09-04T23:51:55.104543053Z" level=info msg="StartContainer for \"e241cf470403edd4c6b7a004413dc7226ea76a887b6bd6b4449eb34197188ebb\" returns successfully"
Sep 4 23:51:55.110968 containerd[1526]: time="2025-09-04T23:51:55.110941531Z" level=info msg="StartContainer for \"cdf61edeb986e7d7591fb0627e67dc7b97c76f7ab6c6fa53d54f1e5577778a64\" returns successfully"
Sep 4 23:51:59.148139 kubelet[2648]: E0904 23:51:59.147029 2648 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53444->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-2-2-n-de0727ed16.186239648eeb25f5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-2-2-n-de0727ed16,UID:8cbee3914a56c0915a68f1dac9656bc2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-2-2-n-de0727ed16,},FirstTimestamp:2025-09-04 23:51:48.685202933 +0000 UTC m=+191.257230631,LastTimestamp:2025-09-04 23:51:48.685202933 +0000 UTC m=+191.257230631,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-2-n-de0727ed16,}"