Mar 14 00:15:26.157225 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:15:26.157265 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:15:26.157290 kernel: BIOS-provided physical RAM map:
Mar 14 00:15:26.157333 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 14 00:15:26.157346 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Mar 14 00:15:26.157359 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Mar 14 00:15:26.157369 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Mar 14 00:15:26.157378 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Mar 14 00:15:26.157386 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Mar 14 00:15:26.157395 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Mar 14 00:15:26.157404 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Mar 14 00:15:26.157419 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Mar 14 00:15:26.157427 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Mar 14 00:15:26.157436 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Mar 14 00:15:26.157447 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 14 00:15:26.157456 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 14 00:15:26.157470 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 14 00:15:26.157479 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Mar 14 00:15:26.157488 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 14 00:15:26.157497 kernel: NX (Execute Disable) protection: active
Mar 14 00:15:26.157506 kernel: APIC: Static calls initialized
Mar 14 00:15:26.157515 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 14 00:15:26.157525 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e01b198
Mar 14 00:15:26.157534 kernel: efi: Remove mem135: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 14 00:15:26.157562 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 14 00:15:26.157572 kernel: SMBIOS 3.0.0 present.
Mar 14 00:15:26.157581 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Mar 14 00:15:26.157590 kernel: Hypervisor detected: KVM
Mar 14 00:15:26.157604 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:15:26.157614 kernel: kvm-clock: using sched offset of 12609509310 cycles
Mar 14 00:15:26.157623 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:15:26.157639 kernel: tsc: Detected 2399.998 MHz processor
Mar 14 00:15:26.157649 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:15:26.157659 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:15:26.157668 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Mar 14 00:15:26.157678 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 14 00:15:26.157687 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:15:26.157702 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Mar 14 00:15:26.157713 kernel: Using GB pages for direct mapping
Mar 14 00:15:26.157723 kernel: Secure boot disabled
Mar 14 00:15:26.157738 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:15:26.157748 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Mar 14 00:15:26.157758 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 14 00:15:26.157768 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:15:26.157782 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:15:26.157792 kernel: ACPI: FACS 0x000000007FBDD000 000040
Mar 14 00:15:26.157802 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:15:26.157812 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:15:26.157822 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:15:26.157831 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:15:26.157841 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 14 00:15:26.157856 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Mar 14 00:15:26.157866 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Mar 14 00:15:26.157875 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Mar 14 00:15:26.157885 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Mar 14 00:15:26.157895 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Mar 14 00:15:26.157905 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Mar 14 00:15:26.157915 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Mar 14 00:15:26.157924 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Mar 14 00:15:26.157934 kernel: No NUMA configuration found
Mar 14 00:15:26.157949 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Mar 14 00:15:26.157959 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Mar 14 00:15:26.157969 kernel: Zone ranges:
Mar 14 00:15:26.157979 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:15:26.157989 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 14 00:15:26.157999 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Mar 14 00:15:26.158008 kernel: Movable zone start for each node
Mar 14 00:15:26.158018 kernel: Early memory node ranges
Mar 14 00:15:26.158028 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 14 00:15:26.158038 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Mar 14 00:15:26.158053 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Mar 14 00:15:26.158063 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Mar 14 00:15:26.158073 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Mar 14 00:15:26.158083 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Mar 14 00:15:26.158093 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:15:26.158103 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 14 00:15:26.158113 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 14 00:15:26.158122 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 14 00:15:26.158132 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Mar 14 00:15:26.158147 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 14 00:15:26.158156 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 14 00:15:26.158166 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:15:26.158176 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 14 00:15:26.158186 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 14 00:15:26.158196 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:15:26.158206 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:15:26.158216 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:15:26.158226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:15:26.158240 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:15:26.158250 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 14 00:15:26.158260 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 14 00:15:26.158270 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 00:15:26.158279 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Mar 14 00:15:26.158289 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:15:26.158299 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:15:26.164480 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 14 00:15:26.164486 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 14 00:15:26.164496 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 14 00:15:26.164501 kernel: pcpu-alloc: [0] 0 1
Mar 14 00:15:26.164506 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 14 00:15:26.164513 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:15:26.164518 kernel: random: crng init done
Mar 14 00:15:26.164523 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:15:26.164528 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:15:26.164533 kernel: Fallback order for Node 0: 0
Mar 14 00:15:26.164551 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Mar 14 00:15:26.164556 kernel: Policy zone: Normal
Mar 14 00:15:26.164561 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:15:26.164566 kernel: software IO TLB: area num 2.
Mar 14 00:15:26.164571 kernel: Memory: 3827772K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 263192K reserved, 0K cma-reserved)
Mar 14 00:15:26.164576 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:15:26.164581 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:15:26.164586 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:15:26.164591 kernel: Dynamic Preempt: voluntary
Mar 14 00:15:26.164599 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:15:26.164608 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:15:26.164614 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:15:26.164619 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:15:26.164636 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:15:26.164643 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:15:26.164648 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:15:26.164654 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:15:26.164659 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 14 00:15:26.164664 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:15:26.164669 kernel: Console: colour dummy device 80x25
Mar 14 00:15:26.164674 kernel: printk: console [tty0] enabled
Mar 14 00:15:26.164680 kernel: printk: console [ttyS0] enabled
Mar 14 00:15:26.164687 kernel: ACPI: Core revision 20230628
Mar 14 00:15:26.164693 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 14 00:15:26.164698 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:15:26.164703 kernel: x2apic enabled
Mar 14 00:15:26.164709 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:15:26.164717 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 14 00:15:26.164722 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 14 00:15:26.164730 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
Mar 14 00:15:26.164737 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 14 00:15:26.164744 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 14 00:15:26.164752 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 14 00:15:26.164757 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:15:26.164762 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Mar 14 00:15:26.164769 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 14 00:15:26.164775 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 14 00:15:26.164780 kernel: active return thunk: srso_alias_return_thunk
Mar 14 00:15:26.164785 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Mar 14 00:15:26.164790 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 14 00:15:26.164795 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:15:26.164800 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:15:26.164805 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:15:26.164810 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:15:26.164818 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 14 00:15:26.164823 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 14 00:15:26.164828 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 14 00:15:26.164833 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 14 00:15:26.164838 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:15:26.164844 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 14 00:15:26.164849 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 14 00:15:26.164854 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 14 00:15:26.164860 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Mar 14 00:15:26.164867 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Mar 14 00:15:26.164872 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:15:26.164877 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:15:26.164883 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:15:26.164888 kernel: landlock: Up and running.
Mar 14 00:15:26.164893 kernel: SELinux: Initializing.
Mar 14 00:15:26.164898 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:15:26.164903 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:15:26.164908 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Mar 14 00:15:26.164916 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:15:26.164921 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:15:26.164926 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:15:26.164932 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 14 00:15:26.164937 kernel: ... version: 0
Mar 14 00:15:26.164942 kernel: ... bit width: 48
Mar 14 00:15:26.164947 kernel: ... generic registers: 6
Mar 14 00:15:26.164952 kernel: ... value mask: 0000ffffffffffff
Mar 14 00:15:26.164957 kernel: ... max period: 00007fffffffffff
Mar 14 00:15:26.164965 kernel: ... fixed-purpose events: 0
Mar 14 00:15:26.164970 kernel: ... event mask: 000000000000003f
Mar 14 00:15:26.164975 kernel: signal: max sigframe size: 3376
Mar 14 00:15:26.164980 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:15:26.164985 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:15:26.164991 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:15:26.164996 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:15:26.165001 kernel: .... node #0, CPUs: #1
Mar 14 00:15:26.165006 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:15:26.165014 kernel: smpboot: Max logical packages: 1
Mar 14 00:15:26.165019 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS)
Mar 14 00:15:26.165024 kernel: devtmpfs: initialized
Mar 14 00:15:26.165029 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:15:26.165034 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Mar 14 00:15:26.165039 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:15:26.165045 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:15:26.165050 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:15:26.165055 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:15:26.165062 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:15:26.165067 kernel: audit: type=2000 audit(1773447325.204:1): state=initialized audit_enabled=0 res=1
Mar 14 00:15:26.165072 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:15:26.165078 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:15:26.165083 kernel: cpuidle: using governor menu
Mar 14 00:15:26.165088 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:15:26.165093 kernel: dca service started, version 1.12.1
Mar 14 00:15:26.165098 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Mar 14 00:15:26.165104 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:15:26.165111 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:15:26.165117 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:15:26.165122 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:15:26.165127 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:15:26.165132 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:15:26.165137 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:15:26.165142 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:15:26.165147 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:15:26.165152 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:15:26.165160 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:15:26.165165 kernel: ACPI: Interpreter enabled
Mar 14 00:15:26.165170 kernel: ACPI: PM: (supports S0 S5)
Mar 14 00:15:26.165175 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:15:26.165180 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:15:26.165185 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 00:15:26.165191 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 14 00:15:26.165196 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:15:26.165380 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:15:26.165493 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 14 00:15:26.165607 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 14 00:15:26.165614 kernel: PCI host bridge to bus 0000:00
Mar 14 00:15:26.165715 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:15:26.165806 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:15:26.165895 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:15:26.165987 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Mar 14 00:15:26.166074 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 14 00:15:26.166162 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Mar 14 00:15:26.166252 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:15:26.166378 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 14 00:15:26.166485 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Mar 14 00:15:26.166594 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Mar 14 00:15:26.166694 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Mar 14 00:15:26.166790 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Mar 14 00:15:26.166888 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 14 00:15:26.166985 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 14 00:15:26.167080 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 00:15:26.167187 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:15:26.167289 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Mar 14 00:15:26.168730 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 14 00:15:26.168855 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Mar 14 00:15:26.168964 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 14 00:15:26.169061 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Mar 14 00:15:26.169164 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 14 00:15:26.169265 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Mar 14 00:15:26.172462 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 14 00:15:26.172585 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Mar 14 00:15:26.172692 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 14 00:15:26.172790 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Mar 14 00:15:26.172894 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 14 00:15:26.172989 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Mar 14 00:15:26.173098 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 14 00:15:26.173196 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Mar 14 00:15:26.173297 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:15:26.173420 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Mar 14 00:15:26.173530 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 14 00:15:26.173636 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 14 00:15:26.173743 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 14 00:15:26.173839 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Mar 14 00:15:26.173936 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Mar 14 00:15:26.174039 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 14 00:15:26.174137 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Mar 14 00:15:26.174247 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:15:26.176472 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Mar 14 00:15:26.176598 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Mar 14 00:15:26.176702 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:15:26.176798 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 14 00:15:26.176892 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Mar 14 00:15:26.176989 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Mar 14 00:15:26.177097 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 14 00:15:26.177202 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Mar 14 00:15:26.177333 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 14 00:15:26.177435 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Mar 14 00:15:26.177552 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 14 00:15:26.177654 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Mar 14 00:15:26.177760 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Mar 14 00:15:26.177857 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 14 00:15:26.177956 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Mar 14 00:15:26.178052 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Mar 14 00:15:26.178160 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 14 00:15:26.178260 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Mar 14 00:15:26.180393 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 14 00:15:26.180504 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Mar 14 00:15:26.180625 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 14 00:15:26.180733 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Mar 14 00:15:26.180833 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Mar 14 00:15:26.180929 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 14 00:15:26.181024 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Mar 14 00:15:26.181119 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Mar 14 00:15:26.181229 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 14 00:15:26.181343 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Mar 14 00:15:26.181449 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Mar 14 00:15:26.181574 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 14 00:15:26.181673 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Mar 14 00:15:26.181769 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Mar 14 00:15:26.181776 kernel: acpiphp: Slot [0] registered
Mar 14 00:15:26.181883 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:15:26.181984 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Mar 14 00:15:26.182084 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Mar 14 00:15:26.182188 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:15:26.182284 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 14 00:15:26.184114 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Mar 14 00:15:26.184216 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Mar 14 00:15:26.184223 kernel: acpiphp: Slot [0-2] registered
Mar 14 00:15:26.184395 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 14 00:15:26.184493 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Mar 14 00:15:26.184597 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Mar 14 00:15:26.184608 kernel: acpiphp: Slot [0-3] registered
Mar 14 00:15:26.184704 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 14 00:15:26.184801 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Mar 14 00:15:26.184898 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Mar 14 00:15:26.184904 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:15:26.184910 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:15:26.184915 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:15:26.184920 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:15:26.184928 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 14 00:15:26.184933 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 14 00:15:26.184939 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 14 00:15:26.184944 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 14 00:15:26.184949 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 14 00:15:26.184954 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 14 00:15:26.184959 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 14 00:15:26.184964 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 14 00:15:26.184970 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 14 00:15:26.184978 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 14 00:15:26.184983 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 14 00:15:26.184988 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 14 00:15:26.184993 kernel: iommu: Default domain type: Translated
Mar 14 00:15:26.184998 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:15:26.185004 kernel: efivars: Registered efivars operations
Mar 14 00:15:26.185009 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:15:26.185014 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:15:26.185019 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Mar 14 00:15:26.185027 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Mar 14 00:15:26.185032 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Mar 14 00:15:26.185037 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Mar 14 00:15:26.185134 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 14 00:15:26.185229 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 14 00:15:26.185336 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 00:15:26.185343 kernel: vgaarb: loaded
Mar 14 00:15:26.185349 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 14 00:15:26.185354 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 14 00:15:26.185362 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:15:26.185368 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:15:26.185373 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:15:26.185378 kernel: pnp: PnP ACPI init
Mar 14 00:15:26.185483 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 14 00:15:26.185490 kernel: pnp: PnP ACPI: found 5 devices
Mar 14 00:15:26.185495 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:15:26.185501 kernel: NET: Registered PF_INET protocol family
Mar 14 00:15:26.185522 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:15:26.185529 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:15:26.185535 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:15:26.185549 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:15:26.185554 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:15:26.185560 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:15:26.185565 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:15:26.185571 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:15:26.185576 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:15:26.185585 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:15:26.185687 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Mar 14 00:15:26.185794 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Mar 14 00:15:26.185889 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 14 00:15:26.185986 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 14 00:15:26.186083 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 14 00:15:26.186179 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Mar 14 00:15:26.186278 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Mar 14 00:15:26.191891 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Mar 14 00:15:26.192037 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Mar 14 00:15:26.192142 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 14 00:15:26.192249 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Mar 14 00:15:26.192372 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Mar 14 00:15:26.192474 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 14 00:15:26.192582 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Mar 14 00:15:26.192685 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 14 00:15:26.192783 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Mar 14 00:15:26.192881 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Mar 14 00:15:26.192981 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 14 00:15:26.193077 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Mar 14 00:15:26.193184 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 14 00:15:26.193280 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Mar 14 00:15:26.193397 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Mar 14 00:15:26.193498 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 14 00:15:26.193605 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Mar 14 00:15:26.193701 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Mar 14 00:15:26.193812 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Mar 14 00:15:26.193912 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 14 00:15:26.194012 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Mar 14 00:15:26.194108 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Mar 14 00:15:26.194204 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Mar 14 00:15:26.194718 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 14 00:15:26.194833 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Mar 14 00:15:26.194931 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Mar 14 00:15:26.195027 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Mar 14 00:15:26.195126 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 14 00:15:26.195227 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Mar 14 00:15:26.195951 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Mar 14 00:15:26.196066 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Mar 14 00:15:26.196167 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar
14 00:15:26.196278 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 14 00:15:26.196881 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 14 00:15:26.196987 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Mar 14 00:15:26.197077 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Mar 14 00:15:26.197166 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Mar 14 00:15:26.197271 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Mar 14 00:15:26.197383 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Mar 14 00:15:26.197485 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Mar 14 00:15:26.197600 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Mar 14 00:15:26.197696 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Mar 14 00:15:26.197798 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Mar 14 00:15:26.197899 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Mar 14 00:15:26.197993 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Mar 14 00:15:26.198095 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Mar 14 00:15:26.198192 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Mar 14 00:15:26.198294 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Mar 14 00:15:26.198914 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Mar 14 00:15:26.199050 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Mar 14 00:15:26.199175 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Mar 14 00:15:26.199277 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Mar 14 00:15:26.199387 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Mar 14 00:15:26.199497 kernel: pci_bus 0000:09: resource 0 
[io 0x3000-0x3fff] Mar 14 00:15:26.199609 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Mar 14 00:15:26.199716 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Mar 14 00:15:26.199730 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 14 00:15:26.199739 kernel: PCI: CLS 0 bytes, default 64 Mar 14 00:15:26.199748 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 14 00:15:26.199755 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Mar 14 00:15:26.199761 kernel: Initialise system trusted keyrings Mar 14 00:15:26.199771 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 14 00:15:26.199777 kernel: Key type asymmetric registered Mar 14 00:15:26.199782 kernel: Asymmetric key parser 'x509' registered Mar 14 00:15:26.199788 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 14 00:15:26.199794 kernel: io scheduler mq-deadline registered Mar 14 00:15:26.199802 kernel: io scheduler kyber registered Mar 14 00:15:26.199810 kernel: io scheduler bfq registered Mar 14 00:15:26.199997 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 14 00:15:26.200987 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 14 00:15:26.201135 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 14 00:15:26.201243 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 14 00:15:26.201371 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 14 00:15:26.201470 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 14 00:15:26.201601 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 14 00:15:26.201729 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 14 00:15:26.201838 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 14 00:15:26.201940 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 14 00:15:26.202045 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 14 
00:15:26.202142 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 14 00:15:26.202240 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 14 00:15:26.202365 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 14 00:15:26.202471 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 14 00:15:26.202579 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 14 00:15:26.202587 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 14 00:15:26.202684 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Mar 14 00:15:26.202785 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Mar 14 00:15:26.202792 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 14 00:15:26.202798 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Mar 14 00:15:26.202803 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 14 00:15:26.202809 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 14 00:15:26.202815 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 14 00:15:26.202821 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 14 00:15:26.202826 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 14 00:15:26.202955 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 14 00:15:26.202972 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 14 00:15:26.203069 kernel: rtc_cmos 00:03: registered as rtc0 Mar 14 00:15:26.203169 kernel: rtc_cmos 00:03: setting system clock to 2026-03-14T00:15:25 UTC (1773447325) Mar 14 00:15:26.203262 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 14 00:15:26.203268 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 14 00:15:26.203275 kernel: efifb: probing for efifb Mar 14 00:15:26.203280 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Mar 14 00:15:26.203286 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Mar 14 
00:15:26.203295 kernel: efifb: scrolling: redraw Mar 14 00:15:26.203314 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 14 00:15:26.203320 kernel: Console: switching to colour frame buffer device 160x50 Mar 14 00:15:26.203327 kernel: fb0: EFI VGA frame buffer device Mar 14 00:15:26.203336 kernel: pstore: Using crash dump compression: deflate Mar 14 00:15:26.203345 kernel: pstore: Registered efi_pstore as persistent store backend Mar 14 00:15:26.203354 kernel: NET: Registered PF_INET6 protocol family Mar 14 00:15:26.203362 kernel: Segment Routing with IPv6 Mar 14 00:15:26.203370 kernel: In-situ OAM (IOAM) with IPv6 Mar 14 00:15:26.203379 kernel: NET: Registered PF_PACKET protocol family Mar 14 00:15:26.203386 kernel: Key type dns_resolver registered Mar 14 00:15:26.203395 kernel: IPI shorthand broadcast: enabled Mar 14 00:15:26.203404 kernel: sched_clock: Marking stable (1516012401, 206559756)->(1764075289, -41503132) Mar 14 00:15:26.203413 kernel: registered taskstats version 1 Mar 14 00:15:26.203421 kernel: Loading compiled-in X.509 certificates Mar 14 00:15:26.203428 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec' Mar 14 00:15:26.203438 kernel: Key type .fscrypt registered Mar 14 00:15:26.203446 kernel: Key type fscrypt-provisioning registered Mar 14 00:15:26.203458 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 14 00:15:26.203467 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:15:26.203476 kernel: ima: No architecture policies found
Mar 14 00:15:26.203485 kernel: clk: Disabling unused clocks
Mar 14 00:15:26.203494 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 14 00:15:26.203500 kernel: Write protecting the kernel read-only data: 36864k
Mar 14 00:15:26.203506 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 14 00:15:26.203512 kernel: Run /init as init process
Mar 14 00:15:26.203520 kernel: with arguments:
Mar 14 00:15:26.203532 kernel: /init
Mar 14 00:15:26.203548 kernel: with environment:
Mar 14 00:15:26.203557 kernel: HOME=/
Mar 14 00:15:26.203566 kernel: TERM=linux
Mar 14 00:15:26.203578 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:15:26.203589 systemd[1]: Detected virtualization kvm.
Mar 14 00:15:26.203599 systemd[1]: Detected architecture x86-64.
Mar 14 00:15:26.203611 systemd[1]: Running in initrd.
Mar 14 00:15:26.203620 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:15:26.203628 systemd[1]: Hostname set to .
Mar 14 00:15:26.203636 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:15:26.203645 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:15:26.203654 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:15:26.203660 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:15:26.203667 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:15:26.203680 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:15:26.203689 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:15:26.203698 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:15:26.203709 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:15:26.203719 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:15:26.203729 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:15:26.203739 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:15:26.203751 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:15:26.203760 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:15:26.203773 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:15:26.203782 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:15:26.203789 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:15:26.203795 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:15:26.203801 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:15:26.203807 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:15:26.203815 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:15:26.203821 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:15:26.203827 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:15:26.203835 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:15:26.203844 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:15:26.203854 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:15:26.203863 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:15:26.203873 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:15:26.203882 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:15:26.203894 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:15:26.203900 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:15:26.203906 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:15:26.203912 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:15:26.203918 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:15:26.203925 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:15:26.203934 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:15:26.203968 systemd-journald[188]: Collecting audit messages is disabled.
Mar 14 00:15:26.203988 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:15:26.203994 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:15:26.204000 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:15:26.204006 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:15:26.204012 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:15:26.204019 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:15:26.204025 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:15:26.204032 systemd-journald[188]: Journal started
Mar 14 00:15:26.204048 systemd-journald[188]: Runtime Journal (/run/log/journal/88536a1b4700428ca5c5784bc8bdb727) is 8.0M, max 76.3M, 68.3M free.
Mar 14 00:15:26.206446 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:15:26.170384 systemd-modules-load[189]: Inserted module 'overlay'
Mar 14 00:15:26.209149 systemd-modules-load[189]: Inserted module 'br_netfilter'
Mar 14 00:15:26.210399 kernel: Bridge firewalling registered
Mar 14 00:15:26.211182 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:15:26.211709 dracut-cmdline[208]: dracut-dracut-053
Mar 14 00:15:26.216373 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:15:26.221452 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:15:26.223995 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:15:26.234155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:15:26.240153 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:15:26.246588 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:15:26.283418 systemd-resolved[250]: Positive Trust Anchors:
Mar 14 00:15:26.284035 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:15:26.284059 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:15:26.287184 systemd-resolved[250]: Defaulting to hostname 'linux'.
Mar 14 00:15:26.289257 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:15:26.289750 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:15:26.304341 kernel: SCSI subsystem initialized
Mar 14 00:15:26.312339 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:15:26.320328 kernel: iscsi: registered transport (tcp)
Mar 14 00:15:26.337516 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:15:26.337581 kernel: QLogic iSCSI HBA Driver
Mar 14 00:15:26.396780 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:15:26.402695 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:15:26.438583 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:15:26.438685 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:15:26.441403 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:15:26.505380 kernel: raid6: avx512x4 gen() 18895 MB/s
Mar 14 00:15:26.524387 kernel: raid6: avx512x2 gen() 23787 MB/s
Mar 14 00:15:26.542378 kernel: raid6: avx512x1 gen() 33481 MB/s
Mar 14 00:15:26.560394 kernel: raid6: avx2x4 gen() 45663 MB/s
Mar 14 00:15:26.578358 kernel: raid6: avx2x2 gen() 48178 MB/s
Mar 14 00:15:26.597105 kernel: raid6: avx2x1 gen() 38084 MB/s
Mar 14 00:15:26.597160 kernel: raid6: using algorithm avx2x2 gen() 48178 MB/s
Mar 14 00:15:26.616159 kernel: raid6: .... xor() 37279 MB/s, rmw enabled
Mar 14 00:15:26.616228 kernel: raid6: using avx512x2 recovery algorithm
Mar 14 00:15:26.633365 kernel: xor: automatically using best checksumming function avx
Mar 14 00:15:26.757375 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:15:26.776567 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:15:26.785635 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:15:26.799058 systemd-udevd[407]: Using default interface naming scheme 'v255'.
Mar 14 00:15:26.803080 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:15:26.815142 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:15:26.834264 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Mar 14 00:15:26.875359 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:15:26.881482 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:15:26.984886 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:15:26.995628 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:15:27.030122 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:15:27.034196 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:15:27.035428 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:15:27.038626 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:15:27.045602 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:15:27.070935 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:15:27.108675 kernel: scsi host0: Virtio SCSI HBA
Mar 14 00:15:27.129325 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:15:27.133437 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 14 00:15:27.137142 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:15:27.137238 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:15:27.140184 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:15:27.142154 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:15:27.142292 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:15:27.142880 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:15:27.151375 kernel: ACPI: bus type USB registered
Mar 14 00:15:27.154252 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:15:27.161919 kernel: usbcore: registered new interface driver usbfs
Mar 14 00:15:27.164395 kernel: usbcore: registered new interface driver hub
Mar 14 00:15:27.167348 kernel: libata version 3.00 loaded.
Mar 14 00:15:27.169096 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:15:27.172992 kernel: usbcore: registered new device driver usb
Mar 14 00:15:27.169726 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:15:27.180543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:15:27.197642 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:15:27.210338 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 14 00:15:27.210608 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Mar 14 00:15:27.214045 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Mar 14 00:15:27.214686 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:15:27.236452 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 14 00:15:27.236731 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Mar 14 00:15:27.236898 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Mar 14 00:15:27.237057 kernel: hub 1-0:1.0: USB hub found
Mar 14 00:15:27.237255 kernel: hub 1-0:1.0: 4 ports detected
Mar 14 00:15:27.238494 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Mar 14 00:15:27.238739 kernel: hub 2-0:1.0: USB hub found
Mar 14 00:15:27.239838 kernel: hub 2-0:1.0: 4 ports detected
Mar 14 00:15:27.240025 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:15:27.240038 kernel: ahci 0000:00:1f.2: version 3.0
Mar 14 00:15:27.240712 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 14 00:15:27.237461 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:15:27.247859 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 14 00:15:27.248109 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 14 00:15:27.251342 kernel: scsi host1: ahci
Mar 14 00:15:27.254682 kernel: scsi host2: ahci
Mar 14 00:15:27.254903 kernel: scsi host3: ahci
Mar 14 00:15:27.259667 kernel: scsi host4: ahci
Mar 14 00:15:27.259891 kernel: scsi host5: ahci
Mar 14 00:15:27.264855 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 14 00:15:27.268858 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB)
Mar 14 00:15:27.269083 kernel: scsi host6: ahci
Mar 14 00:15:27.279344 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 51
Mar 14 00:15:27.279408 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 51
Mar 14 00:15:27.279430 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 51
Mar 14 00:15:27.279442 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 14 00:15:27.279710 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 51
Mar 14 00:15:27.279723 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 14 00:15:27.279890 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 51
Mar 14 00:15:27.279902 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 14 00:15:27.280080 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 51
Mar 14 00:15:27.290172 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:15:27.290238 kernel: GPT:17805311 != 160006143
Mar 14 00:15:27.290252 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:15:27.290265 kernel: GPT:17805311 != 160006143
Mar 14 00:15:27.290276 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:15:27.290287 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:15:27.295943 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:15:27.300747 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 14 00:15:27.463434 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Mar 14 00:15:27.607277 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 14 00:15:27.607388 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 14 00:15:27.612368 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 14 00:15:27.614356 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 14 00:15:27.614418 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 14 00:15:27.614449 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 14 00:15:27.629399 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 14 00:15:27.629475 kernel: ata1.00: applying bridge limits
Mar 14 00:15:27.635827 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 14 00:15:27.636036 kernel: ata1.00: configured for UDMA/100
Mar 14 00:15:27.645414 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 14 00:15:27.690119 kernel: usbcore: registered new interface driver usbhid
Mar 14 00:15:27.690198 kernel: usbhid: USB HID core driver
Mar 14 00:15:27.706863 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Mar 14 00:15:27.706938 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Mar 14 00:15:27.745333 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 14 00:15:27.745960 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 14 00:15:27.759365 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (459)
Mar 14 00:15:27.766373 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Mar 14 00:15:27.781367 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (457)
Mar 14 00:15:27.785539 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 14 00:15:27.789606 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 14 00:15:27.793003 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 14 00:15:27.793746 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 14 00:15:27.800575 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:15:27.805671 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 14 00:15:27.813979 disk-uuid[591]: Primary Header is updated.
Mar 14 00:15:27.813979 disk-uuid[591]: Secondary Entries is updated.
Mar 14 00:15:27.813979 disk-uuid[591]: Secondary Header is updated.
Mar 14 00:15:27.828352 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:15:28.839984 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:15:28.840072 disk-uuid[593]: The operation has completed successfully.
Mar 14 00:15:28.914396 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:15:28.914587 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:15:28.940565 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:15:28.945379 sh[607]: Success
Mar 14 00:15:28.969336 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 14 00:15:29.033414 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:15:29.044769 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:15:29.051257 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:15:29.086369 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:15:29.086449 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:15:29.092105 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:15:29.102366 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:15:29.102445 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:15:29.122392 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:15:29.125846 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:15:29.127916 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:15:29.146774 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:15:29.151618 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:15:29.170353 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:15:29.170444 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:15:29.170472 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:15:29.183911 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:15:29.183963 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:15:29.204605 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:15:29.206584 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:15:29.215765 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:15:29.224666 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:15:29.303549 ignition[709]: Ignition 2.19.0
Mar 14 00:15:29.304136 ignition[709]: Stage: fetch-offline
Mar 14 00:15:29.304186 ignition[709]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:15:29.304204 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:15:29.305322 ignition[709]: parsed url from cmdline: ""
Mar 14 00:15:29.305330 ignition[709]: no config URL provided
Mar 14 00:15:29.305336 ignition[709]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:15:29.305347 ignition[709]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:15:29.305352 ignition[709]: failed to fetch config: resource requires networking
Mar 14 00:15:29.305529 ignition[709]: Ignition finished successfully
Mar 14 00:15:29.308241 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:15:29.315112 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:15:29.322541 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:15:29.342717 systemd-networkd[793]: lo: Link UP
Mar 14 00:15:29.342733 systemd-networkd[793]: lo: Gained carrier
Mar 14 00:15:29.347178 systemd-networkd[793]: Enumeration completed
Mar 14 00:15:29.347642 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:15:29.348369 systemd[1]: Reached target network.target - Network.
Mar 14 00:15:29.348463 systemd-networkd[793]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:15:29.348471 systemd-networkd[793]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:15:29.350707 systemd-networkd[793]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
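Both NICs match `/usr/lib/systemd/network/zz-default.network`. The unit's contents are not shown in this log; a plausible catch-all file (assumed, not taken from this system) matches any interface name and enables DHCP, which is consistent with the DHCPv4 leases acquired moments later. A sketch that validates such a unit with Python's INI parser:

```python
import configparser

# Assumed contents of a catch-all unit like zz-default.network:
# matches every interface name and turns on DHCP.
network_unit = """\
[Match]
Name=*

[Network]
DHCP=yes
"""

cfg = configparser.ConfigParser()
cfg.read_string(network_unit)
print(cfg["Match"]["Name"], cfg["Network"]["DHCP"])
```

Because the match is on interface name (`Name=*`), networkd warns that the result is "based on potentially unpredictable interface name"; matching on MAC or path would avoid that.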
Mar 14 00:15:29.350719 systemd-networkd[793]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:15:29.352364 systemd-networkd[793]: eth0: Link UP
Mar 14 00:15:29.352372 systemd-networkd[793]: eth0: Gained carrier
Mar 14 00:15:29.352384 systemd-networkd[793]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:15:29.355448 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:15:29.357276 systemd-networkd[793]: eth1: Link UP
Mar 14 00:15:29.357284 systemd-networkd[793]: eth1: Gained carrier
Mar 14 00:15:29.357297 systemd-networkd[793]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:15:29.365684 ignition[796]: Ignition 2.19.0
Mar 14 00:15:29.365690 ignition[796]: Stage: fetch
Mar 14 00:15:29.365870 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:15:29.365880 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:15:29.365990 ignition[796]: parsed url from cmdline: ""
Mar 14 00:15:29.365995 ignition[796]: no config URL provided
Mar 14 00:15:29.366000 ignition[796]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:15:29.366007 ignition[796]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:15:29.366022 ignition[796]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 14 00:15:29.366574 ignition[796]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 14 00:15:29.387359 systemd-networkd[793]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 14 00:15:29.412416 systemd-networkd[793]: eth0: DHCPv4 address 204.168.148.110/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 14 00:15:29.566811 ignition[796]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 14 00:15:29.577965 ignition[796]: GET result: OK
Mar 14 00:15:29.578133 ignition[796]: parsing config with SHA512: 71df4b4a7f18438768a24391237b68c0b3165fe0614cbd284f5fb355f4cc9dabadebcbf18278854dae8c4d31c973d07a30227066189b09b9534139192cb2b452
Mar 14 00:15:29.583909 unknown[796]: fetched base config from "system"
Mar 14 00:15:29.583928 unknown[796]: fetched base config from "system"
Mar 14 00:15:29.584523 ignition[796]: fetch: fetch complete
Mar 14 00:15:29.583940 unknown[796]: fetched user config from "hetzner"
Mar 14 00:15:29.584534 ignition[796]: fetch: fetch passed
Mar 14 00:15:29.589725 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:15:29.584647 ignition[796]: Ignition finished successfully
Mar 14 00:15:29.601599 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:15:29.628228 ignition[803]: Ignition 2.19.0
Mar 14 00:15:29.628248 ignition[803]: Stage: kargs
Mar 14 00:15:29.628553 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:15:29.628575 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:15:29.630025 ignition[803]: kargs: kargs passed
Mar 14 00:15:29.634947 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:15:29.630110 ignition[803]: Ignition finished successfully
Mar 14 00:15:29.642623 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:15:29.675285 ignition[809]: Ignition 2.19.0
Mar 14 00:15:29.675334 ignition[809]: Stage: disks
Mar 14 00:15:29.675614 ignition[809]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:15:29.680235 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:15:29.675643 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:15:29.676936 ignition[809]: disks: disks passed
Mar 14 00:15:29.682951 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
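Ignition logs a SHA512 fingerprint of the userdata it fetched from the metadata service. That fingerprint is a plain SHA-512 digest of the raw config bytes; a sketch (the example payload below is hypothetical, since the real userdata is not included in this log):

```python
import hashlib

def config_fingerprint(raw_config: bytes) -> str:
    """Return the SHA512 hex digest Ignition would report for a fetched config."""
    return hashlib.sha512(raw_config).hexdigest()

# Hypothetical userdata payload, for illustration only.
example = b'{"ignition": {"version": "3.3.0"}}'
digest = config_fingerprint(example)
print(digest[:16], len(digest))
```

Comparing this digest across boots (or against a locally computed hash of the userdata you uploaded) is a quick way to confirm the machine received the config you intended.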
Mar 14 00:15:29.677015 ignition[809]: Ignition finished successfully
Mar 14 00:15:29.684414 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:15:29.685963 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:15:29.687437 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:15:29.688914 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:15:29.696695 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:15:29.726123 systemd-fsck[818]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 14 00:15:29.730928 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:15:29.739483 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:15:29.861755 kernel: EXT4-fs (sda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:15:29.861914 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:15:29.862760 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:15:29.868391 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:15:29.871379 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:15:29.875429 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 14 00:15:29.875788 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:15:29.875811 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:15:29.886199 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:15:29.895692 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (826)
Mar 14 00:15:29.900727 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:15:29.900750 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:15:29.901324 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:15:29.901339 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:15:29.916475 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:15:29.916576 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:15:29.929840 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:15:29.951005 coreos-metadata[828]: Mar 14 00:15:29.950 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 14 00:15:29.954220 coreos-metadata[828]: Mar 14 00:15:29.953 INFO Fetch successful
Mar 14 00:15:29.959122 coreos-metadata[828]: Mar 14 00:15:29.957 INFO wrote hostname ci-4081-3-6-n-8ea3e741de to /sysroot/etc/hostname
Mar 14 00:15:29.959721 initrd-setup-root[853]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:15:29.960231 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 14 00:15:29.968105 initrd-setup-root[861]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:15:29.976467 initrd-setup-root[868]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:15:29.981349 initrd-setup-root[875]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:15:30.112224 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:15:30.116371 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:15:30.118423 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:15:30.138797 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:15:30.145539 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:15:30.151788 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:15:30.187889 ignition[945]: INFO : Ignition 2.19.0
Mar 14 00:15:30.187889 ignition[945]: INFO : Stage: mount
Mar 14 00:15:30.188937 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:15:30.188937 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:15:30.190070 ignition[945]: INFO : mount: mount passed
Mar 14 00:15:30.190453 ignition[945]: INFO : Ignition finished successfully
Mar 14 00:15:30.192464 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:15:30.196380 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:15:30.226421 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:15:30.241363 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (954)
Mar 14 00:15:30.250361 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:15:30.250409 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:15:30.250431 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:15:30.261418 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:15:30.261480 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:15:30.270099 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:15:30.310006 ignition[970]: INFO : Ignition 2.19.0
Mar 14 00:15:30.310006 ignition[970]: INFO : Stage: files
Mar 14 00:15:30.313107 ignition[970]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:15:30.313107 ignition[970]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:15:30.313107 ignition[970]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:15:30.313107 ignition[970]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:15:30.313107 ignition[970]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:15:30.319222 ignition[970]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:15:30.319222 ignition[970]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:15:30.319222 ignition[970]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:15:30.318275 unknown[970]: wrote ssh authorized keys file for user: core
Mar 14 00:15:30.323494 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:15:30.323494 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:15:30.518903 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:15:30.592495 systemd-networkd[793]: eth1: Gained IPv6LL
Mar 14 00:15:30.656640 systemd-networkd[793]: eth0: Gained IPv6LL
Mar 14 00:15:30.834133 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:15:30.834133 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:15:30.836865 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 14 00:15:31.135466 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 14 00:15:31.244318 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:15:31.245380 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:15:31.245380 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:15:31.245380 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:15:31.245380 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:15:31.245380 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:15:31.245380 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:15:31.245380 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:15:31.245380 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:15:31.252332 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:15:31.252332 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:15:31.252332 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:15:31.252332 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:15:31.252332 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:15:31.252332 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 14 00:15:31.596642 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 14 00:15:31.887201 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:15:31.887201 ignition[970]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 14 00:15:31.890209 ignition[970]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:15:31.890209 ignition[970]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:15:31.890209 ignition[970]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 14 00:15:31.890209 ignition[970]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 14 00:15:31.890209 ignition[970]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 14 00:15:31.890209 ignition[970]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 14 00:15:31.890209 ignition[970]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 14 00:15:31.890209 ignition[970]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:15:31.890209 ignition[970]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:15:31.890209 ignition[970]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:15:31.890209 ignition[970]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:15:31.890209 ignition[970]: INFO : files: files passed
Mar 14 00:15:31.915494 ignition[970]: INFO : Ignition finished successfully
Mar 14 00:15:31.894883 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:15:31.905632 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:15:31.909738 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:15:31.925980 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:15:31.926248 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:15:31.941002 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:15:31.941002 initrd-setup-root-after-ignition[1000]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:15:31.943913 initrd-setup-root-after-ignition[1004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:15:31.946948 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
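The file, link, and unit operations in the files stage above are driven by the Ignition config fetched earlier. A hypothetical sketch of userdata that would produce operations of this shape (field names follow the Ignition v3 spec; the real config is not included in this log, and the spec version shown is assumed):

```python
import json

# Hypothetical Ignition userdata sketch, for illustration only.
config = {
    "ignition": {"version": "3.3.0"},
    "storage": {
        "files": [{
            # Mirrors op(4) in the log: fetch a release tarball into /opt/bin.
            "path": "/opt/bin/cilium.tar.gz",
            "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"},
        }],
        "links": [{
            # Mirrors op(a): symlink enabling the kubernetes sysext image.
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw",
        }],
    },
    "systemd": {
        # Mirrors op(c)/op(10): install and enable a custom unit.
        "units": [{"name": "prepare-helm.service", "enabled": True}],
    },
}

print(json.dumps(config, indent=2))
```

Paths under `storage` are interpreted relative to the final root, which is why the log shows them being written under the `/sysroot` prefix during the initrd.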
Mar 14 00:15:31.950365 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:15:31.959677 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:15:32.007378 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:15:32.007603 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:15:32.010438 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:15:32.012805 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:15:32.014044 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:15:32.019536 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:15:32.051538 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:15:32.064664 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:15:32.083731 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:15:32.085110 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:15:32.086998 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:15:32.088604 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:15:32.088889 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:15:32.091106 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:15:32.092948 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:15:32.094568 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:15:32.096094 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:15:32.097642 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:15:32.099348 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:15:32.100915 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:15:32.102772 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:15:32.104551 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:15:32.106432 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:15:32.107997 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:15:32.108203 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:15:32.110604 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:15:32.112391 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:15:32.114057 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:15:32.114367 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:15:32.115851 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:15:32.116050 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:15:32.118162 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:15:32.118481 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:15:32.119944 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:15:32.120118 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:15:32.121581 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 14 00:15:32.121775 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 14 00:15:32.129723 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:15:32.131344 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:15:32.132530 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:15:32.135650 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:15:32.141423 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:15:32.141731 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:15:32.145700 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:15:32.146199 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:15:32.156597 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:15:32.157541 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:15:32.168785 ignition[1025]: INFO : Ignition 2.19.0
Mar 14 00:15:32.170423 ignition[1025]: INFO : Stage: umount
Mar 14 00:15:32.170423 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:15:32.170423 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:15:32.175074 ignition[1025]: INFO : umount: umount passed
Mar 14 00:15:32.175074 ignition[1025]: INFO : Ignition finished successfully
Mar 14 00:15:32.179909 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:15:32.181109 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:15:32.183237 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:15:32.184415 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:15:32.186503 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:15:32.186600 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:15:32.188489 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:15:32.188588 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:15:32.189373 systemd[1]: Stopped target network.target - Network.
Mar 14 00:15:32.190021 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:15:32.190095 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:15:32.190938 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:15:32.191633 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:15:32.196475 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:15:32.197363 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:15:32.200274 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:15:32.203421 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:15:32.205110 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:15:32.206550 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:15:32.206660 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:15:32.208276 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:15:32.208411 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:15:32.209985 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:15:32.210100 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:15:32.214450 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:15:32.215208 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:15:32.220154 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:15:32.222375 systemd-networkd[793]: eth0: DHCPv6 lease lost
Mar 14 00:15:32.228375 systemd-networkd[793]: eth1: DHCPv6 lease lost
Mar 14 00:15:32.231424 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:15:32.231660 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:15:32.236850 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:15:32.238573 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:15:32.241791 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:15:32.242581 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:15:32.247450 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:15:32.248066 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:15:32.248146 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:15:32.248862 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:15:32.248928 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:15:32.252214 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:15:32.252291 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:15:32.253731 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:15:32.253804 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:15:32.256987 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:15:32.261185 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:15:32.261447 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:15:32.270693 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:15:32.270853 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:15:32.284956 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:15:32.286172 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:15:32.287874 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:15:32.288121 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:15:32.289911 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:15:32.290030 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:15:32.291279 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:15:32.291477 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:15:32.292763 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:15:32.292847 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:15:32.294941 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:15:32.295020 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:15:32.297106 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:15:32.297183 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:15:32.308682 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:15:32.310062 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:15:32.310164 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:15:32.314098 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:15:32.314222 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:15:32.322176 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:15:32.322376 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:15:32.325248 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:15:32.331536 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:15:32.351148 systemd[1]: Switching root.
Mar 14 00:15:32.402806 systemd-journald[188]: Journal stopped
Mar 14 00:15:33.955118 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:15:33.955190 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:15:33.955201 kernel: SELinux: policy capability open_perms=1
Mar 14 00:15:33.955214 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:15:33.955222 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:15:33.955231 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:15:33.955242 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:15:33.955254 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:15:33.955263 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:15:33.955271 kernel: audit: type=1403 audit(1773447332.611:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:15:33.955286 systemd[1]: Successfully loaded SELinux policy in 81ms.
Mar 14 00:15:33.955317 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.187ms.
Mar 14 00:15:33.955327 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:15:33.955337 systemd[1]: Detected virtualization kvm.
Mar 14 00:15:33.955346 systemd[1]: Detected architecture x86-64.
Mar 14 00:15:33.955354 systemd[1]: Detected first boot.
Mar 14 00:15:33.955363 systemd[1]: Hostname set to .
Mar 14 00:15:33.955372 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:15:33.955381 zram_generator::config[1069]: No configuration found.
Mar 14 00:15:33.955397 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:15:33.955406 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:15:33.955415 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:15:33.955424 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:15:33.955433 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:15:33.955446 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:15:33.955455 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:15:33.955466 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:15:33.955475 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:15:33.955486 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:15:33.955494 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:15:33.955506 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:15:33.955516 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:15:33.955525 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:15:33.955533 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:15:33.955543 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:15:33.955552 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:15:33.955563 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:15:33.955572 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:15:33.955580 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:15:33.955589 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:15:33.955598 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:15:33.955607 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:15:33.955618 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:15:33.955627 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:15:33.955644 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:15:33.955653 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:15:33.955662 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:15:33.955671 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:15:33.955680 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:15:33.955689 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:15:33.955698 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:15:33.955707 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:15:33.955719 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:15:33.955727 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:15:33.955736 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:15:33.955748 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:15:33.955757 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:15:33.955766 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:15:33.955775 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:15:33.955784 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:15:33.955793 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:15:33.955804 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:15:33.955812 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:15:33.955821 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:15:33.955830 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:15:33.955839 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:15:33.955848 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:15:33.955857 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:15:33.955865 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:15:33.955876 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:15:33.955885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:15:33.955893 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:15:33.955903 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:15:33.955914 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:15:33.955923 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:15:33.955931 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:15:33.955943 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:15:33.955952 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:15:33.955960 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:15:33.955969 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:15:33.955978 kernel: fuse: init (API version 7.39)
Mar 14 00:15:33.955989 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:15:33.955998 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:15:33.956007 systemd[1]: Stopped verity-setup.service.
Mar 14 00:15:33.956015 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:15:33.956024 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:15:33.956033 kernel: loop: module loaded
Mar 14 00:15:33.956057 systemd-journald[1149]: Collecting audit messages is disabled.
Mar 14 00:15:33.956073 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:15:33.956085 systemd-journald[1149]: Journal started
Mar 14 00:15:33.956101 systemd-journald[1149]: Runtime Journal (/run/log/journal/88536a1b4700428ca5c5784bc8bdb727) is 8.0M, max 76.3M, 68.3M free.
Mar 14 00:15:33.579222 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:15:33.597755 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 14 00:15:33.598728 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:15:33.959617 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:15:33.961223 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:15:33.962995 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:15:33.967455 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:15:33.968369 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:15:33.969648 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:15:33.970891 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:15:33.971529 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:15:33.972280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:15:33.972757 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:15:33.975060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:15:33.975218 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:15:33.976643 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:15:33.976802 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:15:33.977588 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:15:33.978345 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:15:33.978944 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:15:33.979888 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:15:33.981156 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:15:33.992351 kernel: ACPI: bus type drm_connector registered
Mar 14 00:15:33.993719 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:15:33.994377 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:15:33.994510 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:15:33.998813 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:15:34.006391 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:15:34.011872 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:15:34.012699 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:15:34.012728 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:15:34.013987 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:15:34.018398 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:15:34.027528 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:15:34.028374 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:15:34.032411 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:15:34.034443 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:15:34.035132 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:15:34.037381 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:15:34.039375 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:15:34.040443 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:15:34.044179 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:15:34.046223 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:15:34.049357 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:15:34.050453 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:15:34.051010 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:15:34.060738 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:15:34.062011 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:15:34.068460 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:15:34.084456 systemd-journald[1149]: Time spent on flushing to /var/log/journal/88536a1b4700428ca5c5784bc8bdb727 is 55.424ms for 1184 entries.
Mar 14 00:15:34.084456 systemd-journald[1149]: System Journal (/var/log/journal/88536a1b4700428ca5c5784bc8bdb727) is 8.0M, max 584.8M, 576.8M free.
Mar 14 00:15:34.176453 systemd-journald[1149]: Received client request to flush runtime journal.
Mar 14 00:15:34.176485 kernel: loop0: detected capacity change from 0 to 8
Mar 14 00:15:34.176499 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:15:34.176510 kernel: loop1: detected capacity change from 0 to 140768
Mar 14 00:15:34.122252 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:15:34.124865 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:15:34.144423 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:15:34.163833 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:15:34.180514 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:15:34.181750 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:15:34.224826 kernel: loop2: detected capacity change from 0 to 142488
Mar 14 00:15:34.235414 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:15:34.243277 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:15:34.248579 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Mar 14 00:15:34.248595 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Mar 14 00:15:34.261323 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:15:34.274446 udevadm[1210]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 14 00:15:34.281333 kernel: loop3: detected capacity change from 0 to 219192
Mar 14 00:15:34.323329 kernel: loop4: detected capacity change from 0 to 8
Mar 14 00:15:34.326351 kernel: loop5: detected capacity change from 0 to 140768
Mar 14 00:15:34.340537 kernel: loop6: detected capacity change from 0 to 142488
Mar 14 00:15:34.357476 kernel: loop7: detected capacity change from 0 to 219192
Mar 14 00:15:34.376948 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 14 00:15:34.377551 (sd-merge)[1214]: Merged extensions into '/usr'.
Mar 14 00:15:34.385175 systemd[1]: Reloading requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:15:34.385260 systemd[1]: Reloading...
Mar 14 00:15:34.463330 zram_generator::config[1239]: No configuration found.
Mar 14 00:15:34.550915 ldconfig[1184]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:15:34.579776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:15:34.616288 systemd[1]: Reloading finished in 229 ms.
Mar 14 00:15:34.646499 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:15:34.647436 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:15:34.650591 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:15:34.657436 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:15:34.661464 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:15:34.666485 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:15:34.682400 systemd[1]: Reloading requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:15:34.682414 systemd[1]: Reloading...
Mar 14 00:15:34.695370 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:15:34.696809 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:15:34.698979 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:15:34.699664 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Mar 14 00:15:34.699930 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Mar 14 00:15:34.708218 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:15:34.710963 systemd-tmpfiles[1285]: Skipping /boot
Mar 14 00:15:34.714098 systemd-udevd[1286]: Using default interface naming scheme 'v255'.
Mar 14 00:15:34.747916 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:15:34.747944 systemd-tmpfiles[1285]: Skipping /boot
Mar 14 00:15:34.772360 zram_generator::config[1314]: No configuration found.
Mar 14 00:15:34.911882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:15:34.945505 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 14 00:15:34.956322 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1339)
Mar 14 00:15:34.956376 kernel: ACPI: button: Power Button [PWRF]
Mar 14 00:15:34.974498 kernel: mousedev: PS/2 mouse device common for all mice
Mar 14 00:15:34.981807 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 14 00:15:34.982785 systemd[1]: Reloading finished in 299 ms.
Mar 14 00:15:34.999995 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:15:35.001704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:15:35.026402 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 14 00:15:35.033016 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:15:35.038396 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:15:35.040863 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 00:15:35.042458 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:15:35.044152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:15:35.047406 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:15:35.049468 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:15:35.049940 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:15:35.052486 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:15:35.057967 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:15:35.069547 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:15:35.087240 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:15:35.087635 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:15:35.097354 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 14 00:15:35.098678 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:15:35.099491 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:15:35.125793 kernel: EDAC MC: Ver: 3.0.0
Mar 14 00:15:35.126826 augenrules[1419]: No rules
Mar 14 00:15:35.130390 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 14 00:15:35.133056 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:15:35.134208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:15:35.134699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:15:35.135937 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:15:35.136352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:15:35.137767 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 14 00:15:35.145949 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 00:15:35.148399 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 14 00:15:35.148586 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 14 00:15:35.161900 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:15:35.162087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:15:35.171467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:15:35.177557 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 14 00:15:35.179797 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 14 00:15:35.183911 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:15:35.184328 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Mar 14 00:15:35.185062 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:15:35.187473 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:15:35.187949 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:15:35.191346 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:15:35.193783 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 14 00:15:35.201803 kernel: Console: switching to colour dummy device 80x25
Mar 14 00:15:35.203452 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 00:15:35.206934 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Mar 14 00:15:35.207125 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 14 00:15:35.207150 kernel: [drm] features: -context_init
Mar 14 00:15:35.207335 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:15:35.209374 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 00:15:35.209779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:15:35.210828 kernel: [drm] number of scanouts: 1
Mar 14 00:15:35.210851 kernel: [drm] number of cap sets: 0
Mar 14 00:15:35.210336 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:15:35.211550 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:15:35.211688 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:15:35.212684 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:15:35.212810 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:15:35.217101 systemd[1]: Finished ensure-sysext.service.
Mar 14 00:15:35.218522 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:15:35.218683 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:15:35.225321 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 14 00:15:35.237876 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 14 00:15:35.238238 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:15:35.238783 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:15:35.243589 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 14 00:15:35.244348 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 00:15:35.246346 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 14 00:15:35.251396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:15:35.251805 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 14 00:15:35.266699 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 14 00:15:35.266750 kernel: Console: switching to colour frame buffer device 160x50
Mar 14 00:15:35.287975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:15:35.288243 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:15:35.290242 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 14 00:15:35.301458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:15:35.311533 systemd-networkd[1401]: lo: Link UP
Mar 14 00:15:35.311539 systemd-networkd[1401]: lo: Gained carrier
Mar 14 00:15:35.315867 systemd-networkd[1401]: Enumeration completed
Mar 14 00:15:35.315985 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:15:35.319609 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:15:35.319615 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:15:35.320439 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 14 00:15:35.322403 systemd-networkd[1401]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:15:35.322416 systemd-networkd[1401]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:15:35.323358 systemd-networkd[1401]: eth0: Link UP
Mar 14 00:15:35.323364 systemd-networkd[1401]: eth0: Gained carrier
Mar 14 00:15:35.323374 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:15:35.326185 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:15:35.329541 systemd-networkd[1401]: eth1: Link UP
Mar 14 00:15:35.329588 systemd-networkd[1401]: eth1: Gained carrier
Mar 14 00:15:35.329624 systemd-networkd[1401]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:15:35.335494 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 00:15:35.352166 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:15:35.355045 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 14 00:15:35.355173 systemd[1]: Reached target time-set.target - System Time Set.
Mar 14 00:15:35.361227 systemd-resolved[1406]: Positive Trust Anchors:
Mar 14 00:15:35.361244 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:15:35.361282 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:15:35.364418 systemd-networkd[1401]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 14 00:15:35.365347 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection.
Mar 14 00:15:35.365713 systemd-resolved[1406]: Using system hostname 'ci-4081-3-6-n-8ea3e741de'.
Mar 14 00:15:35.367575 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:15:35.369354 systemd[1]: Reached target network.target - Network.
Mar 14 00:15:35.369406 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:15:35.380011 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 14 00:15:35.380768 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:15:35.384509 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 14 00:15:35.391362 systemd-networkd[1401]: eth0: DHCPv4 address 204.168.148.110/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 14 00:15:35.391500 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:15:35.391932 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection.
Mar 14 00:15:35.402039 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:15:35.403151 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:15:35.403444 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 14 00:15:35.404346 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 14 00:15:35.404603 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 14 00:15:35.404762 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 14 00:15:35.404827 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 14 00:15:35.404884 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 00:15:35.404906 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:15:35.404952 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:15:35.408682 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 14 00:15:35.410549 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 14 00:15:35.417836 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 14 00:15:35.418804 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 14 00:15:35.419543 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 14 00:15:35.421530 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:15:35.421886 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:15:35.422286 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:15:35.422666 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:15:35.431380 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 14 00:15:35.432942 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 14 00:15:35.436471 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 14 00:15:35.439081 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 14 00:15:35.441453 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 14 00:15:35.445088 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 14 00:15:35.448225 jq[1479]: false
Mar 14 00:15:35.452436 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 14 00:15:35.456525 coreos-metadata[1477]: Mar 14 00:15:35.456 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 14 00:15:35.458481 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 14 00:15:35.461444 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 14 00:15:35.464620 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 14 00:15:35.466266 coreos-metadata[1477]: Mar 14 00:15:35.466 INFO Fetch successful
Mar 14 00:15:35.466411 coreos-metadata[1477]: Mar 14 00:15:35.466 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 14 00:15:35.466546 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 14 00:15:35.470010 coreos-metadata[1477]: Mar 14 00:15:35.468 INFO Fetch successful
Mar 14 00:15:35.479983 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 14 00:15:35.480790 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 14 00:15:35.481170 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 14 00:15:35.485452 systemd[1]: Starting update-engine.service - Update Engine...
Mar 14 00:15:35.487809 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 14 00:15:35.499356 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 14 00:15:35.499534 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 14 00:15:35.500691 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 14 00:15:35.501166 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 00:15:35.507217 jq[1493]: true
Mar 14 00:15:35.508201 extend-filesystems[1480]: Found loop4
Mar 14 00:15:35.517320 extend-filesystems[1480]: Found loop5
Mar 14 00:15:35.517320 extend-filesystems[1480]: Found loop6
Mar 14 00:15:35.517320 extend-filesystems[1480]: Found loop7
Mar 14 00:15:35.517320 extend-filesystems[1480]: Found sda
Mar 14 00:15:35.517320 extend-filesystems[1480]: Found sda1
Mar 14 00:15:35.517320 extend-filesystems[1480]: Found sda2
Mar 14 00:15:35.517320 extend-filesystems[1480]: Found sda3
Mar 14 00:15:35.517320 extend-filesystems[1480]: Found usr
Mar 14 00:15:35.517320 extend-filesystems[1480]: Found sda4
Mar 14 00:15:35.517320 extend-filesystems[1480]: Found sda6
Mar 14 00:15:35.517320 extend-filesystems[1480]: Found sda7
Mar 14 00:15:35.517320 extend-filesystems[1480]: Found sda9
Mar 14 00:15:35.517320 extend-filesystems[1480]: Checking size of /dev/sda9
Mar 14 00:15:35.542727 dbus-daemon[1478]: [system] SELinux support is enabled
Mar 14 00:15:35.538256 systemd[1]: motdgen.service: Deactivated successfully.
Mar 14 00:15:35.564164 update_engine[1492]: I20260314 00:15:35.542403 1492 main.cc:92] Flatcar Update Engine starting
Mar 14 00:15:35.564164 update_engine[1492]: I20260314 00:15:35.556043 1492 update_check_scheduler.cc:74] Next update check in 5m29s
Mar 14 00:15:35.564439 extend-filesystems[1480]: Resized partition /dev/sda9
Mar 14 00:15:35.538471 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 14 00:15:35.568780 extend-filesystems[1519]: resize2fs 1.47.1 (20-May-2024)
Mar 14 00:15:35.554164 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 14 00:15:35.554460 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 14 00:15:35.558228 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 14 00:15:35.558250 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 14 00:15:35.560501 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 14 00:15:35.560516 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 14 00:15:35.561734 systemd[1]: Started update-engine.service - Update Engine.
Mar 14 00:15:35.564213 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 14 00:15:35.586144 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks
Mar 14 00:15:35.586182 tar[1499]: linux-amd64/LICENSE
Mar 14 00:15:35.586182 tar[1499]: linux-amd64/helm
Mar 14 00:15:35.586383 jq[1508]: true
Mar 14 00:15:35.642011 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 14 00:15:35.643684 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 14 00:15:35.673805 systemd-logind[1490]: New seat seat0.
Mar 14 00:15:35.681932 systemd-logind[1490]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 14 00:15:35.681958 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 14 00:15:35.682133 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 14 00:15:35.697331 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1320)
Mar 14 00:15:35.734803 bash[1546]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:15:35.737610 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 14 00:15:35.749496 systemd[1]: Starting sshkeys.service...
Mar 14 00:15:35.770884 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 14 00:15:35.780816 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 14 00:15:35.821529 containerd[1511]: time="2026-03-14T00:15:35.819697427Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 14 00:15:35.836590 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 14 00:15:35.845867 coreos-metadata[1554]: Mar 14 00:15:35.845 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Mar 14 00:15:35.847856 coreos-metadata[1554]: Mar 14 00:15:35.847 INFO Fetch successful
Mar 14 00:15:35.857202 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 14 00:15:35.857633 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 14 00:15:35.861591 unknown[1554]: wrote ssh authorized keys file for user: core
Mar 14 00:15:35.864571 containerd[1511]: time="2026-03-14T00:15:35.864538884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:15:35.866172 containerd[1511]: time="2026-03-14T00:15:35.866145076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:15:35.866172 containerd[1511]: time="2026-03-14T00:15:35.866168146Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 14 00:15:35.869442 containerd[1511]: time="2026-03-14T00:15:35.866180266Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 14 00:15:35.869442 containerd[1511]: time="2026-03-14T00:15:35.866437516Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 14 00:15:35.869442 containerd[1511]: time="2026-03-14T00:15:35.866450156Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 14 00:15:35.869442 containerd[1511]: time="2026-03-14T00:15:35.866501786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:15:35.869442 containerd[1511]: time="2026-03-14T00:15:35.866510136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:15:35.869442 containerd[1511]: time="2026-03-14T00:15:35.866715206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:15:35.869442 containerd[1511]: time="2026-03-14T00:15:35.866727086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 14 00:15:35.869442 containerd[1511]: time="2026-03-14T00:15:35.866736436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:15:35.869442 containerd[1511]: time="2026-03-14T00:15:35.866743286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 14 00:15:35.869442 containerd[1511]: time="2026-03-14T00:15:35.866812466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:15:35.869442 containerd[1511]: time="2026-03-14T00:15:35.867248447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:15:35.867474 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 14 00:15:35.869633 containerd[1511]: time="2026-03-14T00:15:35.867376147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:15:35.869633 containerd[1511]: time="2026-03-14T00:15:35.867386437Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 14 00:15:35.869633 containerd[1511]: time="2026-03-14T00:15:35.867459987Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 14 00:15:35.869633 containerd[1511]: time="2026-03-14T00:15:35.867495747Z" level=info msg="metadata content store policy set" policy=shared
Mar 14 00:15:35.880556 systemd[1]: issuegen.service: Deactivated successfully.
Mar 14 00:15:35.880740 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 14 00:15:35.884420 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 14 00:15:35.890097 containerd[1511]: time="2026-03-14T00:15:35.890049736Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 14 00:15:35.890423 containerd[1511]: time="2026-03-14T00:15:35.890213646Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 14 00:15:35.890423 containerd[1511]: time="2026-03-14T00:15:35.890230446Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 14 00:15:35.890423 containerd[1511]: time="2026-03-14T00:15:35.890268726Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 14 00:15:35.890423 containerd[1511]: time="2026-03-14T00:15:35.890279936Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 14 00:15:35.894490 containerd[1511]: time="2026-03-14T00:15:35.893816599Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 14 00:15:35.894490 containerd[1511]: time="2026-03-14T00:15:35.894009109Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 14 00:15:35.894490 containerd[1511]: time="2026-03-14T00:15:35.894099409Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 14 00:15:35.894490 containerd[1511]: time="2026-03-14T00:15:35.894111319Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 14 00:15:35.894490 containerd[1511]: time="2026-03-14T00:15:35.894121279Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 14 00:15:35.894490 containerd[1511]: time="2026-03-14T00:15:35.894131039Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 14 00:15:35.894490 containerd[1511]: time="2026-03-14T00:15:35.894139949Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 14 00:15:35.894490 containerd[1511]: time="2026-03-14T00:15:35.894149449Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895121040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895141910Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895154560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895164870Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895174050Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895205770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895216450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895225800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895235350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895283710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895295120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895322790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895332420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902021 containerd[1511]: time="2026-03-14T00:15:35.895341520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.900428 systemd[1]: Started containerd.service - containerd container runtime.
Mar 14 00:15:35.902347 containerd[1511]: time="2026-03-14T00:15:35.895351510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902347 containerd[1511]: time="2026-03-14T00:15:35.895380960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902347 containerd[1511]: time="2026-03-14T00:15:35.895406200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902347 containerd[1511]: time="2026-03-14T00:15:35.895415330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902347 containerd[1511]: time="2026-03-14T00:15:35.895425640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 14 00:15:35.902347 containerd[1511]: time="2026-03-14T00:15:35.895440920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902347 containerd[1511]: time="2026-03-14T00:15:35.895449900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902347 containerd[1511]: time="2026-03-14T00:15:35.895460900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 14 00:15:35.902347 containerd[1511]: time="2026-03-14T00:15:35.895533160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 14 00:15:35.902347 containerd[1511]: time="2026-03-14T00:15:35.895564450Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 14 00:15:35.902347 containerd[1511]: time="2026-03-14T00:15:35.895572620Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 14 00:15:35.902347 containerd[1511]: time="2026-03-14T00:15:35.895581790Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 14 00:15:35.902347 containerd[1511]: time="2026-03-14T00:15:35.895588480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902794 containerd[1511]: time="2026-03-14T00:15:35.895597760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 14 00:15:35.902794 containerd[1511]: time="2026-03-14T00:15:35.895605260Z" level=info msg="NRI interface is disabled by configuration."
Mar 14 00:15:35.902794 containerd[1511]: time="2026-03-14T00:15:35.895615170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.895941571Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.896004161Z" level=info msg="Connect containerd service"
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.896076531Z" level=info msg="using legacy CRI server"
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.896084071Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.896148831Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.899763964Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.900012544Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.900053484Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.900080324Z" level=info msg="Start subscribing containerd event"
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.900116114Z" level=info msg="Start recovering state"
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.900163204Z" level=info msg="Start event monitor"
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.900174994Z" level=info msg="Start snapshots syncer"
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.900182284Z" level=info msg="Start cni network conf syncer for default"
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.900187734Z" level=info msg="Start streaming server"
Mar 14 00:15:35.902833 containerd[1511]: time="2026-03-14T00:15:35.900226644Z" level=info msg="containerd successfully booted in 0.083557s"
Mar 14 00:15:35.907458 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 14 00:15:35.914751 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 14 00:15:35.916570 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 14 00:15:35.917031 systemd[1]: Reached target getty.target - Login Prompts.
Mar 14 00:15:35.921902 update-ssh-keys[1573]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:15:35.922220 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 14 00:15:35.924842 systemd[1]: Finished sshkeys.service.
Mar 14 00:15:35.936341 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Mar 14 00:15:35.961633 extend-filesystems[1519]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 14 00:15:35.961633 extend-filesystems[1519]: old_desc_blocks = 1, new_desc_blocks = 10
Mar 14 00:15:35.961633 extend-filesystems[1519]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Mar 14 00:15:35.963360 extend-filesystems[1480]: Resized filesystem in /dev/sda9
Mar 14 00:15:35.963360 extend-filesystems[1480]: Found sr0
Mar 14 00:15:35.962415 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 14 00:15:35.962639 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 14 00:15:36.195385 tar[1499]: linux-amd64/README.md
Mar 14 00:15:36.206570 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 14 00:15:36.864714 systemd-networkd[1401]: eth0: Gained IPv6LL
Mar 14 00:15:36.866608 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection.
Mar 14 00:15:36.870186 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 14 00:15:36.872822 systemd[1]: Reached target network-online.target - Network is Online.
Mar 14 00:15:36.884595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:15:36.888256 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 14 00:15:36.924741 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 14 00:15:37.248779 systemd-networkd[1401]: eth1: Gained IPv6LL
Mar 14 00:15:37.249902 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection.
Mar 14 00:15:37.580760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:15:37.582290 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 14 00:15:37.583759 systemd[1]: Startup finished in 1.709s (kernel) + 6.758s (initrd) + 5.052s (userspace) = 13.520s.
Mar 14 00:15:37.586036 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:15:38.349175 kubelet[1606]: E0314 00:15:38.349068 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:15:38.351611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:15:38.352037 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:15:38.352750 systemd[1]: kubelet.service: Consumed 1.062s CPU time.
Mar 14 00:15:48.512138 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:15:48.526026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:15:48.672426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:15:48.676859 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:15:48.730113 kubelet[1625]: E0314 00:15:48.729976 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:15:48.735196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:15:48.735628 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:15:56.834930 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 14 00:15:56.842724 systemd[1]: Started sshd@0-204.168.148.110:22-68.220.241.50:53606.service - OpenSSH per-connection server daemon (68.220.241.50:53606).
Mar 14 00:15:57.618233 sshd[1632]: Accepted publickey for core from 68.220.241.50 port 53606 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:15:57.619775 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:57.628901 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 14 00:15:57.634852 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 14 00:15:57.637902 systemd-logind[1490]: New session 1 of user core.
Mar 14 00:15:57.651605 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 14 00:15:57.658608 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 14 00:15:57.671951 (systemd)[1636]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 14 00:15:57.780141 systemd[1636]: Queued start job for default target default.target.
Mar 14 00:15:57.784550 systemd[1636]: Created slice app.slice - User Application Slice.
Mar 14 00:15:57.784586 systemd[1636]: Reached target paths.target - Paths.
Mar 14 00:15:57.784599 systemd[1636]: Reached target timers.target - Timers.
Mar 14 00:15:57.786094 systemd[1636]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 14 00:15:57.799586 systemd[1636]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 14 00:15:57.799759 systemd[1636]: Reached target sockets.target - Sockets.
Mar 14 00:15:57.799781 systemd[1636]: Reached target basic.target - Basic System.
Mar 14 00:15:57.799827 systemd[1636]: Reached target default.target - Main User Target.
Mar 14 00:15:57.799870 systemd[1636]: Startup finished in 119ms.
Mar 14 00:15:57.800745 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 14 00:15:57.812539 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 14 00:15:58.352900 systemd[1]: Started sshd@1-204.168.148.110:22-68.220.241.50:53616.service - OpenSSH per-connection server daemon (68.220.241.50:53616).
Mar 14 00:15:58.761282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 14 00:15:58.768769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:15:58.901865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:15:58.906555 (kubelet)[1657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:15:58.939822 kubelet[1657]: E0314 00:15:58.939717 1657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:15:58.943603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:15:58.943785 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:15:59.113293 sshd[1647]: Accepted publickey for core from 68.220.241.50 port 53616 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:15:59.114543 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:59.121266 systemd-logind[1490]: New session 2 of user core.
Mar 14 00:15:59.130546 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 14 00:15:59.644260 sshd[1647]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:59.652146 systemd[1]: sshd@1-204.168.148.110:22-68.220.241.50:53616.service: Deactivated successfully.
Mar 14 00:15:59.655992 systemd[1]: session-2.scope: Deactivated successfully.
Mar 14 00:15:59.657363 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit.
Mar 14 00:15:59.659774 systemd-logind[1490]: Removed session 2.
Mar 14 00:15:59.782689 systemd[1]: Started sshd@2-204.168.148.110:22-68.220.241.50:53628.service - OpenSSH per-connection server daemon (68.220.241.50:53628).
Mar 14 00:16:00.535074 sshd[1669]: Accepted publickey for core from 68.220.241.50 port 53628 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:16:00.537916 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:00.546089 systemd-logind[1490]: New session 3 of user core.
Mar 14 00:16:00.553551 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 14 00:16:01.057488 sshd[1669]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:01.062533 systemd[1]: sshd@2-204.168.148.110:22-68.220.241.50:53628.service: Deactivated successfully.
Mar 14 00:16:01.066019 systemd[1]: session-3.scope: Deactivated successfully.
Mar 14 00:16:01.068568 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit.
Mar 14 00:16:01.070664 systemd-logind[1490]: Removed session 3.
Mar 14 00:16:01.197680 systemd[1]: Started sshd@3-204.168.148.110:22-68.220.241.50:39646.service - OpenSSH per-connection server daemon (68.220.241.50:39646).
Mar 14 00:16:01.957472 sshd[1676]: Accepted publickey for core from 68.220.241.50 port 39646 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:16:01.960386 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:01.968857 systemd-logind[1490]: New session 4 of user core.
Mar 14 00:16:01.975575 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 14 00:16:02.489846 sshd[1676]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:02.495748 systemd-logind[1490]: Session 4 logged out. Waiting for processes to exit.
Mar 14 00:16:02.497403 systemd[1]: sshd@3-204.168.148.110:22-68.220.241.50:39646.service: Deactivated successfully.
Mar 14 00:16:02.500814 systemd[1]: session-4.scope: Deactivated successfully.
Mar 14 00:16:02.502499 systemd-logind[1490]: Removed session 4.
Mar 14 00:16:02.625752 systemd[1]: Started sshd@4-204.168.148.110:22-68.220.241.50:39662.service - OpenSSH per-connection server daemon (68.220.241.50:39662).
Mar 14 00:16:03.385724 sshd[1683]: Accepted publickey for core from 68.220.241.50 port 39662 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:16:03.386965 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:03.394722 systemd-logind[1490]: New session 5 of user core.
Mar 14 00:16:03.408594 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 14 00:16:03.806451 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 14 00:16:03.807154 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:16:03.825556 sudo[1686]: pam_unix(sudo:session): session closed for user root
Mar 14 00:16:03.947479 sshd[1683]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:03.952845 systemd[1]: sshd@4-204.168.148.110:22-68.220.241.50:39662.service: Deactivated successfully.
Mar 14 00:16:03.956512 systemd[1]: session-5.scope: Deactivated successfully.
Mar 14 00:16:03.959246 systemd-logind[1490]: Session 5 logged out. Waiting for processes to exit.
Mar 14 00:16:03.961389 systemd-logind[1490]: Removed session 5.
Mar 14 00:16:04.082734 systemd[1]: Started sshd@5-204.168.148.110:22-68.220.241.50:39678.service - OpenSSH per-connection server daemon (68.220.241.50:39678).
Mar 14 00:16:04.847379 sshd[1691]: Accepted publickey for core from 68.220.241.50 port 39678 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:16:04.850056 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:04.859250 systemd-logind[1490]: New session 6 of user core.
Mar 14 00:16:04.866634 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 14 00:16:05.257343 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 14 00:16:05.258151 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:16:05.266107 sudo[1695]: pam_unix(sudo:session): session closed for user root
Mar 14 00:16:05.279588 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 14 00:16:05.280467 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:16:05.304692 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 14 00:16:05.322372 auditctl[1698]: No rules
Mar 14 00:16:05.323354 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 14 00:16:05.323764 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 14 00:16:05.332529 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:16:05.392475 augenrules[1716]: No rules
Mar 14 00:16:05.395172 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:16:05.397555 sudo[1694]: pam_unix(sudo:session): session closed for user root
Mar 14 00:16:05.517760 sshd[1691]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:05.522453 systemd[1]: sshd@5-204.168.148.110:22-68.220.241.50:39678.service: Deactivated successfully.
Mar 14 00:16:05.525854 systemd[1]: session-6.scope: Deactivated successfully.
Mar 14 00:16:05.528574 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit.
Mar 14 00:16:05.530248 systemd-logind[1490]: Removed session 6.
Mar 14 00:16:05.656984 systemd[1]: Started sshd@6-204.168.148.110:22-68.220.241.50:39684.service - OpenSSH per-connection server daemon (68.220.241.50:39684).
Mar 14 00:16:06.411716 sshd[1724]: Accepted publickey for core from 68.220.241.50 port 39684 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:16:06.414400 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:06.422444 systemd-logind[1490]: New session 7 of user core.
Mar 14 00:16:06.429610 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 14 00:16:06.822553 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 14 00:16:06.823283 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:16:07.226516 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 14 00:16:07.240097 (dockerd)[1743]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 14 00:16:08.697775 systemd-timesyncd[1451]: Contacted time server 213.239.234.28:123 (2.flatcar.pool.ntp.org).
Mar 14 00:16:08.697880 systemd-timesyncd[1451]: Initial clock synchronization to Sat 2026-03-14 00:16:08.697340 UTC.
Mar 14 00:16:08.698059 systemd-resolved[1406]: Clock change detected. Flushing caches.
Mar 14 00:16:08.852383 dockerd[1743]: time="2026-03-14T00:16:08.852264042Z" level=info msg="Starting up"
Mar 14 00:16:08.981662 dockerd[1743]: time="2026-03-14T00:16:08.980994079Z" level=info msg="Loading containers: start."
Mar 14 00:16:09.128422 kernel: Initializing XFRM netlink socket
Mar 14 00:16:09.212963 systemd-networkd[1401]: docker0: Link UP
Mar 14 00:16:09.227748 dockerd[1743]: time="2026-03-14T00:16:09.227689244Z" level=info msg="Loading containers: done."
Mar 14 00:16:09.246127 dockerd[1743]: time="2026-03-14T00:16:09.245981690Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:16:09.246578 dockerd[1743]: time="2026-03-14T00:16:09.246536740Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:16:09.246820 dockerd[1743]: time="2026-03-14T00:16:09.246782740Z" level=info msg="Daemon has completed initialization"
Mar 14 00:16:09.291505 dockerd[1743]: time="2026-03-14T00:16:09.291303887Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:16:09.291847 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:16:10.012191 containerd[1511]: time="2026-03-14T00:16:10.012110708Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 14 00:16:10.174250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 14 00:16:10.188000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:10.394829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:10.406936 (kubelet)[1889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:16:10.480616 kubelet[1889]: E0314 00:16:10.480536 1889 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:16:10.486663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:16:10.487055 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:16:10.632721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3994424858.mount: Deactivated successfully.
Mar 14 00:16:11.869275 containerd[1511]: time="2026-03-14T00:16:11.869225335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:11.870332 containerd[1511]: time="2026-03-14T00:16:11.870216276Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074597"
Mar 14 00:16:11.871401 containerd[1511]: time="2026-03-14T00:16:11.871182507Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:11.875988 containerd[1511]: time="2026-03-14T00:16:11.875965261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:11.876698 containerd[1511]: time="2026-03-14T00:16:11.876677401Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.864489173s"
Mar 14 00:16:11.876748 containerd[1511]: time="2026-03-14T00:16:11.876703151Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 14 00:16:11.877250 containerd[1511]: time="2026-03-14T00:16:11.877227172Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 14 00:16:13.072224 containerd[1511]: time="2026-03-14T00:16:13.072122457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:13.073887 containerd[1511]: time="2026-03-14T00:16:13.073848258Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165845"
Mar 14 00:16:13.074878 containerd[1511]: time="2026-03-14T00:16:13.074022309Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:13.076952 containerd[1511]: time="2026-03-14T00:16:13.076884751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:13.077749 containerd[1511]: time="2026-03-14T00:16:13.077597802Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.20034916s"
Mar 14 00:16:13.077749 containerd[1511]: time="2026-03-14T00:16:13.077621852Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 14 00:16:13.077996 containerd[1511]: time="2026-03-14T00:16:13.077933862Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 14 00:16:14.081873 containerd[1511]: time="2026-03-14T00:16:14.081823778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:14.082852 containerd[1511]: time="2026-03-14T00:16:14.082667639Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729846"
Mar 14 00:16:14.083609 containerd[1511]: time="2026-03-14T00:16:14.083490560Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:14.085638 containerd[1511]: time="2026-03-14T00:16:14.085395121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:14.086259 containerd[1511]: time="2026-03-14T00:16:14.086111852Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.00816055s"
Mar 14 00:16:14.086259 containerd[1511]: time="2026-03-14T00:16:14.086136662Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 14 00:16:14.086654 containerd[1511]: time="2026-03-14T00:16:14.086636812Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 14 00:16:15.284349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578771847.mount: Deactivated successfully.
Mar 14 00:16:15.625599 containerd[1511]: time="2026-03-14T00:16:15.625462164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:15.626848 containerd[1511]: time="2026-03-14T00:16:15.626750545Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861798"
Mar 14 00:16:15.627809 containerd[1511]: time="2026-03-14T00:16:15.627760496Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:15.629723 containerd[1511]: time="2026-03-14T00:16:15.629683338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:15.630651 containerd[1511]: time="2026-03-14T00:16:15.630152258Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.543492736s"
Mar 14 00:16:15.630651 containerd[1511]: time="2026-03-14T00:16:15.630196788Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 14 00:16:15.630922 containerd[1511]: time="2026-03-14T00:16:15.630897909Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 14 00:16:16.153108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3272924256.mount: Deactivated successfully.
Mar 14 00:16:17.341516 containerd[1511]: time="2026-03-14T00:16:17.341451854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:17.343152 containerd[1511]: time="2026-03-14T00:16:17.342738535Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388101"
Mar 14 00:16:17.344407 containerd[1511]: time="2026-03-14T00:16:17.344044036Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:17.347159 containerd[1511]: time="2026-03-14T00:16:17.347131208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:17.348254 containerd[1511]: time="2026-03-14T00:16:17.348140189Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.71721811s"
Mar 14 00:16:17.348254 containerd[1511]: time="2026-03-14T00:16:17.348162829Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 14 00:16:17.349113 containerd[1511]: time="2026-03-14T00:16:17.349066300Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 14 00:16:17.818053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1895194209.mount: Deactivated successfully.
Mar 14 00:16:17.825142 containerd[1511]: time="2026-03-14T00:16:17.825081877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:17.825941 containerd[1511]: time="2026-03-14T00:16:17.825899897Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321240"
Mar 14 00:16:17.826786 containerd[1511]: time="2026-03-14T00:16:17.826649638Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:17.828654 containerd[1511]: time="2026-03-14T00:16:17.828619069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:17.829571 containerd[1511]: time="2026-03-14T00:16:17.829400710Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 480.28542ms"
Mar 14 00:16:17.829571 containerd[1511]: time="2026-03-14T00:16:17.829431790Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 14 00:16:17.829958 containerd[1511]: time="2026-03-14T00:16:17.829931471Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 14 00:16:18.308760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount35122372.mount: Deactivated successfully.
Mar 14 00:16:19.066598 containerd[1511]: time="2026-03-14T00:16:19.066545011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:19.067875 containerd[1511]: time="2026-03-14T00:16:19.067817192Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860762"
Mar 14 00:16:19.070153 containerd[1511]: time="2026-03-14T00:16:19.068995873Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:19.072158 containerd[1511]: time="2026-03-14T00:16:19.071877425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:19.073131 containerd[1511]: time="2026-03-14T00:16:19.072783246Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.242825045s"
Mar 14 00:16:19.073131 containerd[1511]: time="2026-03-14T00:16:19.072815586Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 14 00:16:20.674311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 14 00:16:20.683550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:20.824524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:20.829623 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:16:20.858501 kubelet[2119]: E0314 00:16:20.858465 2119 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:16:20.861593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:16:20.861753 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:16:20.956833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:20.965737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:21.020436 systemd[1]: Reloading requested from client PID 2133 ('systemctl') (unit session-7.scope)...
Mar 14 00:16:21.020628 systemd[1]: Reloading...
Mar 14 00:16:21.135391 zram_generator::config[2179]: No configuration found.
Mar 14 00:16:21.221468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:16:21.286528 systemd[1]: Reloading finished in 263 ms.
Mar 14 00:16:21.335789 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 14 00:16:21.335877 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 14 00:16:21.336165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:21.343818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:21.487130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:21.498735 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:16:21.546893 kubelet[2226]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:16:21.546893 kubelet[2226]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:16:21.547221 kubelet[2226]: I0314 00:16:21.546962 2226 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:16:21.894511 kubelet[2226]: I0314 00:16:21.894343 2226 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 14 00:16:21.894511 kubelet[2226]: I0314 00:16:21.894460 2226 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:16:21.896740 kubelet[2226]: I0314 00:16:21.896710 2226 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:16:21.896740 kubelet[2226]: I0314 00:16:21.896724 2226 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:16:21.896961 kubelet[2226]: I0314 00:16:21.896934 2226 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:16:21.905165 kubelet[2226]: E0314 00:16:21.905125 2226 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://204.168.148.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 204.168.148.110:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:16:21.905347 kubelet[2226]: I0314 00:16:21.905324 2226 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:16:21.909114 kubelet[2226]: E0314 00:16:21.909077 2226 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:16:21.909162 kubelet[2226]: I0314 00:16:21.909144 2226 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:16:21.915803 kubelet[2226]: I0314 00:16:21.915767 2226 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:16:21.917830 kubelet[2226]: I0314 00:16:21.917770 2226 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:16:21.918030 kubelet[2226]: I0314 00:16:21.917831 2226 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-8ea3e741de","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:16:21.918093 kubelet[2226]: I0314 00:16:21.918032 2226 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:16:21.918093 kubelet[2226]: I0314 00:16:21.918048 2226 container_manager_linux.go:306] "Creating device plugin manager"
Mar 14 00:16:21.918207 kubelet[2226]: I0314 00:16:21.918188 2226 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:16:21.920567 kubelet[2226]: I0314 00:16:21.920534 2226 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:16:21.920680 kubelet[2226]: I0314 00:16:21.920674 2226 kubelet.go:475] "Attempting to sync node with API server"
Mar 14 00:16:21.920728 kubelet[2226]: I0314 00:16:21.920686 2226 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:16:21.920728 kubelet[2226]: I0314 00:16:21.920703 2226 kubelet.go:387] "Adding apiserver pod source"
Mar 14 00:16:21.920728 kubelet[2226]: I0314 00:16:21.920714 2226 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:16:21.922906 kubelet[2226]: E0314 00:16:21.922788 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://204.168.148.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 204.168.148.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:16:21.922906 kubelet[2226]: E0314 00:16:21.922856 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://204.168.148.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-8ea3e741de&limit=500&resourceVersion=0\": dial tcp 204.168.148.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:16:21.924653 kubelet[2226]: I0314 00:16:21.924623 2226 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:16:21.925474 kubelet[2226]: I0314 00:16:21.925449 2226 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:16:21.925503 kubelet[2226]: I0314 00:16:21.925495 2226 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:16:21.926095 kubelet[2226]: W0314 00:16:21.925580 2226 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 00:16:21.929819 kubelet[2226]: I0314 00:16:21.929807 2226 server.go:1262] "Started kubelet"
Mar 14 00:16:21.931069 kubelet[2226]: I0314 00:16:21.930940 2226 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:16:21.935737 kubelet[2226]: E0314 00:16:21.934880 2226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://204.168.148.110:6443/api/v1/namespaces/default/events\": dial tcp 204.168.148.110:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-8ea3e741de.189c8cff6bc575be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-8ea3e741de,UID:ci-4081-3-6-n-8ea3e741de,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-8ea3e741de,},FirstTimestamp:2026-03-14 00:16:21.929784766 +0000 UTC m=+0.427493227,LastTimestamp:2026-03-14 00:16:21.929784766 +0000 UTC m=+0.427493227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-8ea3e741de,}"
Mar 14 00:16:21.935936 kubelet[2226]: I0314 00:16:21.935878 2226 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:16:21.937206 kubelet[2226]: I0314 00:16:21.937195 2226 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 14 00:16:21.937543 kubelet[2226]: E0314 00:16:21.937530 2226 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8ea3e741de\" not found"
Mar 14 00:16:21.938174 kubelet[2226]: I0314 00:16:21.938162 2226 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:16:21.940127 kubelet[2226]: I0314 00:16:21.939138 2226 server.go:310] "Adding debug handlers to kubelet server"
Mar 14 00:16:21.941055 kubelet[2226]: I0314 00:16:21.939175 2226 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:16:21.941134 kubelet[2226]: I0314 00:16:21.941100 2226 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:16:21.941553 kubelet[2226]: I0314 00:16:21.941527 2226 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:16:21.943387 kubelet[2226]: I0314 00:16:21.942317 2226 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:16:21.943504 kubelet[2226]: I0314 00:16:21.943492 2226 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:16:21.947239 kubelet[2226]: E0314 00:16:21.947221 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.148.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8ea3e741de?timeout=10s\": dial tcp 204.168.148.110:6443: connect: connection refused" interval="200ms"
Mar 14 00:16:21.947853 kubelet[2226]: I0314 00:16:21.947841 2226 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:16:21.947959 kubelet[2226]: I0314 00:16:21.947948 2226 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such
file or directory Mar 14 00:16:21.949179 kubelet[2226]: E0314 00:16:21.949153 2226 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:16:21.949331 kubelet[2226]: I0314 00:16:21.949320 2226 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:16:21.954180 kubelet[2226]: E0314 00:16:21.954140 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://204.168.148.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 204.168.148.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:16:21.970347 kubelet[2226]: I0314 00:16:21.970321 2226 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 14 00:16:21.971692 kubelet[2226]: I0314 00:16:21.971678 2226 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 14 00:16:21.971773 kubelet[2226]: I0314 00:16:21.971767 2226 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 14 00:16:21.971821 kubelet[2226]: I0314 00:16:21.971815 2226 kubelet.go:2428] "Starting kubelet main sync loop" Mar 14 00:16:21.971940 kubelet[2226]: E0314 00:16:21.971927 2226 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:16:21.980320 kubelet[2226]: E0314 00:16:21.980291 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://204.168.148.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 204.168.148.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:16:21.981574 kubelet[2226]: I0314 00:16:21.981564 2226 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:16:21.981653 kubelet[2226]: I0314 00:16:21.981646 2226 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:16:21.981695 kubelet[2226]: I0314 00:16:21.981688 2226 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:16:21.983776 kubelet[2226]: I0314 00:16:21.983756 2226 policy_none.go:49] "None policy: Start" Mar 14 00:16:21.983850 kubelet[2226]: I0314 00:16:21.983843 2226 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:16:21.983886 kubelet[2226]: I0314 00:16:21.983880 2226 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:16:21.985496 kubelet[2226]: I0314 00:16:21.985487 2226 policy_none.go:47] "Start" Mar 14 00:16:21.990603 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 14 00:16:22.002436 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 14 00:16:22.007344 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 14 00:16:22.018967 kubelet[2226]: E0314 00:16:22.018911 2226 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:16:22.019296 kubelet[2226]: I0314 00:16:22.019237 2226 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:16:22.019402 kubelet[2226]: I0314 00:16:22.019288 2226 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:16:22.021997 kubelet[2226]: I0314 00:16:22.020356 2226 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:16:22.022136 kubelet[2226]: E0314 00:16:22.022097 2226 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:16:22.022317 kubelet[2226]: E0314 00:16:22.022278 2226 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-8ea3e741de\" not found" Mar 14 00:16:22.094894 systemd[1]: Created slice kubepods-burstable-pod962e826301efe3dd61424c84bde1b9b3.slice - libcontainer container kubepods-burstable-pod962e826301efe3dd61424c84bde1b9b3.slice. Mar 14 00:16:22.106872 kubelet[2226]: E0314 00:16:22.106819 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8ea3e741de\" not found" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.112825 systemd[1]: Created slice kubepods-burstable-poddeae2179eb90b9d718bcb98645ac9b97.slice - libcontainer container kubepods-burstable-poddeae2179eb90b9d718bcb98645ac9b97.slice. 
Mar 14 00:16:22.123660 kubelet[2226]: I0314 00:16:22.123185 2226 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.124713 kubelet[2226]: E0314 00:16:22.124286 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8ea3e741de\" not found" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.125320 kubelet[2226]: E0314 00:16:22.125282 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.148.110:6443/api/v1/nodes\": dial tcp 204.168.148.110:6443: connect: connection refused" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.127612 systemd[1]: Created slice kubepods-burstable-pod22d85efd360838aa5e250374be4ff28b.slice - libcontainer container kubepods-burstable-pod22d85efd360838aa5e250374be4ff28b.slice. Mar 14 00:16:22.131915 kubelet[2226]: E0314 00:16:22.131844 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8ea3e741de\" not found" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.143239 kubelet[2226]: I0314 00:16:22.143172 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/962e826301efe3dd61424c84bde1b9b3-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8ea3e741de\" (UID: \"962e826301efe3dd61424c84bde1b9b3\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.143239 kubelet[2226]: I0314 00:16:22.143222 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/962e826301efe3dd61424c84bde1b9b3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-8ea3e741de\" (UID: \"962e826301efe3dd61424c84bde1b9b3\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 
00:16:22.143239 kubelet[2226]: I0314 00:16:22.143253 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/962e826301efe3dd61424c84bde1b9b3-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8ea3e741de\" (UID: \"962e826301efe3dd61424c84bde1b9b3\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.143239 kubelet[2226]: I0314 00:16:22.143313 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/962e826301efe3dd61424c84bde1b9b3-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-8ea3e741de\" (UID: \"962e826301efe3dd61424c84bde1b9b3\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.143239 kubelet[2226]: I0314 00:16:22.143406 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/962e826301efe3dd61424c84bde1b9b3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-8ea3e741de\" (UID: \"962e826301efe3dd61424c84bde1b9b3\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.144233 kubelet[2226]: I0314 00:16:22.143458 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22d85efd360838aa5e250374be4ff28b-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8ea3e741de\" (UID: \"22d85efd360838aa5e250374be4ff28b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.144233 kubelet[2226]: I0314 00:16:22.143496 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22d85efd360838aa5e250374be4ff28b-k8s-certs\") pod 
\"kube-apiserver-ci-4081-3-6-n-8ea3e741de\" (UID: \"22d85efd360838aa5e250374be4ff28b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.144233 kubelet[2226]: I0314 00:16:22.143522 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22d85efd360838aa5e250374be4ff28b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-8ea3e741de\" (UID: \"22d85efd360838aa5e250374be4ff28b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.144233 kubelet[2226]: I0314 00:16:22.143905 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/deae2179eb90b9d718bcb98645ac9b97-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-8ea3e741de\" (UID: \"deae2179eb90b9d718bcb98645ac9b97\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.148939 kubelet[2226]: E0314 00:16:22.148813 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.148.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8ea3e741de?timeout=10s\": dial tcp 204.168.148.110:6443: connect: connection refused" interval="400ms" Mar 14 00:16:22.227020 update_engine[1492]: I20260314 00:16:22.226830 1492 update_attempter.cc:509] Updating boot flags... 
Mar 14 00:16:22.311445 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2271) Mar 14 00:16:22.335054 kubelet[2226]: I0314 00:16:22.334978 2226 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.338582 kubelet[2226]: E0314 00:16:22.337741 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.148.110:6443/api/v1/nodes\": dial tcp 204.168.148.110:6443: connect: connection refused" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.399548 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2271) Mar 14 00:16:22.414098 containerd[1511]: time="2026-03-14T00:16:22.413276039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-8ea3e741de,Uid:962e826301efe3dd61424c84bde1b9b3,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:22.431258 containerd[1511]: time="2026-03-14T00:16:22.431224024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-8ea3e741de,Uid:deae2179eb90b9d718bcb98645ac9b97,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:22.436038 containerd[1511]: time="2026-03-14T00:16:22.435571067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-8ea3e741de,Uid:22d85efd360838aa5e250374be4ff28b,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:22.549384 kubelet[2226]: E0314 00:16:22.549334 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.148.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8ea3e741de?timeout=10s\": dial tcp 204.168.148.110:6443: connect: connection refused" interval="800ms" Mar 14 00:16:22.741394 kubelet[2226]: I0314 00:16:22.741319 2226 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.741824 kubelet[2226]: E0314 
00:16:22.741755 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.148.110:6443/api/v1/nodes\": dial tcp 204.168.148.110:6443: connect: connection refused" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:22.824951 kubelet[2226]: E0314 00:16:22.824878 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://204.168.148.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 204.168.148.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:16:22.872651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824961954.mount: Deactivated successfully. Mar 14 00:16:22.881415 containerd[1511]: time="2026-03-14T00:16:22.881281979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:16:22.882783 containerd[1511]: time="2026-03-14T00:16:22.882725600Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:16:22.884307 containerd[1511]: time="2026-03-14T00:16:22.884233201Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:16:22.884850 containerd[1511]: time="2026-03-14T00:16:22.884781582Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:16:22.885875 containerd[1511]: time="2026-03-14T00:16:22.885816572Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:16:22.887653 containerd[1511]: time="2026-03-14T00:16:22.887588624Z" 
level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:16:22.888129 containerd[1511]: time="2026-03-14T00:16:22.888055524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Mar 14 00:16:22.890682 containerd[1511]: time="2026-03-14T00:16:22.890616686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:16:22.895694 containerd[1511]: time="2026-03-14T00:16:22.895440970Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 464.145676ms" Mar 14 00:16:22.898342 containerd[1511]: time="2026-03-14T00:16:22.898280473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 484.927474ms" Mar 14 00:16:22.901639 containerd[1511]: time="2026-03-14T00:16:22.901583686Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 465.950229ms" Mar 14 
00:16:22.928642 kubelet[2226]: E0314 00:16:22.925949 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://204.168.148.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-8ea3e741de&limit=500&resourceVersion=0\": dial tcp 204.168.148.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:16:22.992420 kubelet[2226]: E0314 00:16:22.992096 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://204.168.148.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 204.168.148.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:16:23.056011 containerd[1511]: time="2026-03-14T00:16:23.055868514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:23.056610 containerd[1511]: time="2026-03-14T00:16:23.056350465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:23.056610 containerd[1511]: time="2026-03-14T00:16:23.056549265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:23.067890 containerd[1511]: time="2026-03-14T00:16:23.067813584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:23.074409 containerd[1511]: time="2026-03-14T00:16:23.073137869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:23.074409 containerd[1511]: time="2026-03-14T00:16:23.073203169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:23.074409 containerd[1511]: time="2026-03-14T00:16:23.073218689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:23.074409 containerd[1511]: time="2026-03-14T00:16:23.073331359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:23.075615 containerd[1511]: time="2026-03-14T00:16:23.075514490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:23.075717 containerd[1511]: time="2026-03-14T00:16:23.075618981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:23.075717 containerd[1511]: time="2026-03-14T00:16:23.075671511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:23.075873 containerd[1511]: time="2026-03-14T00:16:23.075816401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:23.108822 systemd[1]: Started cri-containerd-3f32ae8727eb273e12e561439a2eb0566a929c99fcc63f18c577bb27953e56d1.scope - libcontainer container 3f32ae8727eb273e12e561439a2eb0566a929c99fcc63f18c577bb27953e56d1. Mar 14 00:16:23.136475 systemd[1]: Started cri-containerd-933769a4d50a937b7e38ecbd27b44568791eeb7a496e8d20f5124013573514a5.scope - libcontainer container 933769a4d50a937b7e38ecbd27b44568791eeb7a496e8d20f5124013573514a5. 
Mar 14 00:16:23.140695 systemd[1]: Started cri-containerd-d26abe2670dd1acb265ace1bb6c8736aae30ba3d821659c979f399e73eec6318.scope - libcontainer container d26abe2670dd1acb265ace1bb6c8736aae30ba3d821659c979f399e73eec6318. Mar 14 00:16:23.192250 containerd[1511]: time="2026-03-14T00:16:23.191925877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-8ea3e741de,Uid:22d85efd360838aa5e250374be4ff28b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f32ae8727eb273e12e561439a2eb0566a929c99fcc63f18c577bb27953e56d1\"" Mar 14 00:16:23.199616 containerd[1511]: time="2026-03-14T00:16:23.199569604Z" level=info msg="CreateContainer within sandbox \"3f32ae8727eb273e12e561439a2eb0566a929c99fcc63f18c577bb27953e56d1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:16:23.209135 containerd[1511]: time="2026-03-14T00:16:23.208759202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-8ea3e741de,Uid:deae2179eb90b9d718bcb98645ac9b97,Namespace:kube-system,Attempt:0,} returns sandbox id \"933769a4d50a937b7e38ecbd27b44568791eeb7a496e8d20f5124013573514a5\"" Mar 14 00:16:23.216835 containerd[1511]: time="2026-03-14T00:16:23.216736138Z" level=info msg="CreateContainer within sandbox \"933769a4d50a937b7e38ecbd27b44568791eeb7a496e8d20f5124013573514a5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:16:23.224547 containerd[1511]: time="2026-03-14T00:16:23.224503485Z" level=info msg="CreateContainer within sandbox \"3f32ae8727eb273e12e561439a2eb0566a929c99fcc63f18c577bb27953e56d1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"124c8d4c13f227badfc3286efa5832da69dfc0a0bd86a0227b8acf6a5c2369f8\"" Mar 14 00:16:23.226493 containerd[1511]: time="2026-03-14T00:16:23.226413046Z" level=info msg="StartContainer for \"124c8d4c13f227badfc3286efa5832da69dfc0a0bd86a0227b8acf6a5c2369f8\"" Mar 14 00:16:23.233947 containerd[1511]: 
time="2026-03-14T00:16:23.233906662Z" level=info msg="CreateContainer within sandbox \"933769a4d50a937b7e38ecbd27b44568791eeb7a496e8d20f5124013573514a5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"038eac9ec53dd7f10c70b4d72006a1b5252cf9c09458294b782f890c24e057e5\"" Mar 14 00:16:23.234754 containerd[1511]: time="2026-03-14T00:16:23.234606923Z" level=info msg="StartContainer for \"038eac9ec53dd7f10c70b4d72006a1b5252cf9c09458294b782f890c24e057e5\"" Mar 14 00:16:23.236396 containerd[1511]: time="2026-03-14T00:16:23.236203524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-8ea3e741de,Uid:962e826301efe3dd61424c84bde1b9b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d26abe2670dd1acb265ace1bb6c8736aae30ba3d821659c979f399e73eec6318\"" Mar 14 00:16:23.240876 containerd[1511]: time="2026-03-14T00:16:23.240699438Z" level=info msg="CreateContainer within sandbox \"d26abe2670dd1acb265ace1bb6c8736aae30ba3d821659c979f399e73eec6318\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:16:23.261440 containerd[1511]: time="2026-03-14T00:16:23.260772585Z" level=info msg="CreateContainer within sandbox \"d26abe2670dd1acb265ace1bb6c8736aae30ba3d821659c979f399e73eec6318\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ed01bfe9f0672cb677eba8deb5af06adf09644041629ceb85e4aa5d0bd2fa3bd\"" Mar 14 00:16:23.261958 containerd[1511]: time="2026-03-14T00:16:23.261936076Z" level=info msg="StartContainer for \"ed01bfe9f0672cb677eba8deb5af06adf09644041629ceb85e4aa5d0bd2fa3bd\"" Mar 14 00:16:23.268491 systemd[1]: Started cri-containerd-038eac9ec53dd7f10c70b4d72006a1b5252cf9c09458294b782f890c24e057e5.scope - libcontainer container 038eac9ec53dd7f10c70b4d72006a1b5252cf9c09458294b782f890c24e057e5. 
Mar 14 00:16:23.271962 systemd[1]: Started cri-containerd-124c8d4c13f227badfc3286efa5832da69dfc0a0bd86a0227b8acf6a5c2369f8.scope - libcontainer container 124c8d4c13f227badfc3286efa5832da69dfc0a0bd86a0227b8acf6a5c2369f8. Mar 14 00:16:23.302410 kubelet[2226]: E0314 00:16:23.302345 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://204.168.148.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 204.168.148.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:16:23.304552 systemd[1]: Started cri-containerd-ed01bfe9f0672cb677eba8deb5af06adf09644041629ceb85e4aa5d0bd2fa3bd.scope - libcontainer container ed01bfe9f0672cb677eba8deb5af06adf09644041629ceb85e4aa5d0bd2fa3bd. Mar 14 00:16:23.317456 containerd[1511]: time="2026-03-14T00:16:23.317351942Z" level=info msg="StartContainer for \"124c8d4c13f227badfc3286efa5832da69dfc0a0bd86a0227b8acf6a5c2369f8\" returns successfully" Mar 14 00:16:23.350861 kubelet[2226]: E0314 00:16:23.350654 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.148.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8ea3e741de?timeout=10s\": dial tcp 204.168.148.110:6443: connect: connection refused" interval="1.6s" Mar 14 00:16:23.354290 containerd[1511]: time="2026-03-14T00:16:23.353879642Z" level=info msg="StartContainer for \"038eac9ec53dd7f10c70b4d72006a1b5252cf9c09458294b782f890c24e057e5\" returns successfully" Mar 14 00:16:23.362495 containerd[1511]: time="2026-03-14T00:16:23.362470600Z" level=info msg="StartContainer for \"ed01bfe9f0672cb677eba8deb5af06adf09644041629ceb85e4aa5d0bd2fa3bd\" returns successfully" Mar 14 00:16:23.543851 kubelet[2226]: I0314 00:16:23.543565 2226 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:23.991432 kubelet[2226]: E0314 
00:16:23.990477 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8ea3e741de\" not found" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:23.992534 kubelet[2226]: E0314 00:16:23.991959 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8ea3e741de\" not found" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:23.994861 kubelet[2226]: E0314 00:16:23.994771 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8ea3e741de\" not found" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:24.925393 kubelet[2226]: I0314 00:16:24.925276 2226 apiserver.go:52] "Watching apiserver" Mar 14 00:16:24.940510 kubelet[2226]: I0314 00:16:24.940477 2226 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:16:24.955682 kubelet[2226]: E0314 00:16:24.955642 2226 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-8ea3e741de\" not found" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:24.995768 kubelet[2226]: E0314 00:16:24.995447 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8ea3e741de\" not found" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:24.995768 kubelet[2226]: E0314 00:16:24.995680 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8ea3e741de\" not found" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:25.067826 kubelet[2226]: I0314 00:16:25.065547 2226 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:25.067826 kubelet[2226]: E0314 00:16:25.065576 2226 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node 
\"ci-4081-3-6-n-8ea3e741de\": node \"ci-4081-3-6-n-8ea3e741de\" not found" Mar 14 00:16:25.138802 kubelet[2226]: I0314 00:16:25.138755 2226 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:25.155026 kubelet[2226]: E0314 00:16:25.154878 2226 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-8ea3e741de\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:25.155026 kubelet[2226]: I0314 00:16:25.154904 2226 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:25.156062 kubelet[2226]: E0314 00:16:25.155937 2226 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-8ea3e741de\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:25.156062 kubelet[2226]: I0314 00:16:25.155951 2226 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:25.161551 kubelet[2226]: E0314 00:16:25.161531 2226 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-8ea3e741de\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:26.977211 systemd[1]: Reloading requested from client PID 2530 ('systemctl') (unit session-7.scope)... Mar 14 00:16:26.977243 systemd[1]: Reloading... Mar 14 00:16:27.158388 zram_generator::config[2576]: No configuration found. 
Mar 14 00:16:27.239827 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:16:27.319461 systemd[1]: Reloading finished in 341 ms. Mar 14 00:16:27.371581 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:16:27.395867 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:16:27.396202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:16:27.402899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:16:27.527469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:16:27.532628 (kubelet)[2621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:16:27.563124 kubelet[2621]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:16:27.563124 kubelet[2621]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 00:16:27.564119 kubelet[2621]: I0314 00:16:27.563981 2621 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:16:27.568753 kubelet[2621]: I0314 00:16:27.568725 2621 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 14 00:16:27.568753 kubelet[2621]: I0314 00:16:27.568743 2621 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:16:27.568846 kubelet[2621]: I0314 00:16:27.568766 2621 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:16:27.568846 kubelet[2621]: I0314 00:16:27.568775 2621 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:16:27.569079 kubelet[2621]: I0314 00:16:27.568901 2621 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:16:27.569720 kubelet[2621]: I0314 00:16:27.569696 2621 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:16:27.571349 kubelet[2621]: I0314 00:16:27.571172 2621 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:16:27.579687 kubelet[2621]: E0314 00:16:27.579533 2621 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:16:27.579687 kubelet[2621]: I0314 00:16:27.579612 2621 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 14 00:16:27.583294 kubelet[2621]: I0314 00:16:27.583277 2621 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 14 00:16:27.583824 kubelet[2621]: I0314 00:16:27.583627 2621 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:16:27.583824 kubelet[2621]: I0314 00:16:27.583651 2621 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-8ea3e741de","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:16:27.583824 kubelet[2621]: I0314 00:16:27.583769 2621 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 
00:16:27.583824 kubelet[2621]: I0314 00:16:27.583776 2621 container_manager_linux.go:306] "Creating device plugin manager" Mar 14 00:16:27.583989 kubelet[2621]: I0314 00:16:27.583796 2621 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 14 00:16:27.584250 kubelet[2621]: I0314 00:16:27.584241 2621 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:16:27.584488 kubelet[2621]: I0314 00:16:27.584480 2621 kubelet.go:475] "Attempting to sync node with API server" Mar 14 00:16:27.584555 kubelet[2621]: I0314 00:16:27.584548 2621 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:16:27.584661 kubelet[2621]: I0314 00:16:27.584601 2621 kubelet.go:387] "Adding apiserver pod source" Mar 14 00:16:27.584697 kubelet[2621]: I0314 00:16:27.584690 2621 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:16:27.587735 kubelet[2621]: I0314 00:16:27.587720 2621 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:16:27.588187 kubelet[2621]: I0314 00:16:27.588174 2621 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:16:27.588242 kubelet[2621]: I0314 00:16:27.588235 2621 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:16:27.592431 kubelet[2621]: I0314 00:16:27.592350 2621 server.go:1262] "Started kubelet" Mar 14 00:16:27.593008 kubelet[2621]: I0314 00:16:27.592966 2621 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:16:27.594312 kubelet[2621]: I0314 00:16:27.593116 2621 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:16:27.594312 kubelet[2621]: I0314 00:16:27.593147 2621 
server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:16:27.594312 kubelet[2621]: I0314 00:16:27.593387 2621 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:16:27.594673 kubelet[2621]: I0314 00:16:27.594647 2621 server.go:310] "Adding debug handlers to kubelet server" Mar 14 00:16:27.596638 kubelet[2621]: I0314 00:16:27.596172 2621 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:16:27.599475 kubelet[2621]: I0314 00:16:27.598842 2621 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:16:27.607383 kubelet[2621]: I0314 00:16:27.604161 2621 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 14 00:16:27.608044 kubelet[2621]: I0314 00:16:27.607535 2621 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:16:27.608044 kubelet[2621]: I0314 00:16:27.607640 2621 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:16:27.609659 kubelet[2621]: I0314 00:16:27.609632 2621 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:16:27.609760 kubelet[2621]: I0314 00:16:27.609722 2621 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:16:27.611337 kubelet[2621]: E0314 00:16:27.611301 2621 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:16:27.611537 kubelet[2621]: I0314 00:16:27.611513 2621 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 14 00:16:27.613120 kubelet[2621]: I0314 00:16:27.613085 2621 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:16:27.613783 kubelet[2621]: I0314 00:16:27.613507 2621 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 14 00:16:27.613783 kubelet[2621]: I0314 00:16:27.613520 2621 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 14 00:16:27.613783 kubelet[2621]: I0314 00:16:27.613535 2621 kubelet.go:2428] "Starting kubelet main sync loop" Mar 14 00:16:27.613783 kubelet[2621]: E0314 00:16:27.613662 2621 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:16:27.654544 kubelet[2621]: I0314 00:16:27.654515 2621 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:16:27.654544 kubelet[2621]: I0314 00:16:27.654531 2621 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:16:27.654544 kubelet[2621]: I0314 00:16:27.654547 2621 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:16:27.654689 kubelet[2621]: I0314 00:16:27.654677 2621 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 14 00:16:27.654708 kubelet[2621]: I0314 00:16:27.654684 2621 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 14 00:16:27.654708 kubelet[2621]: I0314 00:16:27.654698 2621 policy_none.go:49] "None policy: Start" Mar 14 00:16:27.654708 kubelet[2621]: I0314 00:16:27.654706 2621 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:16:27.654767 kubelet[2621]: I0314 00:16:27.654714 2621 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:16:27.655176 kubelet[2621]: I0314 00:16:27.654794 2621 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 14 00:16:27.655176 kubelet[2621]: I0314 00:16:27.654819 
2621 policy_none.go:47] "Start" Mar 14 00:16:27.658494 kubelet[2621]: E0314 00:16:27.658448 2621 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:16:27.658625 kubelet[2621]: I0314 00:16:27.658606 2621 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:16:27.658656 kubelet[2621]: I0314 00:16:27.658619 2621 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:16:27.659546 kubelet[2621]: I0314 00:16:27.659303 2621 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:16:27.660137 kubelet[2621]: E0314 00:16:27.660068 2621 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:16:27.715008 kubelet[2621]: I0314 00:16:27.714940 2621 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.717092 kubelet[2621]: I0314 00:16:27.714958 2621 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.717092 kubelet[2621]: I0314 00:16:27.716845 2621 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.765347 kubelet[2621]: I0314 00:16:27.765076 2621 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.775208 kubelet[2621]: I0314 00:16:27.774566 2621 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.775208 kubelet[2621]: I0314 00:16:27.774863 2621 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.910042 kubelet[2621]: I0314 00:16:27.909494 2621 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/962e826301efe3dd61424c84bde1b9b3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-8ea3e741de\" (UID: \"962e826301efe3dd61424c84bde1b9b3\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.910042 kubelet[2621]: I0314 00:16:27.909574 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/962e826301efe3dd61424c84bde1b9b3-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-8ea3e741de\" (UID: \"962e826301efe3dd61424c84bde1b9b3\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.910042 kubelet[2621]: I0314 00:16:27.909659 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/962e826301efe3dd61424c84bde1b9b3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-8ea3e741de\" (UID: \"962e826301efe3dd61424c84bde1b9b3\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.910042 kubelet[2621]: I0314 00:16:27.909715 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/deae2179eb90b9d718bcb98645ac9b97-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-8ea3e741de\" (UID: \"deae2179eb90b9d718bcb98645ac9b97\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.910042 kubelet[2621]: I0314 00:16:27.909756 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22d85efd360838aa5e250374be4ff28b-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8ea3e741de\" (UID: 
\"22d85efd360838aa5e250374be4ff28b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.910435 kubelet[2621]: I0314 00:16:27.909780 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22d85efd360838aa5e250374be4ff28b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8ea3e741de\" (UID: \"22d85efd360838aa5e250374be4ff28b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.910435 kubelet[2621]: I0314 00:16:27.909846 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22d85efd360838aa5e250374be4ff28b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-8ea3e741de\" (UID: \"22d85efd360838aa5e250374be4ff28b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.910435 kubelet[2621]: I0314 00:16:27.909868 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/962e826301efe3dd61424c84bde1b9b3-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8ea3e741de\" (UID: \"962e826301efe3dd61424c84bde1b9b3\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.910435 kubelet[2621]: I0314 00:16:27.909889 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/962e826301efe3dd61424c84bde1b9b3-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8ea3e741de\" (UID: \"962e826301efe3dd61424c84bde1b9b3\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:27.979483 sudo[2659]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 14 00:16:27.980247 sudo[2659]: pam_unix(sudo:session): 
session opened for user root(uid=0) by core(uid=0) Mar 14 00:16:28.585707 kubelet[2621]: I0314 00:16:28.585634 2621 apiserver.go:52] "Watching apiserver" Mar 14 00:16:28.602600 sudo[2659]: pam_unix(sudo:session): session closed for user root Mar 14 00:16:28.608675 kubelet[2621]: I0314 00:16:28.608574 2621 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:16:28.645950 kubelet[2621]: I0314 00:16:28.645710 2621 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:28.648970 kubelet[2621]: I0314 00:16:28.648648 2621 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:28.659517 kubelet[2621]: E0314 00:16:28.659123 2621 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-8ea3e741de\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:28.665901 kubelet[2621]: E0314 00:16:28.665832 2621 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-8ea3e741de\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" Mar 14 00:16:28.705418 kubelet[2621]: I0314 00:16:28.705033 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8ea3e741de" podStartSLOduration=1.70500461 podStartE2EDuration="1.70500461s" podCreationTimestamp="2026-03-14 00:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:28.682673482 +0000 UTC m=+1.147064617" watchObservedRunningTime="2026-03-14 00:16:28.70500461 +0000 UTC m=+1.169395695" Mar 14 00:16:28.727822 kubelet[2621]: I0314 00:16:28.727588 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8ea3e741de" podStartSLOduration=1.7275659490000002 podStartE2EDuration="1.727565949s" podCreationTimestamp="2026-03-14 00:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:28.708048243 +0000 UTC m=+1.172439328" watchObservedRunningTime="2026-03-14 00:16:28.727565949 +0000 UTC m=+1.191957044" Mar 14 00:16:28.749813 kubelet[2621]: I0314 00:16:28.749232 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8ea3e741de" podStartSLOduration=1.7492090070000001 podStartE2EDuration="1.749209007s" podCreationTimestamp="2026-03-14 00:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:28.72914379 +0000 UTC m=+1.193534875" watchObservedRunningTime="2026-03-14 00:16:28.749209007 +0000 UTC m=+1.213600092" Mar 14 00:16:30.104137 sudo[1727]: pam_unix(sudo:session): session closed for user root Mar 14 00:16:30.224623 sshd[1724]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:30.234074 systemd[1]: sshd@6-204.168.148.110:22-68.220.241.50:39684.service: Deactivated successfully. Mar 14 00:16:30.239198 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:16:30.239551 systemd[1]: session-7.scope: Consumed 4.656s CPU time, 158.0M memory peak, 0B memory swap peak. Mar 14 00:16:30.240307 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:16:30.241928 systemd-logind[1490]: Removed session 7. 
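The `pod_startup_latency_tracker` lines above can be sanity-checked by hand: for static pods that never pulled an image (`firstStartedPulling` at the Go zero time), `podStartSLOduration` is simply `observedRunningTime` minus `podCreationTimestamp`. A small sketch, truncating the log's nanosecond fractions to microseconds so that `strptime`'s `%f` (which accepts at most six digits) can parse them:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f %z"

def slo_duration(created, observed):
    """Seconds between pod creation and the first observed running state."""
    return (datetime.strptime(observed, FMT)
            - datetime.strptime(created, FMT)).total_seconds()

# kube-scheduler pod from the log: created at 00:16:27, first observed
# running at 00:16:28.705004610 (truncated here to microsecond precision)
print(slo_duration("2026-03-14 00:16:27.0 +0000",
                   "2026-03-14 00:16:28.705004 +0000"))
# ~1.705004, matching podStartSLOduration=1.70500461s up to the truncation
```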
Mar 14 00:16:32.083751 kubelet[2621]: I0314 00:16:32.083710 2621 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:16:32.084239 containerd[1511]: time="2026-03-14T00:16:32.084050675Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:16:32.084568 kubelet[2621]: I0314 00:16:32.084256 2621 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:16:32.738711 systemd[1]: Created slice kubepods-besteffort-pod8b6a6466_d1be_4a0b_b344_4e3758389ce9.slice - libcontainer container kubepods-besteffort-pod8b6a6466_d1be_4a0b_b344_4e3758389ce9.slice. Mar 14 00:16:32.743437 kubelet[2621]: I0314 00:16:32.740294 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8b6a6466-d1be-4a0b-b344-4e3758389ce9-kube-proxy\") pod \"kube-proxy-djf7c\" (UID: \"8b6a6466-d1be-4a0b-b344-4e3758389ce9\") " pod="kube-system/kube-proxy-djf7c" Mar 14 00:16:32.743437 kubelet[2621]: I0314 00:16:32.740450 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b6a6466-d1be-4a0b-b344-4e3758389ce9-xtables-lock\") pod \"kube-proxy-djf7c\" (UID: \"8b6a6466-d1be-4a0b-b344-4e3758389ce9\") " pod="kube-system/kube-proxy-djf7c" Mar 14 00:16:32.743437 kubelet[2621]: I0314 00:16:32.740492 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b6a6466-d1be-4a0b-b344-4e3758389ce9-lib-modules\") pod \"kube-proxy-djf7c\" (UID: \"8b6a6466-d1be-4a0b-b344-4e3758389ce9\") " pod="kube-system/kube-proxy-djf7c" Mar 14 00:16:32.743437 kubelet[2621]: I0314 00:16:32.740533 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-q49hn\" (UniqueName: \"kubernetes.io/projected/8b6a6466-d1be-4a0b-b344-4e3758389ce9-kube-api-access-q49hn\") pod \"kube-proxy-djf7c\" (UID: \"8b6a6466-d1be-4a0b-b344-4e3758389ce9\") " pod="kube-system/kube-proxy-djf7c" Mar 14 00:16:32.764856 systemd[1]: Created slice kubepods-burstable-podf1560b43_82f7_40a9_ba90_539323b979cb.slice - libcontainer container kubepods-burstable-podf1560b43_82f7_40a9_ba90_539323b979cb.slice. Mar 14 00:16:32.841768 kubelet[2621]: I0314 00:16:32.841712 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1560b43-82f7-40a9-ba90-539323b979cb-hubble-tls\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.843025 kubelet[2621]: I0314 00:16:32.842013 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-cni-path\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.843025 kubelet[2621]: I0314 00:16:32.842046 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-etc-cni-netd\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.843025 kubelet[2621]: I0314 00:16:32.842070 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1560b43-82f7-40a9-ba90-539323b979cb-clustermesh-secrets\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.843025 kubelet[2621]: I0314 00:16:32.842095 2621 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-host-proc-sys-net\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.843025 kubelet[2621]: I0314 00:16:32.842136 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-cilium-run\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.843025 kubelet[2621]: I0314 00:16:32.842158 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csbh9\" (UniqueName: \"kubernetes.io/projected/f1560b43-82f7-40a9-ba90-539323b979cb-kube-api-access-csbh9\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.843485 kubelet[2621]: I0314 00:16:32.842207 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-hostproc\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.843485 kubelet[2621]: I0314 00:16:32.842229 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-cilium-cgroup\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.843485 kubelet[2621]: I0314 00:16:32.842253 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/f1560b43-82f7-40a9-ba90-539323b979cb-cilium-config-path\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.843485 kubelet[2621]: I0314 00:16:32.842278 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-bpf-maps\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.843485 kubelet[2621]: I0314 00:16:32.842300 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-lib-modules\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.843485 kubelet[2621]: I0314 00:16:32.842442 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-xtables-lock\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.843721 kubelet[2621]: I0314 00:16:32.842487 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-host-proc-sys-kernel\") pod \"cilium-mpm9d\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " pod="kube-system/cilium-mpm9d" Mar 14 00:16:32.848170 kubelet[2621]: E0314 00:16:32.848121 2621 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 14 00:16:32.848170 kubelet[2621]: E0314 00:16:32.848169 2621 projected.go:196] Error preparing data for projected volume 
kube-api-access-q49hn for pod kube-system/kube-proxy-djf7c: configmap "kube-root-ca.crt" not found Mar 14 00:16:32.848317 kubelet[2621]: E0314 00:16:32.848271 2621 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b6a6466-d1be-4a0b-b344-4e3758389ce9-kube-api-access-q49hn podName:8b6a6466-d1be-4a0b-b344-4e3758389ce9 nodeName:}" failed. No retries permitted until 2026-03-14 00:16:33.348232642 +0000 UTC m=+5.812623737 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q49hn" (UniqueName: "kubernetes.io/projected/8b6a6466-d1be-4a0b-b344-4e3758389ce9-kube-api-access-q49hn") pod "kube-proxy-djf7c" (UID: "8b6a6466-d1be-4a0b-b344-4e3758389ce9") : configmap "kube-root-ca.crt" not found Mar 14 00:16:33.080409 containerd[1511]: time="2026-03-14T00:16:33.080256145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mpm9d,Uid:f1560b43-82f7-40a9-ba90-539323b979cb,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:33.130743 containerd[1511]: time="2026-03-14T00:16:33.130427127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:33.130743 containerd[1511]: time="2026-03-14T00:16:33.130563147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:33.130743 containerd[1511]: time="2026-03-14T00:16:33.130629167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:33.131849 containerd[1511]: time="2026-03-14T00:16:33.131664088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:33.170584 systemd[1]: Started cri-containerd-222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12.scope - libcontainer container 222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12. Mar 14 00:16:33.191999 containerd[1511]: time="2026-03-14T00:16:33.191951528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mpm9d,Uid:f1560b43-82f7-40a9-ba90-539323b979cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\"" Mar 14 00:16:33.194029 containerd[1511]: time="2026-03-14T00:16:33.193988370Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 14 00:16:33.226772 systemd[1]: Created slice kubepods-besteffort-pod9d11115d_b9eb_430a_b2a9_ef8cddd745cf.slice - libcontainer container kubepods-besteffort-pod9d11115d_b9eb_430a_b2a9_ef8cddd745cf.slice. 
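The earlier mount failure for `kube-api-access-q49hn` was parked with "No retries permitted until ... (durationBeforeRetry 500ms)": the kubelet's `nestedpendingoperations` layer retries failed volume operations with exponential backoff starting at 500ms, which is why the mount simply succeeds on a later pass once `kube-root-ca.crt` exists. A sketch of that schedule; the doubling factor and the cap below are illustrative assumptions, not values read from the log:

```python
# Exponential backoff schedule for retried volume operations: start at the
# initial delay, double per consecutive failure, clamp at a cap. The 2x
# factor and 120s cap are assumptions chosen for illustration.

def backoff_schedule(initial=0.5, factor=2.0, cap=120.0, attempts=6):
    delays, d = [], initial
    for _ in range(attempts):
        delays.append(min(d, cap))
        d *= factor
    return delays

print(backoff_schedule())  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
```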
Mar 14 00:16:33.245711 kubelet[2621]: I0314 00:16:33.245667 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d11115d-b9eb-430a-b2a9-ef8cddd745cf-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-prsqt\" (UID: \"9d11115d-b9eb-430a-b2a9-ef8cddd745cf\") " pod="kube-system/cilium-operator-6f9c7c5859-prsqt" Mar 14 00:16:33.246102 kubelet[2621]: I0314 00:16:33.245727 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d67kl\" (UniqueName: \"kubernetes.io/projected/9d11115d-b9eb-430a-b2a9-ef8cddd745cf-kube-api-access-d67kl\") pod \"cilium-operator-6f9c7c5859-prsqt\" (UID: \"9d11115d-b9eb-430a-b2a9-ef8cddd745cf\") " pod="kube-system/cilium-operator-6f9c7c5859-prsqt" Mar 14 00:16:33.534264 containerd[1511]: time="2026-03-14T00:16:33.534202363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-prsqt,Uid:9d11115d-b9eb-430a-b2a9-ef8cddd745cf,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:33.570476 containerd[1511]: time="2026-03-14T00:16:33.570292753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:33.574441 containerd[1511]: time="2026-03-14T00:16:33.572502755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:33.574441 containerd[1511]: time="2026-03-14T00:16:33.572534705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:33.574441 containerd[1511]: time="2026-03-14T00:16:33.572641995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:33.615009 systemd[1]: Started cri-containerd-d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62.scope - libcontainer container d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62. Mar 14 00:16:33.655135 containerd[1511]: time="2026-03-14T00:16:33.654898924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-djf7c,Uid:8b6a6466-d1be-4a0b-b344-4e3758389ce9,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:33.669792 containerd[1511]: time="2026-03-14T00:16:33.669713446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-prsqt,Uid:9d11115d-b9eb-430a-b2a9-ef8cddd745cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\"" Mar 14 00:16:33.701461 containerd[1511]: time="2026-03-14T00:16:33.701101912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:33.701461 containerd[1511]: time="2026-03-14T00:16:33.701202052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:33.701461 containerd[1511]: time="2026-03-14T00:16:33.701218712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:33.701771 containerd[1511]: time="2026-03-14T00:16:33.701699913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:33.730680 systemd[1]: Started cri-containerd-03532886c07e1e14e264c3c93ef10b1f9e4d4ac18ad4c7a72c8b8e628cbdf541.scope - libcontainer container 03532886c07e1e14e264c3c93ef10b1f9e4d4ac18ad4c7a72c8b8e628cbdf541. 
Mar 14 00:16:33.769913 containerd[1511]: time="2026-03-14T00:16:33.769820470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-djf7c,Uid:8b6a6466-d1be-4a0b-b344-4e3758389ce9,Namespace:kube-system,Attempt:0,} returns sandbox id \"03532886c07e1e14e264c3c93ef10b1f9e4d4ac18ad4c7a72c8b8e628cbdf541\"" Mar 14 00:16:33.777028 containerd[1511]: time="2026-03-14T00:16:33.776968076Z" level=info msg="CreateContainer within sandbox \"03532886c07e1e14e264c3c93ef10b1f9e4d4ac18ad4c7a72c8b8e628cbdf541\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:16:33.793735 containerd[1511]: time="2026-03-14T00:16:33.793607629Z" level=info msg="CreateContainer within sandbox \"03532886c07e1e14e264c3c93ef10b1f9e4d4ac18ad4c7a72c8b8e628cbdf541\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"078c3e94b71f2c2c51add00755a0b70a7715f51387eb8241e18ed6fce470b7bb\"" Mar 14 00:16:33.796498 containerd[1511]: time="2026-03-14T00:16:33.795593491Z" level=info msg="StartContainer for \"078c3e94b71f2c2c51add00755a0b70a7715f51387eb8241e18ed6fce470b7bb\"" Mar 14 00:16:33.819541 systemd[1]: Started cri-containerd-078c3e94b71f2c2c51add00755a0b70a7715f51387eb8241e18ed6fce470b7bb.scope - libcontainer container 078c3e94b71f2c2c51add00755a0b70a7715f51387eb8241e18ed6fce470b7bb. 
Mar 14 00:16:33.854134 containerd[1511]: time="2026-03-14T00:16:33.854072970Z" level=info msg="StartContainer for \"078c3e94b71f2c2c51add00755a0b70a7715f51387eb8241e18ed6fce470b7bb\" returns successfully" Mar 14 00:16:34.676089 kubelet[2621]: I0314 00:16:34.676000 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-djf7c" podStartSLOduration=2.675978234 podStartE2EDuration="2.675978234s" podCreationTimestamp="2026-03-14 00:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:34.673624253 +0000 UTC m=+7.138015348" watchObservedRunningTime="2026-03-14 00:16:34.675978234 +0000 UTC m=+7.140369319" Mar 14 00:16:37.451497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3000648930.mount: Deactivated successfully. Mar 14 00:16:38.923639 containerd[1511]: time="2026-03-14T00:16:38.923572913Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:38.924615 containerd[1511]: time="2026-03-14T00:16:38.924567474Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 14 00:16:38.925503 containerd[1511]: time="2026-03-14T00:16:38.925327444Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:38.926565 containerd[1511]: time="2026-03-14T00:16:38.926380155Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.732333165s" Mar 14 00:16:38.926565 containerd[1511]: time="2026-03-14T00:16:38.926416655Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 14 00:16:38.928591 containerd[1511]: time="2026-03-14T00:16:38.928325257Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 14 00:16:38.931376 containerd[1511]: time="2026-03-14T00:16:38.931322059Z" level=info msg="CreateContainer within sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:16:38.946351 containerd[1511]: time="2026-03-14T00:16:38.946296902Z" level=info msg="CreateContainer within sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add\"" Mar 14 00:16:38.947586 containerd[1511]: time="2026-03-14T00:16:38.947524153Z" level=info msg="StartContainer for \"15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add\"" Mar 14 00:16:38.975128 systemd[1]: run-containerd-runc-k8s.io-15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add-runc.MZTM0j.mount: Deactivated successfully. Mar 14 00:16:38.985698 systemd[1]: Started cri-containerd-15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add.scope - libcontainer container 15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add. 
Mar 14 00:16:39.014721 containerd[1511]: time="2026-03-14T00:16:39.014593939Z" level=info msg="StartContainer for \"15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add\" returns successfully" Mar 14 00:16:39.027580 systemd[1]: cri-containerd-15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add.scope: Deactivated successfully. Mar 14 00:16:39.237021 containerd[1511]: time="2026-03-14T00:16:39.236922684Z" level=info msg="shim disconnected" id=15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add namespace=k8s.io Mar 14 00:16:39.237021 containerd[1511]: time="2026-03-14T00:16:39.236998834Z" level=warning msg="cleaning up after shim disconnected" id=15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add namespace=k8s.io Mar 14 00:16:39.237021 containerd[1511]: time="2026-03-14T00:16:39.237015454Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:16:39.686955 containerd[1511]: time="2026-03-14T00:16:39.686891839Z" level=info msg="CreateContainer within sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:16:39.701772 containerd[1511]: time="2026-03-14T00:16:39.701705891Z" level=info msg="CreateContainer within sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110\"" Mar 14 00:16:39.702638 containerd[1511]: time="2026-03-14T00:16:39.702541782Z" level=info msg="StartContainer for \"afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110\"" Mar 14 00:16:39.737492 systemd[1]: Started cri-containerd-afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110.scope - libcontainer container afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110. 
Mar 14 00:16:39.761396 containerd[1511]: time="2026-03-14T00:16:39.761218771Z" level=info msg="StartContainer for \"afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110\" returns successfully" Mar 14 00:16:39.773116 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:16:39.773566 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:16:39.773655 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:16:39.780348 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:16:39.780539 systemd[1]: cri-containerd-afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110.scope: Deactivated successfully. Mar 14 00:16:39.797657 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:16:39.808177 containerd[1511]: time="2026-03-14T00:16:39.808134980Z" level=info msg="shim disconnected" id=afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110 namespace=k8s.io Mar 14 00:16:39.808380 containerd[1511]: time="2026-03-14T00:16:39.808327980Z" level=warning msg="cleaning up after shim disconnected" id=afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110 namespace=k8s.io Mar 14 00:16:39.808380 containerd[1511]: time="2026-03-14T00:16:39.808340990Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:16:39.942161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add-rootfs.mount: Deactivated successfully. 
Mar 14 00:16:40.690749 containerd[1511]: time="2026-03-14T00:16:40.690575165Z" level=info msg="CreateContainer within sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 14 00:16:40.723904 containerd[1511]: time="2026-03-14T00:16:40.723700933Z" level=info msg="CreateContainer within sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e\"" Mar 14 00:16:40.728404 containerd[1511]: time="2026-03-14T00:16:40.726607885Z" level=info msg="StartContainer for \"d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e\"" Mar 14 00:16:40.796672 systemd[1]: Started cri-containerd-d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e.scope - libcontainer container d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e. Mar 14 00:16:40.853574 containerd[1511]: time="2026-03-14T00:16:40.853460721Z" level=info msg="StartContainer for \"d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e\" returns successfully" Mar 14 00:16:40.859127 systemd[1]: cri-containerd-d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e.scope: Deactivated successfully. 
Mar 14 00:16:40.894678 containerd[1511]: time="2026-03-14T00:16:40.894585655Z" level=info msg="shim disconnected" id=d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e namespace=k8s.io Mar 14 00:16:40.894678 containerd[1511]: time="2026-03-14T00:16:40.894661105Z" level=warning msg="cleaning up after shim disconnected" id=d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e namespace=k8s.io Mar 14 00:16:40.894678 containerd[1511]: time="2026-03-14T00:16:40.894680345Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:16:40.944784 systemd[1]: run-containerd-runc-k8s.io-d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e-runc.u3nWc0.mount: Deactivated successfully. Mar 14 00:16:40.945662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e-rootfs.mount: Deactivated successfully. Mar 14 00:16:41.278548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656680075.mount: Deactivated successfully. 
Mar 14 00:16:41.695804 containerd[1511]: time="2026-03-14T00:16:41.695698892Z" level=info msg="CreateContainer within sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 14 00:16:41.720315 containerd[1511]: time="2026-03-14T00:16:41.720248976Z" level=info msg="CreateContainer within sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e\"" Mar 14 00:16:41.722493 containerd[1511]: time="2026-03-14T00:16:41.722289111Z" level=info msg="StartContainer for \"522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e\"" Mar 14 00:16:41.765551 systemd[1]: Started cri-containerd-522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e.scope - libcontainer container 522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e. Mar 14 00:16:41.806922 systemd[1]: cri-containerd-522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e.scope: Deactivated successfully. 
Mar 14 00:16:41.811646 containerd[1511]: time="2026-03-14T00:16:41.811358967Z" level=info msg="StartContainer for \"522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e\" returns successfully" Mar 14 00:16:41.839872 containerd[1511]: time="2026-03-14T00:16:41.839695358Z" level=info msg="shim disconnected" id=522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e namespace=k8s.io Mar 14 00:16:41.839872 containerd[1511]: time="2026-03-14T00:16:41.839749117Z" level=warning msg="cleaning up after shim disconnected" id=522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e namespace=k8s.io Mar 14 00:16:41.839872 containerd[1511]: time="2026-03-14T00:16:41.839756176Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:16:42.628497 containerd[1511]: time="2026-03-14T00:16:42.628093370Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:42.629114 containerd[1511]: time="2026-03-14T00:16:42.629067856Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 14 00:16:42.629955 containerd[1511]: time="2026-03-14T00:16:42.629922775Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:42.631033 containerd[1511]: time="2026-03-14T00:16:42.630955759Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.702600242s" Mar 14 00:16:42.631033 containerd[1511]: time="2026-03-14T00:16:42.630978709Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 14 00:16:42.634958 containerd[1511]: time="2026-03-14T00:16:42.634924149Z" level=info msg="CreateContainer within sandbox \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 14 00:16:42.653379 containerd[1511]: time="2026-03-14T00:16:42.653337548Z" level=info msg="CreateContainer within sandbox \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\"" Mar 14 00:16:42.655076 containerd[1511]: time="2026-03-14T00:16:42.654155278Z" level=info msg="StartContainer for \"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\"" Mar 14 00:16:42.687474 systemd[1]: Started cri-containerd-615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0.scope - libcontainer container 615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0. 
Mar 14 00:16:42.697907 containerd[1511]: time="2026-03-14T00:16:42.697329307Z" level=info msg="CreateContainer within sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 14 00:16:42.719907 containerd[1511]: time="2026-03-14T00:16:42.719839243Z" level=info msg="CreateContainer within sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\"" Mar 14 00:16:42.720667 containerd[1511]: time="2026-03-14T00:16:42.720621483Z" level=info msg="StartContainer for \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\"" Mar 14 00:16:42.746792 containerd[1511]: time="2026-03-14T00:16:42.746746059Z" level=info msg="StartContainer for \"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\" returns successfully" Mar 14 00:16:42.750987 systemd[1]: Started cri-containerd-04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1.scope - libcontainer container 04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1. Mar 14 00:16:42.791110 containerd[1511]: time="2026-03-14T00:16:42.791052710Z" level=info msg="StartContainer for \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\" returns successfully" Mar 14 00:16:42.937144 kubelet[2621]: I0314 00:16:42.937109 2621 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 14 00:16:42.942774 systemd[1]: run-containerd-runc-k8s.io-615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0-runc.d320qZ.mount: Deactivated successfully. Mar 14 00:16:43.008996 systemd[1]: Created slice kubepods-burstable-podc1491e8a_9740_4437_bec3_76398aa37fb8.slice - libcontainer container kubepods-burstable-podc1491e8a_9740_4437_bec3_76398aa37fb8.slice. 
Mar 14 00:16:43.015337 systemd[1]: Created slice kubepods-burstable-poddb725e06_562b_42a1_a9a6_f123b80276f5.slice - libcontainer container kubepods-burstable-poddb725e06_562b_42a1_a9a6_f123b80276f5.slice. Mar 14 00:16:43.017549 kubelet[2621]: I0314 00:16:43.016634 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj8n4\" (UniqueName: \"kubernetes.io/projected/c1491e8a-9740-4437-bec3-76398aa37fb8-kube-api-access-dj8n4\") pod \"coredns-66bc5c9577-b74vk\" (UID: \"c1491e8a-9740-4437-bec3-76398aa37fb8\") " pod="kube-system/coredns-66bc5c9577-b74vk" Mar 14 00:16:43.017549 kubelet[2621]: I0314 00:16:43.016656 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1491e8a-9740-4437-bec3-76398aa37fb8-config-volume\") pod \"coredns-66bc5c9577-b74vk\" (UID: \"c1491e8a-9740-4437-bec3-76398aa37fb8\") " pod="kube-system/coredns-66bc5c9577-b74vk" Mar 14 00:16:43.017549 kubelet[2621]: I0314 00:16:43.016670 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57fgb\" (UniqueName: \"kubernetes.io/projected/db725e06-562b-42a1-a9a6-f123b80276f5-kube-api-access-57fgb\") pod \"coredns-66bc5c9577-bvcxf\" (UID: \"db725e06-562b-42a1-a9a6-f123b80276f5\") " pod="kube-system/coredns-66bc5c9577-bvcxf" Mar 14 00:16:43.017549 kubelet[2621]: I0314 00:16:43.016684 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db725e06-562b-42a1-a9a6-f123b80276f5-config-volume\") pod \"coredns-66bc5c9577-bvcxf\" (UID: \"db725e06-562b-42a1-a9a6-f123b80276f5\") " pod="kube-system/coredns-66bc5c9577-bvcxf" Mar 14 00:16:43.318053 containerd[1511]: time="2026-03-14T00:16:43.317499040Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-b74vk,Uid:c1491e8a-9740-4437-bec3-76398aa37fb8,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:43.321670 containerd[1511]: time="2026-03-14T00:16:43.321322791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bvcxf,Uid:db725e06-562b-42a1-a9a6-f123b80276f5,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:43.742901 kubelet[2621]: I0314 00:16:43.742806 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mpm9d" podStartSLOduration=6.008955743 podStartE2EDuration="11.742783929s" podCreationTimestamp="2026-03-14 00:16:32 +0000 UTC" firstStartedPulling="2026-03-14 00:16:33.19366673 +0000 UTC m=+5.658057785" lastFinishedPulling="2026-03-14 00:16:38.927494916 +0000 UTC m=+11.391885971" observedRunningTime="2026-03-14 00:16:43.73167226 +0000 UTC m=+16.196063375" watchObservedRunningTime="2026-03-14 00:16:43.742783929 +0000 UTC m=+16.207175024" Mar 14 00:16:43.743796 kubelet[2621]: I0314 00:16:43.742988 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-prsqt" podStartSLOduration=1.783491669 podStartE2EDuration="10.742982264s" podCreationTimestamp="2026-03-14 00:16:33 +0000 UTC" firstStartedPulling="2026-03-14 00:16:33.672490868 +0000 UTC m=+6.136881963" lastFinishedPulling="2026-03-14 00:16:42.631981503 +0000 UTC m=+15.096372558" observedRunningTime="2026-03-14 00:16:43.742486526 +0000 UTC m=+16.206877611" watchObservedRunningTime="2026-03-14 00:16:43.742982264 +0000 UTC m=+16.207373359" Mar 14 00:16:45.859104 systemd-networkd[1401]: cilium_host: Link UP Mar 14 00:16:45.859328 systemd-networkd[1401]: cilium_net: Link UP Mar 14 00:16:45.859333 systemd-networkd[1401]: cilium_net: Gained carrier Mar 14 00:16:45.859898 systemd-networkd[1401]: cilium_host: Gained carrier Mar 14 00:16:45.861468 systemd-networkd[1401]: cilium_host: Gained IPv6LL Mar 14 00:16:46.011890 systemd-networkd[1401]: 
cilium_vxlan: Link UP Mar 14 00:16:46.011910 systemd-networkd[1401]: cilium_vxlan: Gained carrier Mar 14 00:16:46.184556 kernel: NET: Registered PF_ALG protocol family Mar 14 00:16:46.828089 systemd-networkd[1401]: cilium_net: Gained IPv6LL Mar 14 00:16:46.847023 systemd-networkd[1401]: lxc_health: Link UP Mar 14 00:16:46.858435 systemd-networkd[1401]: lxc_health: Gained carrier Mar 14 00:16:47.395352 systemd-networkd[1401]: lxc0000c73d96e1: Link UP Mar 14 00:16:47.404242 kernel: eth0: renamed from tmpf2e8b Mar 14 00:16:47.410984 systemd-networkd[1401]: lxc0000c73d96e1: Gained carrier Mar 14 00:16:47.426423 kernel: eth0: renamed from tmpb030b Mar 14 00:16:47.427770 systemd-networkd[1401]: lxc12103a62485f: Link UP Mar 14 00:16:47.431238 systemd-networkd[1401]: lxc12103a62485f: Gained carrier Mar 14 00:16:47.595618 systemd-networkd[1401]: cilium_vxlan: Gained IPv6LL Mar 14 00:16:48.428083 systemd-networkd[1401]: lxc_health: Gained IPv6LL Mar 14 00:16:48.555594 systemd-networkd[1401]: lxc0000c73d96e1: Gained IPv6LL Mar 14 00:16:49.390217 systemd-networkd[1401]: lxc12103a62485f: Gained IPv6LL Mar 14 00:16:50.098230 containerd[1511]: time="2026-03-14T00:16:50.098028461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:50.098230 containerd[1511]: time="2026-03-14T00:16:50.098077030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:50.098230 containerd[1511]: time="2026-03-14T00:16:50.098088080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:50.098230 containerd[1511]: time="2026-03-14T00:16:50.098153799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:50.136556 systemd[1]: Started cri-containerd-b030b6525a640e241c861437526cde5402f39b08cd46506d3f2de83a4899a6fa.scope - libcontainer container b030b6525a640e241c861437526cde5402f39b08cd46506d3f2de83a4899a6fa. Mar 14 00:16:50.161930 containerd[1511]: time="2026-03-14T00:16:50.161823079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:50.161930 containerd[1511]: time="2026-03-14T00:16:50.161874488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:50.161930 containerd[1511]: time="2026-03-14T00:16:50.161886718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:50.162434 containerd[1511]: time="2026-03-14T00:16:50.161995427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:50.200612 systemd[1]: Started cri-containerd-f2e8be5828f4230d5c919aaa3dd85548b9b4302a9f3b63410afda3e2010c2baa.scope - libcontainer container f2e8be5828f4230d5c919aaa3dd85548b9b4302a9f3b63410afda3e2010c2baa. 
Mar 14 00:16:50.204742 containerd[1511]: time="2026-03-14T00:16:50.203655948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b74vk,Uid:c1491e8a-9740-4437-bec3-76398aa37fb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b030b6525a640e241c861437526cde5402f39b08cd46506d3f2de83a4899a6fa\"" Mar 14 00:16:50.211301 containerd[1511]: time="2026-03-14T00:16:50.211230408Z" level=info msg="CreateContainer within sandbox \"b030b6525a640e241c861437526cde5402f39b08cd46506d3f2de83a4899a6fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:16:50.226086 containerd[1511]: time="2026-03-14T00:16:50.226044211Z" level=info msg="CreateContainer within sandbox \"b030b6525a640e241c861437526cde5402f39b08cd46506d3f2de83a4899a6fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56a6e3ba83b2f9a3a5df7c079d986aff8d119fdf3a9b37e8b01c32c4508dbb69\"" Mar 14 00:16:50.229630 containerd[1511]: time="2026-03-14T00:16:50.229086356Z" level=info msg="StartContainer for \"56a6e3ba83b2f9a3a5df7c079d986aff8d119fdf3a9b37e8b01c32c4508dbb69\"" Mar 14 00:16:50.264749 systemd[1]: Started cri-containerd-56a6e3ba83b2f9a3a5df7c079d986aff8d119fdf3a9b37e8b01c32c4508dbb69.scope - libcontainer container 56a6e3ba83b2f9a3a5df7c079d986aff8d119fdf3a9b37e8b01c32c4508dbb69. 
Mar 14 00:16:50.283990 containerd[1511]: time="2026-03-14T00:16:50.283944766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bvcxf,Uid:db725e06-562b-42a1-a9a6-f123b80276f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2e8be5828f4230d5c919aaa3dd85548b9b4302a9f3b63410afda3e2010c2baa\"" Mar 14 00:16:50.292154 containerd[1511]: time="2026-03-14T00:16:50.291935259Z" level=info msg="CreateContainer within sandbox \"f2e8be5828f4230d5c919aaa3dd85548b9b4302a9f3b63410afda3e2010c2baa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:16:50.308908 containerd[1511]: time="2026-03-14T00:16:50.307887686Z" level=info msg="StartContainer for \"56a6e3ba83b2f9a3a5df7c079d986aff8d119fdf3a9b37e8b01c32c4508dbb69\" returns successfully" Mar 14 00:16:50.309682 containerd[1511]: time="2026-03-14T00:16:50.309552411Z" level=info msg="CreateContainer within sandbox \"f2e8be5828f4230d5c919aaa3dd85548b9b4302a9f3b63410afda3e2010c2baa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"739ec090d862c3d85894cffd303ca92d051a654a8ff331e849b751c055241da1\"" Mar 14 00:16:50.310127 containerd[1511]: time="2026-03-14T00:16:50.310001195Z" level=info msg="StartContainer for \"739ec090d862c3d85894cffd303ca92d051a654a8ff331e849b751c055241da1\"" Mar 14 00:16:50.337722 systemd[1]: Started cri-containerd-739ec090d862c3d85894cffd303ca92d051a654a8ff331e849b751c055241da1.scope - libcontainer container 739ec090d862c3d85894cffd303ca92d051a654a8ff331e849b751c055241da1. 
Mar 14 00:16:50.368779 containerd[1511]: time="2026-03-14T00:16:50.368196985Z" level=info msg="StartContainer for \"739ec090d862c3d85894cffd303ca92d051a654a8ff331e849b751c055241da1\" returns successfully" Mar 14 00:16:50.757643 kubelet[2621]: I0314 00:16:50.757292 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b74vk" podStartSLOduration=17.757268402 podStartE2EDuration="17.757268402s" podCreationTimestamp="2026-03-14 00:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:50.742319741 +0000 UTC m=+23.206710826" watchObservedRunningTime="2026-03-14 00:16:50.757268402 +0000 UTC m=+23.221659497" Mar 14 00:16:50.757643 kubelet[2621]: I0314 00:16:50.757507 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bvcxf" podStartSLOduration=17.75749935 podStartE2EDuration="17.75749935s" podCreationTimestamp="2026-03-14 00:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:50.754513093 +0000 UTC m=+23.218904138" watchObservedRunningTime="2026-03-14 00:16:50.75749935 +0000 UTC m=+23.221890445" Mar 14 00:17:51.060990 systemd[1]: Started sshd@7-204.168.148.110:22-68.220.241.50:34360.service - OpenSSH per-connection server daemon (68.220.241.50:34360). Mar 14 00:17:51.826442 sshd[3999]: Accepted publickey for core from 68.220.241.50 port 34360 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:17:51.829882 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:17:51.838627 systemd-logind[1490]: New session 8 of user core. Mar 14 00:17:51.844663 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 14 00:17:52.454795 sshd[3999]: pam_unix(sshd:session): session closed for user core Mar 14 00:17:52.461139 systemd[1]: sshd@7-204.168.148.110:22-68.220.241.50:34360.service: Deactivated successfully. Mar 14 00:17:52.464183 systemd[1]: session-8.scope: Deactivated successfully. Mar 14 00:17:52.465530 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:17:52.469019 systemd-logind[1490]: Removed session 8. Mar 14 00:17:57.593769 systemd[1]: Started sshd@8-204.168.148.110:22-68.220.241.50:59574.service - OpenSSH per-connection server daemon (68.220.241.50:59574). Mar 14 00:17:58.349020 sshd[4013]: Accepted publickey for core from 68.220.241.50 port 59574 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:17:58.351806 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:17:58.360499 systemd-logind[1490]: New session 9 of user core. Mar 14 00:17:58.365613 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 14 00:17:58.967038 sshd[4013]: pam_unix(sshd:session): session closed for user core Mar 14 00:17:58.972613 systemd[1]: sshd@8-204.168.148.110:22-68.220.241.50:59574.service: Deactivated successfully. Mar 14 00:17:58.976863 systemd[1]: session-9.scope: Deactivated successfully. Mar 14 00:17:58.979758 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit. Mar 14 00:17:58.981760 systemd-logind[1490]: Removed session 9. Mar 14 00:18:04.111024 systemd[1]: Started sshd@9-204.168.148.110:22-68.220.241.50:37914.service - OpenSSH per-connection server daemon (68.220.241.50:37914). Mar 14 00:18:04.862192 sshd[4027]: Accepted publickey for core from 68.220.241.50 port 37914 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:04.863663 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:04.872215 systemd-logind[1490]: New session 10 of user core. 
Mar 14 00:18:04.878619 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 14 00:18:05.476616 sshd[4027]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:05.479931 systemd[1]: sshd@9-204.168.148.110:22-68.220.241.50:37914.service: Deactivated successfully. Mar 14 00:18:05.482221 systemd[1]: session-10.scope: Deactivated successfully. Mar 14 00:18:05.484622 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit. Mar 14 00:18:05.485919 systemd-logind[1490]: Removed session 10. Mar 14 00:18:05.609758 systemd[1]: Started sshd@10-204.168.148.110:22-68.220.241.50:37924.service - OpenSSH per-connection server daemon (68.220.241.50:37924). Mar 14 00:18:06.342696 sshd[4043]: Accepted publickey for core from 68.220.241.50 port 37924 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:06.348548 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:06.358227 systemd-logind[1490]: New session 11 of user core. Mar 14 00:18:06.363669 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 14 00:18:06.978842 sshd[4043]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:06.985994 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit. Mar 14 00:18:06.987894 systemd[1]: sshd@10-204.168.148.110:22-68.220.241.50:37924.service: Deactivated successfully. Mar 14 00:18:06.991816 systemd[1]: session-11.scope: Deactivated successfully. Mar 14 00:18:06.993777 systemd-logind[1490]: Removed session 11. Mar 14 00:18:07.117989 systemd[1]: Started sshd@11-204.168.148.110:22-68.220.241.50:37930.service - OpenSSH per-connection server daemon (68.220.241.50:37930). 
Mar 14 00:18:07.882453 sshd[4054]: Accepted publickey for core from 68.220.241.50 port 37930 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:07.884950 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:07.895249 systemd-logind[1490]: New session 12 of user core. Mar 14 00:18:07.902882 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 14 00:18:08.517559 sshd[4054]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:08.524203 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit. Mar 14 00:18:08.525583 systemd[1]: sshd@11-204.168.148.110:22-68.220.241.50:37930.service: Deactivated successfully. Mar 14 00:18:08.531194 systemd[1]: session-12.scope: Deactivated successfully. Mar 14 00:18:08.533641 systemd-logind[1490]: Removed session 12. Mar 14 00:18:13.654853 systemd[1]: Started sshd@12-204.168.148.110:22-68.220.241.50:36702.service - OpenSSH per-connection server daemon (68.220.241.50:36702). Mar 14 00:18:14.411427 sshd[4067]: Accepted publickey for core from 68.220.241.50 port 36702 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:14.413999 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:14.421922 systemd-logind[1490]: New session 13 of user core. Mar 14 00:18:14.432682 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 14 00:18:15.036906 sshd[4067]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:15.044272 systemd[1]: sshd@12-204.168.148.110:22-68.220.241.50:36702.service: Deactivated successfully. Mar 14 00:18:15.048973 systemd[1]: session-13.scope: Deactivated successfully. Mar 14 00:18:15.050455 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit. Mar 14 00:18:15.052262 systemd-logind[1490]: Removed session 13. 
Mar 14 00:18:15.175425 systemd[1]: Started sshd@13-204.168.148.110:22-68.220.241.50:36706.service - OpenSSH per-connection server daemon (68.220.241.50:36706). Mar 14 00:18:15.940435 sshd[4080]: Accepted publickey for core from 68.220.241.50 port 36706 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:15.943510 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:15.952532 systemd-logind[1490]: New session 14 of user core. Mar 14 00:18:15.961644 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 14 00:18:16.570562 sshd[4080]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:16.575643 systemd[1]: sshd@13-204.168.148.110:22-68.220.241.50:36706.service: Deactivated successfully. Mar 14 00:18:16.579879 systemd[1]: session-14.scope: Deactivated successfully. Mar 14 00:18:16.583522 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit. Mar 14 00:18:16.586129 systemd-logind[1490]: Removed session 14. Mar 14 00:18:16.711917 systemd[1]: Started sshd@14-204.168.148.110:22-68.220.241.50:36718.service - OpenSSH per-connection server daemon (68.220.241.50:36718). Mar 14 00:18:17.490096 sshd[4091]: Accepted publickey for core from 68.220.241.50 port 36718 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:17.493409 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:17.502655 systemd-logind[1490]: New session 15 of user core. Mar 14 00:18:17.508802 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 14 00:18:18.539701 sshd[4091]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:18.544469 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit. Mar 14 00:18:18.544948 systemd[1]: sshd@14-204.168.148.110:22-68.220.241.50:36718.service: Deactivated successfully. 
Mar 14 00:18:18.547176 systemd[1]: session-15.scope: Deactivated successfully. Mar 14 00:18:18.548321 systemd-logind[1490]: Removed session 15. Mar 14 00:18:18.672805 systemd[1]: Started sshd@15-204.168.148.110:22-68.220.241.50:36720.service - OpenSSH per-connection server daemon (68.220.241.50:36720). Mar 14 00:18:19.417402 sshd[4107]: Accepted publickey for core from 68.220.241.50 port 36720 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:19.420173 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:19.427197 systemd-logind[1490]: New session 16 of user core. Mar 14 00:18:19.432520 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 14 00:18:20.124565 sshd[4107]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:20.133079 systemd[1]: sshd@15-204.168.148.110:22-68.220.241.50:36720.service: Deactivated successfully. Mar 14 00:18:20.136910 systemd[1]: session-16.scope: Deactivated successfully. Mar 14 00:18:20.138588 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit. Mar 14 00:18:20.140943 systemd-logind[1490]: Removed session 16. Mar 14 00:18:20.265169 systemd[1]: Started sshd@16-204.168.148.110:22-68.220.241.50:36736.service - OpenSSH per-connection server daemon (68.220.241.50:36736). Mar 14 00:18:21.015434 sshd[4120]: Accepted publickey for core from 68.220.241.50 port 36736 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:21.018148 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:21.026914 systemd-logind[1490]: New session 17 of user core. Mar 14 00:18:21.031594 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 14 00:18:21.603896 sshd[4120]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:21.609221 systemd[1]: sshd@16-204.168.148.110:22-68.220.241.50:36736.service: Deactivated successfully. 
Mar 14 00:18:21.613134 systemd[1]: session-17.scope: Deactivated successfully. Mar 14 00:18:21.617484 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit. Mar 14 00:18:21.621181 systemd-logind[1490]: Removed session 17. Mar 14 00:18:26.739941 systemd[1]: Started sshd@17-204.168.148.110:22-68.220.241.50:48280.service - OpenSSH per-connection server daemon (68.220.241.50:48280). Mar 14 00:18:27.496942 sshd[4135]: Accepted publickey for core from 68.220.241.50 port 48280 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:27.499451 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:27.505661 systemd-logind[1490]: New session 18 of user core. Mar 14 00:18:27.509558 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 14 00:18:28.077616 sshd[4135]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:28.081141 systemd[1]: sshd@17-204.168.148.110:22-68.220.241.50:48280.service: Deactivated successfully. Mar 14 00:18:28.083876 systemd[1]: session-18.scope: Deactivated successfully. Mar 14 00:18:28.085280 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit. Mar 14 00:18:28.086601 systemd-logind[1490]: Removed session 18. Mar 14 00:18:33.205028 systemd[1]: Started sshd@18-204.168.148.110:22-68.220.241.50:50208.service - OpenSSH per-connection server daemon (68.220.241.50:50208). Mar 14 00:18:33.942648 sshd[4149]: Accepted publickey for core from 68.220.241.50 port 50208 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:33.945729 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:33.954470 systemd-logind[1490]: New session 19 of user core. Mar 14 00:18:33.963671 systemd[1]: Started session-19.scope - Session 19 of User core. 
Mar 14 00:18:34.559805 sshd[4149]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:34.566202 systemd[1]: sshd@18-204.168.148.110:22-68.220.241.50:50208.service: Deactivated successfully. Mar 14 00:18:34.571081 systemd[1]: session-19.scope: Deactivated successfully. Mar 14 00:18:34.574904 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit. Mar 14 00:18:34.577987 systemd-logind[1490]: Removed session 19. Mar 14 00:18:34.699668 systemd[1]: Started sshd@19-204.168.148.110:22-68.220.241.50:50212.service - OpenSSH per-connection server daemon (68.220.241.50:50212). Mar 14 00:18:35.463805 sshd[4164]: Accepted publickey for core from 68.220.241.50 port 50212 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:35.468968 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:35.481427 systemd-logind[1490]: New session 20 of user core. Mar 14 00:18:35.490024 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 14 00:18:37.213345 containerd[1511]: time="2026-03-14T00:18:37.213178481Z" level=info msg="StopContainer for \"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\" with timeout 30 (s)" Mar 14 00:18:37.214088 containerd[1511]: time="2026-03-14T00:18:37.213871682Z" level=info msg="Stop container \"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\" with signal terminated" Mar 14 00:18:37.270487 systemd[1]: cri-containerd-615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0.scope: Deactivated successfully. 
Mar 14 00:18:37.279187 containerd[1511]: time="2026-03-14T00:18:37.279107386Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:18:37.292417 containerd[1511]: time="2026-03-14T00:18:37.292342426Z" level=info msg="StopContainer for \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\" with timeout 2 (s)" Mar 14 00:18:37.293766 containerd[1511]: time="2026-03-14T00:18:37.293655979Z" level=info msg="Stop container \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\" with signal terminated" Mar 14 00:18:37.305212 systemd-networkd[1401]: lxc_health: Link DOWN Mar 14 00:18:37.305226 systemd-networkd[1401]: lxc_health: Lost carrier Mar 14 00:18:37.339350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0-rootfs.mount: Deactivated successfully. Mar 14 00:18:37.340864 systemd[1]: cri-containerd-04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1.scope: Deactivated successfully. Mar 14 00:18:37.341217 systemd[1]: cri-containerd-04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1.scope: Consumed 5.919s CPU time. 
Mar 14 00:18:37.357065 containerd[1511]: time="2026-03-14T00:18:37.356545019Z" level=info msg="shim disconnected" id=615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0 namespace=k8s.io Mar 14 00:18:37.357065 containerd[1511]: time="2026-03-14T00:18:37.356664755Z" level=warning msg="cleaning up after shim disconnected" id=615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0 namespace=k8s.io Mar 14 00:18:37.357065 containerd[1511]: time="2026-03-14T00:18:37.356680436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:37.378790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1-rootfs.mount: Deactivated successfully. Mar 14 00:18:37.388611 containerd[1511]: time="2026-03-14T00:18:37.388356749Z" level=info msg="shim disconnected" id=04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1 namespace=k8s.io Mar 14 00:18:37.388873 containerd[1511]: time="2026-03-14T00:18:37.388624171Z" level=warning msg="cleaning up after shim disconnected" id=04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1 namespace=k8s.io Mar 14 00:18:37.388873 containerd[1511]: time="2026-03-14T00:18:37.388639971Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:37.389083 containerd[1511]: time="2026-03-14T00:18:37.389054749Z" level=info msg="StopContainer for \"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\" returns successfully" Mar 14 00:18:37.390596 containerd[1511]: time="2026-03-14T00:18:37.389747430Z" level=info msg="StopPodSandbox for \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\"" Mar 14 00:18:37.390596 containerd[1511]: time="2026-03-14T00:18:37.389768769Z" level=info msg="Container to stop \"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:18:37.391983 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62-shm.mount: Deactivated successfully. Mar 14 00:18:37.398084 systemd[1]: cri-containerd-d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62.scope: Deactivated successfully. Mar 14 00:18:37.415268 containerd[1511]: time="2026-03-14T00:18:37.415152209Z" level=info msg="StopContainer for \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\" returns successfully" Mar 14 00:18:37.415917 containerd[1511]: time="2026-03-14T00:18:37.415897388Z" level=info msg="StopPodSandbox for \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\"" Mar 14 00:18:37.415990 containerd[1511]: time="2026-03-14T00:18:37.415920398Z" level=info msg="Container to stop \"15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:18:37.415990 containerd[1511]: time="2026-03-14T00:18:37.415929128Z" level=info msg="Container to stop \"522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:18:37.415990 containerd[1511]: time="2026-03-14T00:18:37.415936567Z" level=info msg="Container to stop \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:18:37.415990 containerd[1511]: time="2026-03-14T00:18:37.415943937Z" level=info msg="Container to stop \"afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:18:37.415990 containerd[1511]: time="2026-03-14T00:18:37.415951127Z" level=info msg="Container to stop \"d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:18:37.417788 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12-shm.mount: Deactivated successfully. Mar 14 00:18:37.430177 systemd[1]: cri-containerd-222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12.scope: Deactivated successfully. Mar 14 00:18:37.433823 containerd[1511]: time="2026-03-14T00:18:37.433439867Z" level=info msg="shim disconnected" id=d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62 namespace=k8s.io Mar 14 00:18:37.433823 containerd[1511]: time="2026-03-14T00:18:37.433474396Z" level=warning msg="cleaning up after shim disconnected" id=d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62 namespace=k8s.io Mar 14 00:18:37.433823 containerd[1511]: time="2026-03-14T00:18:37.433481416Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:37.461539 containerd[1511]: time="2026-03-14T00:18:37.461321318Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:18:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:18:37.464008 containerd[1511]: time="2026-03-14T00:18:37.462431576Z" level=info msg="TearDown network for sandbox \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\" successfully" Mar 14 00:18:37.464008 containerd[1511]: time="2026-03-14T00:18:37.462445386Z" level=info msg="StopPodSandbox for \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\" returns successfully" Mar 14 00:18:37.471982 containerd[1511]: time="2026-03-14T00:18:37.471931151Z" level=info msg="shim disconnected" id=222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12 namespace=k8s.io Mar 14 00:18:37.472074 containerd[1511]: time="2026-03-14T00:18:37.471979669Z" level=warning msg="cleaning up after shim disconnected" id=222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12 namespace=k8s.io 
Mar 14 00:18:37.472074 containerd[1511]: time="2026-03-14T00:18:37.471992659Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:37.494885 containerd[1511]: time="2026-03-14T00:18:37.494833150Z" level=info msg="TearDown network for sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" successfully" Mar 14 00:18:37.494992 containerd[1511]: time="2026-03-14T00:18:37.494875629Z" level=info msg="StopPodSandbox for \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" returns successfully" Mar 14 00:18:37.593647 kubelet[2621]: I0314 00:18:37.593537 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-cilium-run\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.593647 kubelet[2621]: I0314 00:18:37.593647 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d11115d-b9eb-430a-b2a9-ef8cddd745cf-cilium-config-path\") pod \"9d11115d-b9eb-430a-b2a9-ef8cddd745cf\" (UID: \"9d11115d-b9eb-430a-b2a9-ef8cddd745cf\") " Mar 14 00:18:37.595270 kubelet[2621]: I0314 00:18:37.593695 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1560b43-82f7-40a9-ba90-539323b979cb-clustermesh-secrets\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.595270 kubelet[2621]: I0314 00:18:37.593729 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-lib-modules\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.595270 kubelet[2621]: I0314 
00:18:37.593767 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-hostproc\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.595270 kubelet[2621]: I0314 00:18:37.593800 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-xtables-lock\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.595270 kubelet[2621]: I0314 00:18:37.593838 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csbh9\" (UniqueName: \"kubernetes.io/projected/f1560b43-82f7-40a9-ba90-539323b979cb-kube-api-access-csbh9\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.595270 kubelet[2621]: I0314 00:18:37.593879 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-cilium-cgroup\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.595559 kubelet[2621]: I0314 00:18:37.593944 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:18:37.595559 kubelet[2621]: I0314 00:18:37.594002 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:18:37.595559 kubelet[2621]: I0314 00:18:37.594036 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-hostproc" (OuterVolumeSpecName: "hostproc") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:18:37.595559 kubelet[2621]: I0314 00:18:37.594069 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:18:37.600411 kubelet[2621]: I0314 00:18:37.599233 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1560b43-82f7-40a9-ba90-539323b979cb-hubble-tls\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.600411 kubelet[2621]: I0314 00:18:37.599294 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-cni-path\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.600411 kubelet[2621]: I0314 00:18:37.599319 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-host-proc-sys-kernel\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.600411 kubelet[2621]: I0314 00:18:37.599345 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-host-proc-sys-net\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.600411 kubelet[2621]: I0314 00:18:37.599425 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d67kl\" (UniqueName: \"kubernetes.io/projected/9d11115d-b9eb-430a-b2a9-ef8cddd745cf-kube-api-access-d67kl\") pod \"9d11115d-b9eb-430a-b2a9-ef8cddd745cf\" (UID: \"9d11115d-b9eb-430a-b2a9-ef8cddd745cf\") " Mar 14 00:18:37.600411 kubelet[2621]: I0314 00:18:37.599447 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-etc-cni-netd\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.600760 kubelet[2621]: I0314 00:18:37.599466 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-bpf-maps\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.600760 kubelet[2621]: I0314 00:18:37.599490 2621 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1560b43-82f7-40a9-ba90-539323b979cb-cilium-config-path\") pod \"f1560b43-82f7-40a9-ba90-539323b979cb\" (UID: \"f1560b43-82f7-40a9-ba90-539323b979cb\") " Mar 14 00:18:37.600760 kubelet[2621]: I0314 00:18:37.599545 2621 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-cilium-cgroup\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 00:18:37.600760 kubelet[2621]: I0314 00:18:37.599561 2621 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-lib-modules\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 00:18:37.600760 kubelet[2621]: I0314 00:18:37.599590 2621 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-hostproc\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 00:18:37.600760 kubelet[2621]: I0314 00:18:37.599605 2621 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-xtables-lock\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 
00:18:37.602727 kubelet[2621]: I0314 00:18:37.602581 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:18:37.605417 kubelet[2621]: I0314 00:18:37.605151 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1560b43-82f7-40a9-ba90-539323b979cb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:18:37.605417 kubelet[2621]: I0314 00:18:37.605269 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1560b43-82f7-40a9-ba90-539323b979cb-kube-api-access-csbh9" (OuterVolumeSpecName: "kube-api-access-csbh9") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "kube-api-access-csbh9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:18:37.605417 kubelet[2621]: I0314 00:18:37.605311 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:18:37.605417 kubelet[2621]: I0314 00:18:37.605339 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-cni-path" (OuterVolumeSpecName: "cni-path") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:18:37.605734 kubelet[2621]: I0314 00:18:37.605707 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:18:37.605871 kubelet[2621]: I0314 00:18:37.605852 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:18:37.606797 kubelet[2621]: I0314 00:18:37.606770 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:18:37.611450 kubelet[2621]: I0314 00:18:37.611317 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1560b43-82f7-40a9-ba90-539323b979cb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:18:37.613289 kubelet[2621]: I0314 00:18:37.613257 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1560b43-82f7-40a9-ba90-539323b979cb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f1560b43-82f7-40a9-ba90-539323b979cb" (UID: "f1560b43-82f7-40a9-ba90-539323b979cb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:18:37.614095 kubelet[2621]: I0314 00:18:37.614009 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d11115d-b9eb-430a-b2a9-ef8cddd745cf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9d11115d-b9eb-430a-b2a9-ef8cddd745cf" (UID: "9d11115d-b9eb-430a-b2a9-ef8cddd745cf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:18:37.616717 kubelet[2621]: I0314 00:18:37.616652 2621 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d11115d-b9eb-430a-b2a9-ef8cddd745cf-kube-api-access-d67kl" (OuterVolumeSpecName: "kube-api-access-d67kl") pod "9d11115d-b9eb-430a-b2a9-ef8cddd745cf" (UID: "9d11115d-b9eb-430a-b2a9-ef8cddd745cf"). InnerVolumeSpecName "kube-api-access-d67kl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:18:37.627308 systemd[1]: Removed slice kubepods-burstable-podf1560b43_82f7_40a9_ba90_539323b979cb.slice - libcontainer container kubepods-burstable-podf1560b43_82f7_40a9_ba90_539323b979cb.slice. Mar 14 00:18:37.627519 systemd[1]: kubepods-burstable-podf1560b43_82f7_40a9_ba90_539323b979cb.slice: Consumed 6.029s CPU time. Mar 14 00:18:37.631783 systemd[1]: Removed slice kubepods-besteffort-pod9d11115d_b9eb_430a_b2a9_ef8cddd745cf.slice - libcontainer container kubepods-besteffort-pod9d11115d_b9eb_430a_b2a9_ef8cddd745cf.slice. Mar 14 00:18:37.695772 kubelet[2621]: E0314 00:18:37.695726 2621 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 14 00:18:37.700224 kubelet[2621]: I0314 00:18:37.700160 2621 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-cilium-run\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 00:18:37.700224 kubelet[2621]: I0314 00:18:37.700194 2621 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d11115d-b9eb-430a-b2a9-ef8cddd745cf-cilium-config-path\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 00:18:37.700224 kubelet[2621]: I0314 00:18:37.700210 2621 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1560b43-82f7-40a9-ba90-539323b979cb-clustermesh-secrets\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 00:18:37.700224 kubelet[2621]: I0314 00:18:37.700228 2621 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-csbh9\" (UniqueName: \"kubernetes.io/projected/f1560b43-82f7-40a9-ba90-539323b979cb-kube-api-access-csbh9\") on node \"ci-4081-3-6-n-8ea3e741de\" 
DevicePath \"\"" Mar 14 00:18:37.700510 kubelet[2621]: I0314 00:18:37.700244 2621 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1560b43-82f7-40a9-ba90-539323b979cb-hubble-tls\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 00:18:37.700510 kubelet[2621]: I0314 00:18:37.700258 2621 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-cni-path\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 00:18:37.700510 kubelet[2621]: I0314 00:18:37.700271 2621 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 00:18:37.700510 kubelet[2621]: I0314 00:18:37.700289 2621 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-host-proc-sys-net\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 00:18:37.700510 kubelet[2621]: I0314 00:18:37.700303 2621 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d67kl\" (UniqueName: \"kubernetes.io/projected/9d11115d-b9eb-430a-b2a9-ef8cddd745cf-kube-api-access-d67kl\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 00:18:37.700510 kubelet[2621]: I0314 00:18:37.700316 2621 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-etc-cni-netd\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 00:18:37.700510 kubelet[2621]: I0314 00:18:37.700330 2621 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1560b43-82f7-40a9-ba90-539323b979cb-bpf-maps\") on node \"ci-4081-3-6-n-8ea3e741de\" 
DevicePath \"\"" Mar 14 00:18:37.700510 kubelet[2621]: I0314 00:18:37.700343 2621 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1560b43-82f7-40a9-ba90-539323b979cb-cilium-config-path\") on node \"ci-4081-3-6-n-8ea3e741de\" DevicePath \"\"" Mar 14 00:18:37.986402 kubelet[2621]: I0314 00:18:37.985276 2621 scope.go:117] "RemoveContainer" containerID="04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1" Mar 14 00:18:37.992389 containerd[1511]: time="2026-03-14T00:18:37.992309918Z" level=info msg="RemoveContainer for \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\"" Mar 14 00:18:38.005070 containerd[1511]: time="2026-03-14T00:18:38.005026415Z" level=info msg="RemoveContainer for \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\" returns successfully" Mar 14 00:18:38.006150 kubelet[2621]: I0314 00:18:38.006085 2621 scope.go:117] "RemoveContainer" containerID="522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e" Mar 14 00:18:38.009617 containerd[1511]: time="2026-03-14T00:18:38.009459783Z" level=info msg="RemoveContainer for \"522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e\"" Mar 14 00:18:38.022285 containerd[1511]: time="2026-03-14T00:18:38.022210563Z" level=info msg="RemoveContainer for \"522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e\" returns successfully" Mar 14 00:18:38.025896 kubelet[2621]: I0314 00:18:38.025836 2621 scope.go:117] "RemoveContainer" containerID="d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e" Mar 14 00:18:38.030154 containerd[1511]: time="2026-03-14T00:18:38.030115297Z" level=info msg="RemoveContainer for \"d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e\"" Mar 14 00:18:38.045423 containerd[1511]: time="2026-03-14T00:18:38.043929667Z" level=info msg="RemoveContainer for \"d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e\" returns 
successfully" Mar 14 00:18:38.045631 kubelet[2621]: I0314 00:18:38.044522 2621 scope.go:117] "RemoveContainer" containerID="afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110" Mar 14 00:18:38.048548 containerd[1511]: time="2026-03-14T00:18:38.048477213Z" level=info msg="RemoveContainer for \"afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110\"" Mar 14 00:18:38.056972 containerd[1511]: time="2026-03-14T00:18:38.056909981Z" level=info msg="RemoveContainer for \"afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110\" returns successfully" Mar 14 00:18:38.057213 kubelet[2621]: I0314 00:18:38.057175 2621 scope.go:117] "RemoveContainer" containerID="15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add" Mar 14 00:18:38.063074 containerd[1511]: time="2026-03-14T00:18:38.062952356Z" level=info msg="RemoveContainer for \"15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add\"" Mar 14 00:18:38.078701 containerd[1511]: time="2026-03-14T00:18:38.078175558Z" level=info msg="RemoveContainer for \"15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add\" returns successfully" Mar 14 00:18:38.078832 kubelet[2621]: I0314 00:18:38.078412 2621 scope.go:117] "RemoveContainer" containerID="04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1" Mar 14 00:18:38.079963 containerd[1511]: time="2026-03-14T00:18:38.079898321Z" level=error msg="ContainerStatus for \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\": not found" Mar 14 00:18:38.080668 kubelet[2621]: E0314 00:18:38.080601 2621 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\": not found" 
containerID="04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1" Mar 14 00:18:38.080668 kubelet[2621]: I0314 00:18:38.080626 2621 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1"} err="failed to get container status \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\": rpc error: code = NotFound desc = an error occurred when try to find container \"04f4a8286b1e282c0e9e5204271a2d99a7ccfae56347dcffa9b64e0e7d4e6da1\": not found" Mar 14 00:18:38.080668 kubelet[2621]: I0314 00:18:38.080654 2621 scope.go:117] "RemoveContainer" containerID="522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e" Mar 14 00:18:38.081845 containerd[1511]: time="2026-03-14T00:18:38.081644942Z" level=error msg="ContainerStatus for \"522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e\": not found" Mar 14 00:18:38.083460 kubelet[2621]: E0314 00:18:38.083431 2621 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e\": not found" containerID="522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e" Mar 14 00:18:38.083460 kubelet[2621]: I0314 00:18:38.083458 2621 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e"} err="failed to get container status \"522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e\": rpc error: code = NotFound desc = an error occurred when try to find container \"522df89deac3e129113e0c5ea99f5b5a5bf5569d598fd07ac5376dfddeee281e\": not found" Mar 14 
00:18:38.083599 kubelet[2621]: I0314 00:18:38.083472 2621 scope.go:117] "RemoveContainer" containerID="d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e" Mar 14 00:18:38.085074 containerd[1511]: time="2026-03-14T00:18:38.084968531Z" level=error msg="ContainerStatus for \"d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e\": not found" Mar 14 00:18:38.085121 kubelet[2621]: E0314 00:18:38.085070 2621 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e\": not found" containerID="d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e" Mar 14 00:18:38.085121 kubelet[2621]: I0314 00:18:38.085088 2621 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e"} err="failed to get container status \"d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8241cccb50b7d3ed403878c037ef72f674eb4ca022df1b09a6100192a5b793e\": not found" Mar 14 00:18:38.085121 kubelet[2621]: I0314 00:18:38.085108 2621 scope.go:117] "RemoveContainer" containerID="afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110" Mar 14 00:18:38.085470 kubelet[2621]: E0314 00:18:38.085332 2621 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110\": not found" containerID="afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110" Mar 14 00:18:38.085470 kubelet[2621]: I0314 00:18:38.085347 
2621 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110"} err="failed to get container status \"afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110\": rpc error: code = NotFound desc = an error occurred when try to find container \"afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110\": not found" Mar 14 00:18:38.085470 kubelet[2621]: I0314 00:18:38.085358 2621 scope.go:117] "RemoveContainer" containerID="15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add" Mar 14 00:18:38.085527 containerd[1511]: time="2026-03-14T00:18:38.085224484Z" level=error msg="ContainerStatus for \"afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"afd9ffde36d7e23ca07629fa1c3d4d7a3095b467a24dd62d8df74826a2852110\": not found" Mar 14 00:18:38.086785 containerd[1511]: time="2026-03-14T00:18:38.086471860Z" level=error msg="ContainerStatus for \"15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add\": not found" Mar 14 00:18:38.086829 kubelet[2621]: E0314 00:18:38.086575 2621 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add\": not found" containerID="15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add" Mar 14 00:18:38.086829 kubelet[2621]: I0314 00:18:38.086597 2621 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add"} err="failed to get container status 
\"15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add\": rpc error: code = NotFound desc = an error occurred when try to find container \"15795e03ab0ae928886b0d6aa36b61f314fd340747d0a472319ccd8b3c561add\": not found" Mar 14 00:18:38.086829 kubelet[2621]: I0314 00:18:38.086608 2621 scope.go:117] "RemoveContainer" containerID="615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0" Mar 14 00:18:38.087291 containerd[1511]: time="2026-03-14T00:18:38.087277318Z" level=info msg="RemoveContainer for \"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\"" Mar 14 00:18:38.092396 containerd[1511]: time="2026-03-14T00:18:38.092351389Z" level=info msg="RemoveContainer for \"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\" returns successfully" Mar 14 00:18:38.092608 kubelet[2621]: I0314 00:18:38.092545 2621 scope.go:117] "RemoveContainer" containerID="615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0" Mar 14 00:18:38.092806 containerd[1511]: time="2026-03-14T00:18:38.092785028Z" level=error msg="ContainerStatus for \"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\": not found" Mar 14 00:18:38.094388 kubelet[2621]: E0314 00:18:38.092868 2621 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\": not found" containerID="615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0" Mar 14 00:18:38.094388 kubelet[2621]: I0314 00:18:38.092882 2621 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0"} err="failed to get container status 
\"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\": rpc error: code = NotFound desc = an error occurred when try to find container \"615a5d3da8cc241cf0fad7c19389f5fc05d600b65290e7de9e55e639dd331aa0\": not found" Mar 14 00:18:38.245156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62-rootfs.mount: Deactivated successfully. Mar 14 00:18:38.245348 systemd[1]: var-lib-kubelet-pods-9d11115d\x2db9eb\x2d430a\x2db2a9\x2def8cddd745cf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd67kl.mount: Deactivated successfully. Mar 14 00:18:38.245510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12-rootfs.mount: Deactivated successfully. Mar 14 00:18:38.245655 systemd[1]: var-lib-kubelet-pods-f1560b43\x2d82f7\x2d40a9\x2dba90\x2d539323b979cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcsbh9.mount: Deactivated successfully. Mar 14 00:18:38.245799 systemd[1]: var-lib-kubelet-pods-f1560b43\x2d82f7\x2d40a9\x2dba90\x2d539323b979cb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 14 00:18:38.245942 systemd[1]: var-lib-kubelet-pods-f1560b43\x2d82f7\x2d40a9\x2dba90\x2d539323b979cb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 14 00:18:39.270845 sshd[4164]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:39.277162 systemd[1]: sshd@19-204.168.148.110:22-68.220.241.50:50212.service: Deactivated successfully. Mar 14 00:18:39.281041 systemd[1]: session-20.scope: Deactivated successfully. Mar 14 00:18:39.284408 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit. Mar 14 00:18:39.286769 systemd-logind[1490]: Removed session 20. 
Mar 14 00:18:39.323423 kubelet[2621]: I0314 00:18:39.322933 2621 setters.go:543] "Node became not ready" node="ci-4081-3-6-n-8ea3e741de" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T00:18:39Z","lastTransitionTime":"2026-03-14T00:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 14 00:18:39.407037 systemd[1]: Started sshd@20-204.168.148.110:22-68.220.241.50:50214.service - OpenSSH per-connection server daemon (68.220.241.50:50214). Mar 14 00:18:39.619146 kubelet[2621]: I0314 00:18:39.618814 2621 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d11115d-b9eb-430a-b2a9-ef8cddd745cf" path="/var/lib/kubelet/pods/9d11115d-b9eb-430a-b2a9-ef8cddd745cf/volumes" Mar 14 00:18:39.619995 kubelet[2621]: I0314 00:18:39.619959 2621 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1560b43-82f7-40a9-ba90-539323b979cb" path="/var/lib/kubelet/pods/f1560b43-82f7-40a9-ba90-539323b979cb/volumes" Mar 14 00:18:40.155457 sshd[4326]: Accepted publickey for core from 68.220.241.50 port 50214 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:40.156741 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:40.163010 systemd-logind[1490]: New session 21 of user core. Mar 14 00:18:40.171595 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 14 00:18:41.087554 systemd[1]: Created slice kubepods-burstable-podb7ceddda_48f5_46d5_80ba_045d60370b57.slice - libcontainer container kubepods-burstable-podb7ceddda_48f5_46d5_80ba_045d60370b57.slice. 
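The "Node became not ready" entry above logs the node's flipped `Ready` condition as inline JSON. A minimal Python sketch for pulling the failure reason out of such a condition object (the payload below is copied from the log entry, not fetched from an API):

```python
import json

# Condition JSON as recorded by kubelet's setters.go entry above.
raw = ('{"type":"Ready","status":"False",'
       '"lastHeartbeatTime":"2026-03-14T00:18:39Z",'
       '"lastTransitionTime":"2026-03-14T00:18:39Z",'
       '"reason":"KubeletNotReady",'
       '"message":"container runtime network not ready: NetworkReady=false '
       'reason:NetworkPluginNotReady message:Network plugin returns error: '
       'cni plugin not initialized"}')

cond = json.loads(raw)
# A node is Ready only when the Ready condition's status is the string "True".
node_ready = cond["type"] == "Ready" and cond["status"] == "True"
print(node_ready, cond["reason"])
```

Here the reason is `KubeletNotReady` because no CNI plugin is initialized yet — consistent with the repeated "Container runtime network not ready" errors elsewhere in this log, which clear once the Cilium pod created below comes up.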
Mar 14 00:18:41.221741 kubelet[2621]: I0314 00:18:41.221391 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7ceddda-48f5-46d5-80ba-045d60370b57-cilium-run\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.221741 kubelet[2621]: I0314 00:18:41.221428 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7ceddda-48f5-46d5-80ba-045d60370b57-bpf-maps\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.221741 kubelet[2621]: I0314 00:18:41.221443 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7ceddda-48f5-46d5-80ba-045d60370b57-xtables-lock\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.221741 kubelet[2621]: I0314 00:18:41.221458 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7ceddda-48f5-46d5-80ba-045d60370b57-clustermesh-secrets\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.221741 kubelet[2621]: I0314 00:18:41.221473 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b7ceddda-48f5-46d5-80ba-045d60370b57-cilium-ipsec-secrets\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.221741 kubelet[2621]: I0314 00:18:41.221488 2621 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7ceddda-48f5-46d5-80ba-045d60370b57-hostproc\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.222517 kubelet[2621]: I0314 00:18:41.221502 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7ceddda-48f5-46d5-80ba-045d60370b57-cilium-cgroup\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.222517 kubelet[2621]: I0314 00:18:41.221522 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7ceddda-48f5-46d5-80ba-045d60370b57-cni-path\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.222517 kubelet[2621]: I0314 00:18:41.221539 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7ceddda-48f5-46d5-80ba-045d60370b57-lib-modules\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.222517 kubelet[2621]: I0314 00:18:41.221556 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq7qb\" (UniqueName: \"kubernetes.io/projected/b7ceddda-48f5-46d5-80ba-045d60370b57-kube-api-access-tq7qb\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.222517 kubelet[2621]: I0314 00:18:41.221571 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b7ceddda-48f5-46d5-80ba-045d60370b57-etc-cni-netd\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.222517 kubelet[2621]: I0314 00:18:41.221585 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7ceddda-48f5-46d5-80ba-045d60370b57-host-proc-sys-kernel\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.222728 kubelet[2621]: I0314 00:18:41.221599 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7ceddda-48f5-46d5-80ba-045d60370b57-host-proc-sys-net\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.222728 kubelet[2621]: I0314 00:18:41.221613 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7ceddda-48f5-46d5-80ba-045d60370b57-hubble-tls\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.222728 kubelet[2621]: I0314 00:18:41.221632 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7ceddda-48f5-46d5-80ba-045d60370b57-cilium-config-path\") pod \"cilium-4pb22\" (UID: \"b7ceddda-48f5-46d5-80ba-045d60370b57\") " pod="kube-system/cilium-4pb22" Mar 14 00:18:41.228450 sshd[4326]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:41.234488 systemd[1]: sshd@20-204.168.148.110:22-68.220.241.50:50214.service: Deactivated successfully. Mar 14 00:18:41.238047 systemd[1]: session-21.scope: Deactivated successfully. 
Mar 14 00:18:41.239133 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit. Mar 14 00:18:41.240426 systemd-logind[1490]: Removed session 21. Mar 14 00:18:41.377644 systemd[1]: Started sshd@21-204.168.148.110:22-68.220.241.50:50222.service - OpenSSH per-connection server daemon (68.220.241.50:50222). Mar 14 00:18:41.395177 containerd[1511]: time="2026-03-14T00:18:41.394663719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4pb22,Uid:b7ceddda-48f5-46d5-80ba-045d60370b57,Namespace:kube-system,Attempt:0,}" Mar 14 00:18:41.427712 containerd[1511]: time="2026-03-14T00:18:41.427299155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:18:41.427712 containerd[1511]: time="2026-03-14T00:18:41.427387872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:18:41.427712 containerd[1511]: time="2026-03-14T00:18:41.427404692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:41.427712 containerd[1511]: time="2026-03-14T00:18:41.427517439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:41.451591 systemd[1]: Started cri-containerd-0f26c80ced6b1ad843b2ec66e55deed7ac51baf048e81f10d7e5d71b6c6d02da.scope - libcontainer container 0f26c80ced6b1ad843b2ec66e55deed7ac51baf048e81f10d7e5d71b6c6d02da. 
Mar 14 00:18:41.476096 containerd[1511]: time="2026-03-14T00:18:41.475994004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4pb22,Uid:b7ceddda-48f5-46d5-80ba-045d60370b57,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f26c80ced6b1ad843b2ec66e55deed7ac51baf048e81f10d7e5d71b6c6d02da\"" Mar 14 00:18:41.481654 containerd[1511]: time="2026-03-14T00:18:41.481616879Z" level=info msg="CreateContainer within sandbox \"0f26c80ced6b1ad843b2ec66e55deed7ac51baf048e81f10d7e5d71b6c6d02da\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:18:41.490947 containerd[1511]: time="2026-03-14T00:18:41.490847060Z" level=info msg="CreateContainer within sandbox \"0f26c80ced6b1ad843b2ec66e55deed7ac51baf048e81f10d7e5d71b6c6d02da\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"20f6f3e04ab2cc9605eb400b3d6562e5fb674a14880419028c651cb70fac7b9a\"" Mar 14 00:18:41.491747 containerd[1511]: time="2026-03-14T00:18:41.491719957Z" level=info msg="StartContainer for \"20f6f3e04ab2cc9605eb400b3d6562e5fb674a14880419028c651cb70fac7b9a\"" Mar 14 00:18:41.513532 systemd[1]: Started cri-containerd-20f6f3e04ab2cc9605eb400b3d6562e5fb674a14880419028c651cb70fac7b9a.scope - libcontainer container 20f6f3e04ab2cc9605eb400b3d6562e5fb674a14880419028c651cb70fac7b9a. Mar 14 00:18:41.535756 containerd[1511]: time="2026-03-14T00:18:41.535671120Z" level=info msg="StartContainer for \"20f6f3e04ab2cc9605eb400b3d6562e5fb674a14880419028c651cb70fac7b9a\" returns successfully" Mar 14 00:18:41.545412 systemd[1]: cri-containerd-20f6f3e04ab2cc9605eb400b3d6562e5fb674a14880419028c651cb70fac7b9a.scope: Deactivated successfully. 
Mar 14 00:18:41.578205 containerd[1511]: time="2026-03-14T00:18:41.578151070Z" level=info msg="shim disconnected" id=20f6f3e04ab2cc9605eb400b3d6562e5fb674a14880419028c651cb70fac7b9a namespace=k8s.io Mar 14 00:18:41.578205 containerd[1511]: time="2026-03-14T00:18:41.578197989Z" level=warning msg="cleaning up after shim disconnected" id=20f6f3e04ab2cc9605eb400b3d6562e5fb674a14880419028c651cb70fac7b9a namespace=k8s.io Mar 14 00:18:41.578205 containerd[1511]: time="2026-03-14T00:18:41.578205429Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:42.021775 containerd[1511]: time="2026-03-14T00:18:42.021667764Z" level=info msg="CreateContainer within sandbox \"0f26c80ced6b1ad843b2ec66e55deed7ac51baf048e81f10d7e5d71b6c6d02da\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:18:42.035983 containerd[1511]: time="2026-03-14T00:18:42.035670699Z" level=info msg="CreateContainer within sandbox \"0f26c80ced6b1ad843b2ec66e55deed7ac51baf048e81f10d7e5d71b6c6d02da\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eed5636f62749f749e702495aa2c5f46bb87fd72963e1ae452ee562a5e90ad00\"" Mar 14 00:18:42.037667 containerd[1511]: time="2026-03-14T00:18:42.037631659Z" level=info msg="StartContainer for \"eed5636f62749f749e702495aa2c5f46bb87fd72963e1ae452ee562a5e90ad00\"" Mar 14 00:18:42.090629 systemd[1]: Started cri-containerd-eed5636f62749f749e702495aa2c5f46bb87fd72963e1ae452ee562a5e90ad00.scope - libcontainer container eed5636f62749f749e702495aa2c5f46bb87fd72963e1ae452ee562a5e90ad00. Mar 14 00:18:42.112985 sshd[4341]: Accepted publickey for core from 68.220.241.50 port 50222 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:42.115618 sshd[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:42.131604 systemd-logind[1490]: New session 22 of user core. 
Mar 14 00:18:42.140277 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 14 00:18:42.151823 containerd[1511]: time="2026-03-14T00:18:42.150670368Z" level=info msg="StartContainer for \"eed5636f62749f749e702495aa2c5f46bb87fd72963e1ae452ee562a5e90ad00\" returns successfully" Mar 14 00:18:42.164421 systemd[1]: cri-containerd-eed5636f62749f749e702495aa2c5f46bb87fd72963e1ae452ee562a5e90ad00.scope: Deactivated successfully. Mar 14 00:18:42.194965 containerd[1511]: time="2026-03-14T00:18:42.194867466Z" level=info msg="shim disconnected" id=eed5636f62749f749e702495aa2c5f46bb87fd72963e1ae452ee562a5e90ad00 namespace=k8s.io Mar 14 00:18:42.194965 containerd[1511]: time="2026-03-14T00:18:42.194942484Z" level=warning msg="cleaning up after shim disconnected" id=eed5636f62749f749e702495aa2c5f46bb87fd72963e1ae452ee562a5e90ad00 namespace=k8s.io Mar 14 00:18:42.194965 containerd[1511]: time="2026-03-14T00:18:42.194952184Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:42.633692 sshd[4341]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:42.639772 systemd[1]: sshd@21-204.168.148.110:22-68.220.241.50:50222.service: Deactivated successfully. Mar 14 00:18:42.644647 systemd[1]: session-22.scope: Deactivated successfully. Mar 14 00:18:42.647948 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit. Mar 14 00:18:42.650675 systemd-logind[1490]: Removed session 22. Mar 14 00:18:42.697892 kubelet[2621]: E0314 00:18:42.697804 2621 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 14 00:18:42.770784 systemd[1]: Started sshd@22-204.168.148.110:22-68.220.241.50:57894.service - OpenSSH per-connection server daemon (68.220.241.50:57894). 
Mar 14 00:18:43.022219 containerd[1511]: time="2026-03-14T00:18:43.022162610Z" level=info msg="CreateContainer within sandbox \"0f26c80ced6b1ad843b2ec66e55deed7ac51baf048e81f10d7e5d71b6c6d02da\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 14 00:18:43.039636 containerd[1511]: time="2026-03-14T00:18:43.039585876Z" level=info msg="CreateContainer within sandbox \"0f26c80ced6b1ad843b2ec66e55deed7ac51baf048e81f10d7e5d71b6c6d02da\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e4872e5f56db5afd16fe291348f91675875e715d6577219751f0eeb8064bd2ef\"" Mar 14 00:18:43.050145 containerd[1511]: time="2026-03-14T00:18:43.045218866Z" level=info msg="StartContainer for \"e4872e5f56db5afd16fe291348f91675875e715d6577219751f0eeb8064bd2ef\"" Mar 14 00:18:43.104858 systemd[1]: Started cri-containerd-e4872e5f56db5afd16fe291348f91675875e715d6577219751f0eeb8064bd2ef.scope - libcontainer container e4872e5f56db5afd16fe291348f91675875e715d6577219751f0eeb8064bd2ef. Mar 14 00:18:43.174045 containerd[1511]: time="2026-03-14T00:18:43.173863990Z" level=info msg="StartContainer for \"e4872e5f56db5afd16fe291348f91675875e715d6577219751f0eeb8064bd2ef\" returns successfully" Mar 14 00:18:43.184233 systemd[1]: cri-containerd-e4872e5f56db5afd16fe291348f91675875e715d6577219751f0eeb8064bd2ef.scope: Deactivated successfully. Mar 14 00:18:43.224601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4872e5f56db5afd16fe291348f91675875e715d6577219751f0eeb8064bd2ef-rootfs.mount: Deactivated successfully. 
Mar 14 00:18:43.228991 containerd[1511]: time="2026-03-14T00:18:43.228917598Z" level=info msg="shim disconnected" id=e4872e5f56db5afd16fe291348f91675875e715d6577219751f0eeb8064bd2ef namespace=k8s.io Mar 14 00:18:43.229233 containerd[1511]: time="2026-03-14T00:18:43.229187592Z" level=warning msg="cleaning up after shim disconnected" id=e4872e5f56db5afd16fe291348f91675875e715d6577219751f0eeb8064bd2ef namespace=k8s.io Mar 14 00:18:43.229233 containerd[1511]: time="2026-03-14T00:18:43.229216961Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:43.541981 sshd[4516]: Accepted publickey for core from 68.220.241.50 port 57894 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:43.545005 sshd[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:43.553842 systemd-logind[1490]: New session 23 of user core. Mar 14 00:18:43.560645 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 14 00:18:44.050279 containerd[1511]: time="2026-03-14T00:18:44.050206824Z" level=info msg="CreateContainer within sandbox \"0f26c80ced6b1ad843b2ec66e55deed7ac51baf048e81f10d7e5d71b6c6d02da\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 14 00:18:44.081055 containerd[1511]: time="2026-03-14T00:18:44.080680229Z" level=info msg="CreateContainer within sandbox \"0f26c80ced6b1ad843b2ec66e55deed7ac51baf048e81f10d7e5d71b6c6d02da\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0e0d5c267ecfa8b91daa7308a6bba931adada1819baeea15a88b97eb730dc788\"" Mar 14 00:18:44.081115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950775171.mount: Deactivated successfully. 
Mar 14 00:18:44.084500 containerd[1511]: time="2026-03-14T00:18:44.083629347Z" level=info msg="StartContainer for \"0e0d5c267ecfa8b91daa7308a6bba931adada1819baeea15a88b97eb730dc788\""
Mar 14 00:18:44.151089 systemd[1]: Started cri-containerd-0e0d5c267ecfa8b91daa7308a6bba931adada1819baeea15a88b97eb730dc788.scope - libcontainer container 0e0d5c267ecfa8b91daa7308a6bba931adada1819baeea15a88b97eb730dc788.
Mar 14 00:18:44.194065 systemd[1]: cri-containerd-0e0d5c267ecfa8b91daa7308a6bba931adada1819baeea15a88b97eb730dc788.scope: Deactivated successfully.
Mar 14 00:18:44.198173 containerd[1511]: time="2026-03-14T00:18:44.198094116Z" level=info msg="StartContainer for \"0e0d5c267ecfa8b91daa7308a6bba931adada1819baeea15a88b97eb730dc788\" returns successfully"
Mar 14 00:18:44.236814 containerd[1511]: time="2026-03-14T00:18:44.236438838Z" level=info msg="shim disconnected" id=0e0d5c267ecfa8b91daa7308a6bba931adada1819baeea15a88b97eb730dc788 namespace=k8s.io
Mar 14 00:18:44.236814 containerd[1511]: time="2026-03-14T00:18:44.236528886Z" level=warning msg="cleaning up after shim disconnected" id=0e0d5c267ecfa8b91daa7308a6bba931adada1819baeea15a88b97eb730dc788 namespace=k8s.io
Mar 14 00:18:44.236814 containerd[1511]: time="2026-03-14T00:18:44.236545816Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:18:45.044068 containerd[1511]: time="2026-03-14T00:18:45.044027160Z" level=info msg="CreateContainer within sandbox \"0f26c80ced6b1ad843b2ec66e55deed7ac51baf048e81f10d7e5d71b6c6d02da\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:18:45.067392 containerd[1511]: time="2026-03-14T00:18:45.067162714Z" level=info msg="CreateContainer within sandbox \"0f26c80ced6b1ad843b2ec66e55deed7ac51baf048e81f10d7e5d71b6c6d02da\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fae7591662c097c60b54c871c1ba431413a24cf2e25ae076432c8735833ad32b\""
Mar 14 00:18:45.069622 containerd[1511]: time="2026-03-14T00:18:45.069546577Z" level=info msg="StartContainer for \"fae7591662c097c60b54c871c1ba431413a24cf2e25ae076432c8735833ad32b\""
Mar 14 00:18:45.072100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e0d5c267ecfa8b91daa7308a6bba931adada1819baeea15a88b97eb730dc788-rootfs.mount: Deactivated successfully.
Mar 14 00:18:45.109528 systemd[1]: Started cri-containerd-fae7591662c097c60b54c871c1ba431413a24cf2e25ae076432c8735833ad32b.scope - libcontainer container fae7591662c097c60b54c871c1ba431413a24cf2e25ae076432c8735833ad32b.
Mar 14 00:18:45.156666 containerd[1511]: time="2026-03-14T00:18:45.156505948Z" level=info msg="StartContainer for \"fae7591662c097c60b54c871c1ba431413a24cf2e25ae076432c8735833ad32b\" returns successfully"
Mar 14 00:18:45.535404 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 14 00:18:46.065779 kubelet[2621]: I0314 00:18:46.065429 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4pb22" podStartSLOduration=5.065409953 podStartE2EDuration="5.065409953s" podCreationTimestamp="2026-03-14 00:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:18:46.064944124 +0000 UTC m=+138.529335219" watchObservedRunningTime="2026-03-14 00:18:46.065409953 +0000 UTC m=+138.529801048"
Mar 14 00:18:46.193628 systemd[1]: run-containerd-runc-k8s.io-fae7591662c097c60b54c871c1ba431413a24cf2e25ae076432c8735833ad32b-runc.DkgA3o.mount: Deactivated successfully.
Mar 14 00:18:46.240332 kubelet[2621]: E0314 00:18:46.240291 2621 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39836->127.0.0.1:39371: write tcp 127.0.0.1:39836->127.0.0.1:39371: write: broken pipe
Mar 14 00:18:48.346137 systemd[1]: run-containerd-runc-k8s.io-fae7591662c097c60b54c871c1ba431413a24cf2e25ae076432c8735833ad32b-runc.A8oZB7.mount: Deactivated successfully.
Mar 14 00:18:48.752283 systemd-networkd[1401]: lxc_health: Link UP
Mar 14 00:18:48.759175 systemd-networkd[1401]: lxc_health: Gained carrier
Mar 14 00:18:50.795811 systemd-networkd[1401]: lxc_health: Gained IPv6LL
Mar 14 00:18:54.868879 kubelet[2621]: E0314 00:18:54.868818 2621 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41712->127.0.0.1:39371: write tcp 127.0.0.1:41712->127.0.0.1:39371: write: broken pipe
Mar 14 00:18:55.012907 sshd[4516]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:55.020464 systemd[1]: sshd@22-204.168.148.110:22-68.220.241.50:57894.service: Deactivated successfully.
Mar 14 00:18:55.024825 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:18:55.027888 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:18:55.030490 systemd-logind[1490]: Removed session 23.
Mar 14 00:19:11.435602 systemd[1]: cri-containerd-ed01bfe9f0672cb677eba8deb5af06adf09644041629ceb85e4aa5d0bd2fa3bd.scope: Deactivated successfully.
Mar 14 00:19:11.436019 systemd[1]: cri-containerd-ed01bfe9f0672cb677eba8deb5af06adf09644041629ceb85e4aa5d0bd2fa3bd.scope: Consumed 4.566s CPU time, 18.0M memory peak, 0B memory swap peak.
Mar 14 00:19:11.477696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed01bfe9f0672cb677eba8deb5af06adf09644041629ceb85e4aa5d0bd2fa3bd-rootfs.mount: Deactivated successfully.
Mar 14 00:19:11.481419 containerd[1511]: time="2026-03-14T00:19:11.481300178Z" level=info msg="shim disconnected" id=ed01bfe9f0672cb677eba8deb5af06adf09644041629ceb85e4aa5d0bd2fa3bd namespace=k8s.io
Mar 14 00:19:11.481419 containerd[1511]: time="2026-03-14T00:19:11.481406897Z" level=warning msg="cleaning up after shim disconnected" id=ed01bfe9f0672cb677eba8deb5af06adf09644041629ceb85e4aa5d0bd2fa3bd namespace=k8s.io
Mar 14 00:19:11.482125 containerd[1511]: time="2026-03-14T00:19:11.481426086Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:19:11.728550 kubelet[2621]: E0314 00:19:11.726553 2621 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45862->10.0.0.2:2379: read: connection timed out"
Mar 14 00:19:12.111462 kubelet[2621]: I0314 00:19:12.111250 2621 scope.go:117] "RemoveContainer" containerID="ed01bfe9f0672cb677eba8deb5af06adf09644041629ceb85e4aa5d0bd2fa3bd"
Mar 14 00:19:12.114090 containerd[1511]: time="2026-03-14T00:19:12.113938185Z" level=info msg="CreateContainer within sandbox \"d26abe2670dd1acb265ace1bb6c8736aae30ba3d821659c979f399e73eec6318\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 14 00:19:12.138422 containerd[1511]: time="2026-03-14T00:19:12.138102315Z" level=info msg="CreateContainer within sandbox \"d26abe2670dd1acb265ace1bb6c8736aae30ba3d821659c979f399e73eec6318\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"623f2c8ad2fadc9f61201110f718fa7ba3e57260e8178aad8ba7fe2028a3a2dc\""
Mar 14 00:19:12.139505 containerd[1511]: time="2026-03-14T00:19:12.139431154Z" level=info msg="StartContainer for \"623f2c8ad2fadc9f61201110f718fa7ba3e57260e8178aad8ba7fe2028a3a2dc\""
Mar 14 00:19:12.205676 systemd[1]: Started cri-containerd-623f2c8ad2fadc9f61201110f718fa7ba3e57260e8178aad8ba7fe2028a3a2dc.scope - libcontainer container 623f2c8ad2fadc9f61201110f718fa7ba3e57260e8178aad8ba7fe2028a3a2dc.
Mar 14 00:19:12.287349 containerd[1511]: time="2026-03-14T00:19:12.287162681Z" level=info msg="StartContainer for \"623f2c8ad2fadc9f61201110f718fa7ba3e57260e8178aad8ba7fe2028a3a2dc\" returns successfully"
Mar 14 00:19:13.901575 kubelet[2621]: E0314 00:19:13.900695 2621 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45514->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-8ea3e741de.189c8d25082821a4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-8ea3e741de,UID:22d85efd360838aa5e250374be4ff28b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-8ea3e741de,},FirstTimestamp:2026-03-14 00:19:03.467286948 +0000 UTC m=+155.931678003,LastTimestamp:2026-03-14 00:19:03.467286948 +0000 UTC m=+155.931678003,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-8ea3e741de,}"
Mar 14 00:19:16.665036 systemd[1]: cri-containerd-038eac9ec53dd7f10c70b4d72006a1b5252cf9c09458294b782f890c24e057e5.scope: Deactivated successfully.
Mar 14 00:19:16.665344 systemd[1]: cri-containerd-038eac9ec53dd7f10c70b4d72006a1b5252cf9c09458294b782f890c24e057e5.scope: Consumed 2.462s CPU time, 16.0M memory peak, 0B memory swap peak.
Mar 14 00:19:16.695310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-038eac9ec53dd7f10c70b4d72006a1b5252cf9c09458294b782f890c24e057e5-rootfs.mount: Deactivated successfully.
Mar 14 00:19:16.701108 containerd[1511]: time="2026-03-14T00:19:16.700998879Z" level=info msg="shim disconnected" id=038eac9ec53dd7f10c70b4d72006a1b5252cf9c09458294b782f890c24e057e5 namespace=k8s.io
Mar 14 00:19:16.702342 containerd[1511]: time="2026-03-14T00:19:16.701091467Z" level=warning msg="cleaning up after shim disconnected" id=038eac9ec53dd7f10c70b4d72006a1b5252cf9c09458294b782f890c24e057e5 namespace=k8s.io
Mar 14 00:19:16.702342 containerd[1511]: time="2026-03-14T00:19:16.701130016Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:19:17.131193 kubelet[2621]: I0314 00:19:17.131116 2621 scope.go:117] "RemoveContainer" containerID="038eac9ec53dd7f10c70b4d72006a1b5252cf9c09458294b782f890c24e057e5"
Mar 14 00:19:17.133876 containerd[1511]: time="2026-03-14T00:19:17.133807317Z" level=info msg="CreateContainer within sandbox \"933769a4d50a937b7e38ecbd27b44568791eeb7a496e8d20f5124013573514a5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 14 00:19:17.155474 containerd[1511]: time="2026-03-14T00:19:17.155101702Z" level=info msg="CreateContainer within sandbox \"933769a4d50a937b7e38ecbd27b44568791eeb7a496e8d20f5124013573514a5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"605207aa478935b10cc3d21bbb07434894f736b22d7c97c4ddd39ac3b938377d\""
Mar 14 00:19:17.156496 containerd[1511]: time="2026-03-14T00:19:17.156178166Z" level=info msg="StartContainer for \"605207aa478935b10cc3d21bbb07434894f736b22d7c97c4ddd39ac3b938377d\""
Mar 14 00:19:17.156723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1921654545.mount: Deactivated successfully.
Mar 14 00:19:17.220640 systemd[1]: Started cri-containerd-605207aa478935b10cc3d21bbb07434894f736b22d7c97c4ddd39ac3b938377d.scope - libcontainer container 605207aa478935b10cc3d21bbb07434894f736b22d7c97c4ddd39ac3b938377d.
Mar 14 00:19:17.295063 containerd[1511]: time="2026-03-14T00:19:17.294844467Z" level=info msg="StartContainer for \"605207aa478935b10cc3d21bbb07434894f736b22d7c97c4ddd39ac3b938377d\" returns successfully"
Mar 14 00:19:17.692150 systemd[1]: run-containerd-runc-k8s.io-605207aa478935b10cc3d21bbb07434894f736b22d7c97c4ddd39ac3b938377d-runc.iY6fFP.mount: Deactivated successfully.
Mar 14 00:19:21.728063 kubelet[2621]: E0314 00:19:21.727723 2621 controller.go:195] "Failed to update lease" err="Put \"https://204.168.148.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8ea3e741de?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 14 00:19:27.620085 containerd[1511]: time="2026-03-14T00:19:27.619846172Z" level=info msg="StopPodSandbox for \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\""
Mar 14 00:19:27.620085 containerd[1511]: time="2026-03-14T00:19:27.619991610Z" level=info msg="TearDown network for sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" successfully"
Mar 14 00:19:27.620085 containerd[1511]: time="2026-03-14T00:19:27.620010740Z" level=info msg="StopPodSandbox for \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" returns successfully"
Mar 14 00:19:27.623406 containerd[1511]: time="2026-03-14T00:19:27.621662376Z" level=info msg="RemovePodSandbox for \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\""
Mar 14 00:19:27.623406 containerd[1511]: time="2026-03-14T00:19:27.621703947Z" level=info msg="Forcibly stopping sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\""
Mar 14 00:19:27.623406 containerd[1511]: time="2026-03-14T00:19:27.621800045Z" level=info msg="TearDown network for sandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" successfully"
Mar 14 00:19:27.634709 containerd[1511]: time="2026-03-14T00:19:27.634414789Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:19:27.634860 containerd[1511]: time="2026-03-14T00:19:27.634739464Z" level=info msg="RemovePodSandbox \"222d6ec9af758ea82bcd4f9ad6e36ddfd5f890b5c36242c257bc781b42b78a12\" returns successfully"
Mar 14 00:19:27.635419 containerd[1511]: time="2026-03-14T00:19:27.635324507Z" level=info msg="StopPodSandbox for \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\""
Mar 14 00:19:27.635567 containerd[1511]: time="2026-03-14T00:19:27.635526984Z" level=info msg="TearDown network for sandbox \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\" successfully"
Mar 14 00:19:27.635640 containerd[1511]: time="2026-03-14T00:19:27.635563103Z" level=info msg="StopPodSandbox for \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\" returns successfully"
Mar 14 00:19:27.636154 containerd[1511]: time="2026-03-14T00:19:27.636116756Z" level=info msg="RemovePodSandbox for \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\""
Mar 14 00:19:27.636330 containerd[1511]: time="2026-03-14T00:19:27.636279294Z" level=info msg="Forcibly stopping sandbox \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\""
Mar 14 00:19:27.636480 containerd[1511]: time="2026-03-14T00:19:27.636423722Z" level=info msg="TearDown network for sandbox \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\" successfully"
Mar 14 00:19:27.642097 containerd[1511]: time="2026-03-14T00:19:27.642016773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:19:27.642097 containerd[1511]: time="2026-03-14T00:19:27.642086862Z" level=info msg="RemovePodSandbox \"d771f14d41b334c6f85217eba0792cb92c5eb9e5d26dc11856f226a27d525d62\" returns successfully"
Mar 14 00:19:31.730233 kubelet[2621]: E0314 00:19:31.728684 2621 controller.go:195] "Failed to update lease" err="Put \"https://204.168.148.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8ea3e741de?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"