Mar 14 00:16:14.060618 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:16:14.060640 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:16:14.060650 kernel: BIOS-provided physical RAM map:
Mar 14 00:16:14.060656 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 14 00:16:14.060661 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Mar 14 00:16:14.060666 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Mar 14 00:16:14.060671 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Mar 14 00:16:14.060682 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Mar 14 00:16:14.060687 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Mar 14 00:16:14.060692 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Mar 14 00:16:14.060697 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Mar 14 00:16:14.060705 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Mar 14 00:16:14.060710 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Mar 14 00:16:14.060715 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Mar 14 00:16:14.060721 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 14 00:16:14.060727 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 14 00:16:14.060735 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 14 00:16:14.060740 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Mar 14 00:16:14.060745 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 14 00:16:14.060750 kernel: NX (Execute Disable) protection: active
Mar 14 00:16:14.060755 kernel: APIC: Static calls initialized
Mar 14 00:16:14.060761 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 14 00:16:14.060766 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e84f198
Mar 14 00:16:14.060771 kernel: efi: Remove mem137: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 14 00:16:14.060776 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 14 00:16:14.060782 kernel: SMBIOS 3.0.0 present.
Mar 14 00:16:14.060787 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Mar 14 00:16:14.060792 kernel: Hypervisor detected: KVM
Mar 14 00:16:14.060800 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:16:14.060805 kernel: kvm-clock: using sched offset of 12804596918 cycles
Mar 14 00:16:14.060811 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:16:14.060816 kernel: tsc: Detected 2396.400 MHz processor
Mar 14 00:16:14.060822 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:16:14.060827 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:16:14.060833 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Mar 14 00:16:14.060838 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 14 00:16:14.060844 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:16:14.060852 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Mar 14 00:16:14.060857 kernel: Using GB pages for direct mapping
Mar 14 00:16:14.060863 kernel: Secure boot disabled
Mar 14 00:16:14.060872 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:16:14.060877 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Mar 14 00:16:14.060883 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 14 00:16:14.060889 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:16:14.060927 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:16:14.060933 kernel: ACPI: FACS 0x000000007FBDD000 000040
Mar 14 00:16:14.060938 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:16:14.060944 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:16:14.060950 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:16:14.060955 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:16:14.060961 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 14 00:16:14.060969 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Mar 14 00:16:14.060975 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Mar 14 00:16:14.060981 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Mar 14 00:16:14.060986 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Mar 14 00:16:14.060992 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Mar 14 00:16:14.060997 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Mar 14 00:16:14.061003 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Mar 14 00:16:14.061008 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Mar 14 00:16:14.061014 kernel: No NUMA configuration found
Mar 14 00:16:14.061022 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Mar 14 00:16:14.061028 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Mar 14 00:16:14.061033 kernel: Zone ranges:
Mar 14 00:16:14.061039 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:16:14.061044 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 14 00:16:14.061050 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Mar 14 00:16:14.061055 kernel: Movable zone start for each node
Mar 14 00:16:14.061061 kernel: Early memory node ranges
Mar 14 00:16:14.061067 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 14 00:16:14.061072 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Mar 14 00:16:14.061081 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Mar 14 00:16:14.061086 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Mar 14 00:16:14.061092 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Mar 14 00:16:14.061097 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Mar 14 00:16:14.061103 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:16:14.061109 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 14 00:16:14.061115 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 14 00:16:14.061120 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 14 00:16:14.061126 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Mar 14 00:16:14.061161 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 14 00:16:14.061167 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 14 00:16:14.061172 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:16:14.061178 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 14 00:16:14.061184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 14 00:16:14.061189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:16:14.061195 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:16:14.061200 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:16:14.061206 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:16:14.061215 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:16:14.061220 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 14 00:16:14.061226 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 14 00:16:14.061232 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 00:16:14.061237 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Mar 14 00:16:14.061243 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:16:14.061248 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:16:14.061254 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 14 00:16:14.061260 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 14 00:16:14.061268 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 14 00:16:14.061273 kernel: pcpu-alloc: [0] 0 1
Mar 14 00:16:14.061279 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 14 00:16:14.061286 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:16:14.061292 kernel: random: crng init done
Mar 14 00:16:14.061297 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:16:14.061303 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:16:14.061309 kernel: Fallback order for Node 0: 0
Mar 14 00:16:14.061317 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Mar 14 00:16:14.061323 kernel: Policy zone: Normal
Mar 14 00:16:14.061329 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:16:14.061335 kernel: software IO TLB: area num 2.
Mar 14 00:16:14.061340 kernel: Memory: 3819396K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 271568K reserved, 0K cma-reserved)
Mar 14 00:16:14.061346 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:16:14.061352 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:16:14.061357 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:16:14.061363 kernel: Dynamic Preempt: voluntary
Mar 14 00:16:14.061371 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:16:14.061378 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:16:14.061384 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:16:14.061390 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:16:14.061403 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:16:14.061412 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:16:14.061417 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:16:14.061424 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:16:14.061429 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 14 00:16:14.061435 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:16:14.061441 kernel: Console: colour dummy device 80x25
Mar 14 00:16:14.061447 kernel: printk: console [tty0] enabled
Mar 14 00:16:14.061456 kernel: printk: console [ttyS0] enabled
Mar 14 00:16:14.061461 kernel: ACPI: Core revision 20230628
Mar 14 00:16:14.061468 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 14 00:16:14.061473 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:16:14.061479 kernel: x2apic enabled
Mar 14 00:16:14.061488 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:16:14.061494 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 14 00:16:14.061500 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 14 00:16:14.061505 kernel: Calibrating delay loop (skipped) preset value.. 4792.80 BogoMIPS (lpj=2396400)
Mar 14 00:16:14.061511 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 14 00:16:14.061517 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 14 00:16:14.061523 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 14 00:16:14.061529 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:16:14.061535 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Mar 14 00:16:14.061544 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 14 00:16:14.061550 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 14 00:16:14.061555 kernel: active return thunk: srso_alias_return_thunk
Mar 14 00:16:14.061561 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Mar 14 00:16:14.061567 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 14 00:16:14.061573 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:16:14.061579 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:16:14.061585 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:16:14.061594 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:16:14.061600 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 14 00:16:14.061605 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 14 00:16:14.061611 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 14 00:16:14.061617 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 14 00:16:14.061623 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:16:14.061629 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 14 00:16:14.061635 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 14 00:16:14.061640 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 14 00:16:14.061649 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Mar 14 00:16:14.061655 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Mar 14 00:16:14.061661 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:16:14.061666 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:16:14.061672 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:16:14.061678 kernel: landlock: Up and running.
Mar 14 00:16:14.061684 kernel: SELinux: Initializing.
Mar 14 00:16:14.061690 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:16:14.061696 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:16:14.061704 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Mar 14 00:16:14.061710 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:16:14.061716 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:16:14.061722 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:16:14.061728 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 14 00:16:14.061734 kernel: ... version: 0
Mar 14 00:16:14.061740 kernel: ... bit width: 48
Mar 14 00:16:14.061745 kernel: ... generic registers: 6
Mar 14 00:16:14.061751 kernel: ... value mask: 0000ffffffffffff
Mar 14 00:16:14.061760 kernel: ... max period: 00007fffffffffff
Mar 14 00:16:14.061766 kernel: ... fixed-purpose events: 0
Mar 14 00:16:14.061771 kernel: ... event mask: 000000000000003f
Mar 14 00:16:14.061777 kernel: signal: max sigframe size: 3376
Mar 14 00:16:14.061783 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:16:14.061789 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:16:14.061795 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:16:14.061801 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:16:14.061806 kernel: .... node #0, CPUs: #1
Mar 14 00:16:14.061815 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:16:14.061821 kernel: smpboot: Max logical packages: 1
Mar 14 00:16:14.061827 kernel: smpboot: Total of 2 processors activated (9585.60 BogoMIPS)
Mar 14 00:16:14.061832 kernel: devtmpfs: initialized
Mar 14 00:16:14.061838 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:16:14.061844 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Mar 14 00:16:14.061850 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:16:14.061856 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:16:14.061862 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:16:14.061868 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:16:14.061876 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:16:14.061882 kernel: audit: type=2000 audit(1773447372.139:1): state=initialized audit_enabled=0 res=1
Mar 14 00:16:14.061888 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:16:14.061894 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:16:14.061910 kernel: cpuidle: using governor menu
Mar 14 00:16:14.061916 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:16:14.061922 kernel: dca service started, version 1.12.1
Mar 14 00:16:14.061927 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Mar 14 00:16:14.061936 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:16:14.061942 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:16:14.061948 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:16:14.061955 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:16:14.061961 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:16:14.061967 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:16:14.061973 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:16:14.061978 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:16:14.061984 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:16:14.061993 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:16:14.061999 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:16:14.062005 kernel: ACPI: Interpreter enabled
Mar 14 00:16:14.062011 kernel: ACPI: PM: (supports S0 S5)
Mar 14 00:16:14.062016 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:16:14.062022 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:16:14.062028 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 00:16:14.062034 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 14 00:16:14.062040 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:16:14.062268 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:16:14.062397 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 14 00:16:14.062508 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 14 00:16:14.062516 kernel: PCI host bridge to bus 0000:00
Mar 14 00:16:14.062630 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:16:14.062730 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:16:14.062830 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:16:14.062943 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Mar 14 00:16:14.063044 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 14 00:16:14.063167 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Mar 14 00:16:14.063371 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:16:14.063541 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 14 00:16:14.063701 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Mar 14 00:16:14.063825 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Mar 14 00:16:14.063945 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Mar 14 00:16:14.064056 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Mar 14 00:16:14.064186 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 14 00:16:14.064298 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 14 00:16:14.064408 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 00:16:14.064528 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:16:14.064643 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Mar 14 00:16:14.064759 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 14 00:16:14.064868 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Mar 14 00:16:14.065027 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 14 00:16:14.065863 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Mar 14 00:16:14.066022 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 14 00:16:14.069500 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Mar 14 00:16:14.069653 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 14 00:16:14.069770 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Mar 14 00:16:14.069892 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 14 00:16:14.070022 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Mar 14 00:16:14.070185 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 14 00:16:14.070308 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Mar 14 00:16:14.070428 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 14 00:16:14.070544 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Mar 14 00:16:14.070664 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:16:14.070777 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Mar 14 00:16:14.070904 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 14 00:16:14.071023 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 14 00:16:14.071211 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 14 00:16:14.071330 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Mar 14 00:16:14.071444 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Mar 14 00:16:14.071567 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 14 00:16:14.071682 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Mar 14 00:16:14.071813 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:16:14.071977 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Mar 14 00:16:14.072098 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Mar 14 00:16:14.072231 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:16:14.072387 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 14 00:16:14.072515 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Mar 14 00:16:14.072628 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Mar 14 00:16:14.072757 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 14 00:16:14.072928 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Mar 14 00:16:14.073068 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 14 00:16:14.073862 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Mar 14 00:16:14.074014 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 14 00:16:14.074131 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Mar 14 00:16:14.074278 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Mar 14 00:16:14.074398 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 14 00:16:14.074506 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Mar 14 00:16:14.074614 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Mar 14 00:16:14.074737 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 14 00:16:14.074851 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Mar 14 00:16:14.074972 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 14 00:16:14.075080 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Mar 14 00:16:14.077329 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 14 00:16:14.077465 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Mar 14 00:16:14.077580 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Mar 14 00:16:14.077690 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 14 00:16:14.077798 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Mar 14 00:16:14.077916 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Mar 14 00:16:14.078038 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 14 00:16:14.078277 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Mar 14 00:16:14.078404 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Mar 14 00:16:14.078514 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 14 00:16:14.078623 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Mar 14 00:16:14.078732 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Mar 14 00:16:14.078739 kernel: acpiphp: Slot [0] registered
Mar 14 00:16:14.078860 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:16:14.078988 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Mar 14 00:16:14.079106 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Mar 14 00:16:14.079236 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:16:14.079352 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 14 00:16:14.079460 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Mar 14 00:16:14.079568 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Mar 14 00:16:14.079575 kernel: acpiphp: Slot [0-2] registered
Mar 14 00:16:14.079684 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 14 00:16:14.079791 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Mar 14 00:16:14.079911 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Mar 14 00:16:14.079919 kernel: acpiphp: Slot [0-3] registered
Mar 14 00:16:14.080028 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 14 00:16:14.083571 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Mar 14 00:16:14.083719 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Mar 14 00:16:14.083727 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:16:14.083734 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:16:14.083740 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:16:14.083747 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:16:14.083761 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 14 00:16:14.083767 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 14 00:16:14.083773 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 14 00:16:14.083779 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 14 00:16:14.083785 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 14 00:16:14.083791 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 14 00:16:14.083797 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 14 00:16:14.083803 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 14 00:16:14.083810 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 14 00:16:14.083819 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 14 00:16:14.083825 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 14 00:16:14.083831 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 14 00:16:14.083837 kernel: iommu: Default domain type: Translated
Mar 14 00:16:14.083844 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:16:14.083850 kernel: efivars: Registered efivars operations
Mar 14 00:16:14.083856 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:16:14.083862 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:16:14.083869 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Mar 14 00:16:14.083878 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Mar 14 00:16:14.083884 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Mar 14 00:16:14.083891 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Mar 14 00:16:14.084047 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 14 00:16:14.084170 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 14 00:16:14.084278 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 00:16:14.084285 kernel: vgaarb: loaded
Mar 14 00:16:14.084291 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 14 00:16:14.084302 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 14 00:16:14.084308 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:16:14.084314 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:16:14.084320 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:16:14.084326 kernel: pnp: PnP ACPI init
Mar 14 00:16:14.084447 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 14 00:16:14.084457 kernel: pnp: PnP ACPI: found 5 devices
Mar 14 00:16:14.084463 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:16:14.084470 kernel: NET: Registered PF_INET protocol family
Mar 14 00:16:14.084494 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:16:14.084504 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:16:14.084513 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:16:14.084519 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:16:14.084526 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:16:14.084532 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:16:14.084538 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:16:14.084547 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:16:14.084556 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:16:14.084563 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:16:14.084683 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Mar 14 00:16:14.084800 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Mar 14 00:16:14.084919 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 14 00:16:14.085030 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 14 00:16:14.085614 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 14 00:16:14.085743 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Mar 14 00:16:14.085854 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Mar 14 00:16:14.085976 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Mar 14 00:16:14.086090 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Mar 14 00:16:14.086214 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 14 00:16:14.086326 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Mar 14 00:16:14.086434 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Mar 14 00:16:14.086546 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 14 00:16:14.086653 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Mar 14 00:16:14.086764 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 14 00:16:14.086872 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Mar 14 00:16:14.086990 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Mar 14 00:16:14.087099 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 14 00:16:14.087999 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Mar 14 00:16:14.088122 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 14 00:16:14.088249 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Mar 14 00:16:14.088386 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Mar 14 00:16:14.088510 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 14 00:16:14.088621 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Mar 14 00:16:14.088729 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Mar 14 00:16:14.088843 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Mar 14 00:16:14.088970 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 14 00:16:14.089079 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Mar 14 00:16:14.089200 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Mar 14 00:16:14.089308 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Mar 14 00:16:14.089417 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 14 00:16:14.089526 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Mar 14 00:16:14.089633 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Mar 14 00:16:14.089742 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Mar 14 00:16:14.089850 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 14 00:16:14.089974 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Mar 14 00:16:14.090188 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Mar 14 00:16:14.090301 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Mar 14 00:16:14.090410 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 
14 00:16:14.090510 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 14 00:16:14.090614 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 14 00:16:14.090712 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Mar 14 00:16:14.090811 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Mar 14 00:16:14.090929 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Mar 14 00:16:14.091043 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Mar 14 00:16:14.091168 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Mar 14 00:16:14.091282 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Mar 14 00:16:14.091399 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Mar 14 00:16:14.091504 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Mar 14 00:16:14.091624 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Mar 14 00:16:14.091736 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Mar 14 00:16:14.091841 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Mar 14 00:16:14.091968 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Mar 14 00:16:14.092078 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Mar 14 00:16:14.095655 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Mar 14 00:16:14.095776 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Mar 14 00:16:14.095883 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Mar 14 00:16:14.096013 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Mar 14 00:16:14.096119 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Mar 14 00:16:14.096245 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Mar 14 00:16:14.096360 kernel: pci_bus 0000:09: resource 0 
[io 0x3000-0x3fff] Mar 14 00:16:14.096465 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Mar 14 00:16:14.096572 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Mar 14 00:16:14.096581 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 14 00:16:14.096591 kernel: PCI: CLS 0 bytes, default 64 Mar 14 00:16:14.096598 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 14 00:16:14.096605 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Mar 14 00:16:14.096614 kernel: Initialise system trusted keyrings Mar 14 00:16:14.096621 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 14 00:16:14.096627 kernel: Key type asymmetric registered Mar 14 00:16:14.096634 kernel: Asymmetric key parser 'x509' registered Mar 14 00:16:14.096640 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 14 00:16:14.096646 kernel: io scheduler mq-deadline registered Mar 14 00:16:14.096653 kernel: io scheduler kyber registered Mar 14 00:16:14.096659 kernel: io scheduler bfq registered Mar 14 00:16:14.096774 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 14 00:16:14.096889 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 14 00:16:14.097011 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 14 00:16:14.097121 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 14 00:16:14.097244 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 14 00:16:14.097355 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 14 00:16:14.097464 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 14 00:16:14.097572 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 14 00:16:14.097681 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 14 00:16:14.097798 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 14 00:16:14.097917 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 14 
00:16:14.098026 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 14 00:16:14.098146 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 14 00:16:14.098256 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 14 00:16:14.098365 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 14 00:16:14.098473 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 14 00:16:14.098481 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 14 00:16:14.098589 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Mar 14 00:16:14.098701 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Mar 14 00:16:14.098708 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 14 00:16:14.098715 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Mar 14 00:16:14.098722 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 14 00:16:14.098728 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 14 00:16:14.098735 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 14 00:16:14.098741 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 14 00:16:14.098748 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 14 00:16:14.098758 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 14 00:16:14.098873 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 14 00:16:14.098986 kernel: rtc_cmos 00:03: registered as rtc0 Mar 14 00:16:14.099090 kernel: rtc_cmos 00:03: setting system clock to 2026-03-14T00:16:13 UTC (1773447373) Mar 14 00:16:14.099210 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 14 00:16:14.099219 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 14 00:16:14.099226 kernel: efifb: probing for efifb Mar 14 00:16:14.099232 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Mar 14 00:16:14.099242 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Mar 14 
00:16:14.099249 kernel: efifb: scrolling: redraw Mar 14 00:16:14.099255 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 14 00:16:14.099262 kernel: Console: switching to colour frame buffer device 160x50 Mar 14 00:16:14.099268 kernel: fb0: EFI VGA frame buffer device Mar 14 00:16:14.099274 kernel: pstore: Using crash dump compression: deflate Mar 14 00:16:14.099281 kernel: pstore: Registered efi_pstore as persistent store backend Mar 14 00:16:14.099287 kernel: NET: Registered PF_INET6 protocol family Mar 14 00:16:14.099293 kernel: Segment Routing with IPv6 Mar 14 00:16:14.099303 kernel: In-situ OAM (IOAM) with IPv6 Mar 14 00:16:14.099309 kernel: NET: Registered PF_PACKET protocol family Mar 14 00:16:14.099315 kernel: Key type dns_resolver registered Mar 14 00:16:14.099322 kernel: IPI shorthand broadcast: enabled Mar 14 00:16:14.099328 kernel: sched_clock: Marking stable (1317017646, 269313240)->(1773085838, -186754952) Mar 14 00:16:14.099337 kernel: registered taskstats version 1 Mar 14 00:16:14.099343 kernel: Loading compiled-in X.509 certificates Mar 14 00:16:14.099350 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec' Mar 14 00:16:14.099356 kernel: Key type .fscrypt registered Mar 14 00:16:14.099365 kernel: Key type fscrypt-provisioning registered Mar 14 00:16:14.099372 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 14 00:16:14.099378 kernel: ima: Allocated hash algorithm: sha1 Mar 14 00:16:14.099384 kernel: ima: No architecture policies found Mar 14 00:16:14.099390 kernel: clk: Disabling unused clocks Mar 14 00:16:14.099397 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 14 00:16:14.099403 kernel: Write protecting the kernel read-only data: 36864k Mar 14 00:16:14.099409 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 14 00:16:14.099416 kernel: Run /init as init process Mar 14 00:16:14.099425 kernel: with arguments: Mar 14 00:16:14.099432 kernel: /init Mar 14 00:16:14.099438 kernel: with environment: Mar 14 00:16:14.099444 kernel: HOME=/ Mar 14 00:16:14.099451 kernel: TERM=linux Mar 14 00:16:14.099459 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:16:14.099468 systemd[1]: Detected virtualization kvm. Mar 14 00:16:14.099478 systemd[1]: Detected architecture x86-64. Mar 14 00:16:14.099485 systemd[1]: Running in initrd. Mar 14 00:16:14.099491 systemd[1]: No hostname configured, using default hostname. Mar 14 00:16:14.099498 systemd[1]: Hostname set to . Mar 14 00:16:14.099505 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:16:14.099512 systemd[1]: Queued start job for default target initrd.target. Mar 14 00:16:14.099519 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:16:14.099533 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:16:14.099543 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 14 00:16:14.099550 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:16:14.099572 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 14 00:16:14.099584 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 14 00:16:14.099597 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 14 00:16:14.099610 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 14 00:16:14.099617 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:16:14.099637 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:16:14.099644 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:16:14.099651 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:16:14.099658 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:16:14.099665 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:16:14.099671 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:16:14.099678 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:16:14.099685 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 14 00:16:14.099692 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 14 00:16:14.099701 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:16:14.099708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:16:14.099715 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:16:14.099722 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 14 00:16:14.099728 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 14 00:16:14.099735 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:16:14.099741 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 14 00:16:14.099748 systemd[1]: Starting systemd-fsck-usr.service... Mar 14 00:16:14.099757 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:16:14.099764 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:16:14.099771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:16:14.099801 systemd-journald[188]: Collecting audit messages is disabled. Mar 14 00:16:14.099821 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 14 00:16:14.099828 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:16:14.099835 systemd[1]: Finished systemd-fsck-usr.service. Mar 14 00:16:14.099842 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:16:14.099849 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:16:14.099860 systemd-journald[188]: Journal started Mar 14 00:16:14.099875 systemd-journald[188]: Runtime Journal (/run/log/journal/e61279a29f96416aa86eca641030f303) is 8.0M, max 76.3M, 68.3M free. Mar 14 00:16:14.063783 systemd-modules-load[189]: Inserted module 'overlay' Mar 14 00:16:14.112668 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 14 00:16:14.112749 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Mar 14 00:16:14.118294 systemd-modules-load[189]: Inserted module 'br_netfilter' Mar 14 00:16:14.120060 kernel: Bridge firewalling registered Mar 14 00:16:14.125247 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:16:14.127287 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:16:14.129032 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:16:14.135097 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:16:14.141366 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 14 00:16:14.145323 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:16:14.154291 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:16:14.159252 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:16:14.164163 dracut-cmdline[211]: dracut-dracut-053 Mar 14 00:16:14.169754 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:16:14.172587 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:16:14.174337 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:16:14.184268 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:16:14.191363 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 14 00:16:14.222517 systemd-resolved[245]: Positive Trust Anchors: Mar 14 00:16:14.222555 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:16:14.222594 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:16:14.227382 systemd-resolved[245]: Defaulting to hostname 'linux'. Mar 14 00:16:14.228935 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:16:14.230691 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:16:14.273209 kernel: SCSI subsystem initialized Mar 14 00:16:14.284180 kernel: Loading iSCSI transport class v2.0-870. Mar 14 00:16:14.295213 kernel: iscsi: registered transport (tcp) Mar 14 00:16:14.318542 kernel: iscsi: registered transport (qla4xxx) Mar 14 00:16:14.318637 kernel: QLogic iSCSI HBA Driver Mar 14 00:16:14.384736 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 14 00:16:14.393449 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 14 00:16:14.429710 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 14 00:16:14.429804 kernel: device-mapper: uevent: version 1.0.3 Mar 14 00:16:14.430554 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 14 00:16:14.482232 kernel: raid6: avx512x4 gen() 31993 MB/s Mar 14 00:16:14.502210 kernel: raid6: avx512x2 gen() 31472 MB/s Mar 14 00:16:14.520218 kernel: raid6: avx512x1 gen() 28787 MB/s Mar 14 00:16:14.538221 kernel: raid6: avx2x4 gen() 24581 MB/s Mar 14 00:16:14.556189 kernel: raid6: avx2x2 gen() 24518 MB/s Mar 14 00:16:14.574624 kernel: raid6: avx2x1 gen() 20006 MB/s Mar 14 00:16:14.574710 kernel: raid6: using algorithm avx512x4 gen() 31993 MB/s Mar 14 00:16:14.597442 kernel: raid6: .... xor() 4023 MB/s, rmw enabled Mar 14 00:16:14.597530 kernel: raid6: using avx512x2 recovery algorithm Mar 14 00:16:14.622182 kernel: xor: automatically using best checksumming function avx Mar 14 00:16:14.779160 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 14 00:16:14.792235 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:16:14.798415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:16:14.824311 systemd-udevd[409]: Using default interface naming scheme 'v255'. Mar 14 00:16:14.830339 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:16:14.840356 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 14 00:16:14.857203 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Mar 14 00:16:14.894498 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:16:14.905476 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:16:15.014950 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:16:15.024473 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 14 00:16:15.043015 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 14 00:16:15.045915 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:16:15.048515 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:16:15.049765 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:16:15.056412 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 14 00:16:15.086870 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:16:15.154175 kernel: scsi host0: Virtio SCSI HBA Mar 14 00:16:15.162024 kernel: libata version 3.00 loaded. Mar 14 00:16:15.169174 kernel: ahci 0000:00:1f.2: version 3.0 Mar 14 00:16:15.173168 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 14 00:16:15.185172 kernel: cryptd: max_cpu_qlen set to 1000 Mar 14 00:16:15.191641 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 14 00:16:15.191743 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 14 00:16:15.191401 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:16:15.202964 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 14 00:16:15.191613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:16:15.198560 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:16:15.199104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:16:15.199386 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:16:15.199974 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:16:15.213637 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:16:15.226842 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 14 00:16:15.228245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:16:15.241341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:16:15.251540 kernel: scsi host1: ahci Mar 14 00:16:15.254178 kernel: ACPI: bus type USB registered Mar 14 00:16:15.261167 kernel: scsi host2: ahci Mar 14 00:16:15.266179 kernel: usbcore: registered new interface driver usbfs Mar 14 00:16:15.278188 kernel: usbcore: registered new interface driver hub Mar 14 00:16:15.284599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:16:15.329205 kernel: usbcore: registered new device driver usb Mar 14 00:16:15.329248 kernel: scsi host3: ahci Mar 14 00:16:15.329535 kernel: scsi host4: ahci Mar 14 00:16:15.329727 kernel: AVX2 version of gcm_enc/dec engaged. Mar 14 00:16:15.329741 kernel: AES CTR mode by8 optimization enabled Mar 14 00:16:15.329754 kernel: scsi host5: ahci Mar 14 00:16:15.329953 kernel: scsi host6: ahci Mar 14 00:16:15.330130 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48 Mar 14 00:16:15.330162 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48 Mar 14 00:16:15.330175 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48 Mar 14 00:16:15.330193 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48 Mar 14 00:16:15.330207 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48 Mar 14 00:16:15.330220 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48 Mar 14 00:16:15.297640 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:16:15.349599 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 14 00:16:15.625209 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 14 00:16:15.625349 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 14 00:16:15.625425 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 14 00:16:15.634217 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 14 00:16:15.634344 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 14 00:16:15.641873 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 14 00:16:15.641982 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 14 00:16:15.646069 kernel: ata1.00: applying bridge limits Mar 14 00:16:15.646238 kernel: ata1.00: configured for UDMA/100 Mar 14 00:16:15.655203 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 14 00:16:15.697273 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 14 00:16:15.697770 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 14 00:16:15.698037 kernel: sd 0:0:0:0: Power-on or device reset occurred Mar 14 00:16:15.700506 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 14 00:16:15.700854 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Mar 14 00:16:15.709551 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 14 00:16:15.709940 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 14 00:16:15.715041 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 14 00:16:15.715353 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Mar 14 00:16:15.715563 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 14 00:16:15.721649 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 14 00:16:15.721990 kernel: hub 1-0:1.0: USB hub found Mar 14 00:16:15.723398 kernel: hub 1-0:1.0: 4 ports detected Mar 14 00:16:15.731399 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Mar 14 00:16:15.731654 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 14 00:16:15.731666 kernel: hub 2-0:1.0: USB hub found Mar 14 00:16:15.731813 kernel: GPT:17805311 != 160006143 Mar 14 00:16:15.731822 kernel: hub 2-0:1.0: 4 ports detected Mar 14 00:16:15.731966 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 14 00:16:15.739580 kernel: GPT:17805311 != 160006143 Mar 14 00:16:15.742655 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 14 00:16:15.742710 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:16:15.748316 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 14 00:16:15.766543 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 14 00:16:15.766870 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 14 00:16:15.779177 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Mar 14 00:16:15.790198 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (458) Mar 14 00:16:15.803981 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 14 00:16:15.807244 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (461) Mar 14 00:16:15.811609 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 14 00:16:15.823638 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 14 00:16:15.828228 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 14 00:16:15.829014 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 14 00:16:15.836360 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 14 00:16:15.855717 disk-uuid[583]: Primary Header is updated. Mar 14 00:16:15.855717 disk-uuid[583]: Secondary Entries is updated. 
Mar 14 00:16:15.855717 disk-uuid[583]: Secondary Header is updated. Mar 14 00:16:15.862175 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:16:15.873185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:16:15.971754 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 14 00:16:16.113167 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 14 00:16:16.120364 kernel: usbcore: registered new interface driver usbhid Mar 14 00:16:16.120454 kernel: usbhid: USB HID core driver Mar 14 00:16:16.128175 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 14 00:16:16.128246 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 14 00:16:16.881281 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:16:16.884920 disk-uuid[585]: The operation has completed successfully. Mar 14 00:16:16.934947 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 14 00:16:16.935086 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 14 00:16:16.951306 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 14 00:16:16.956300 sh[602]: Success Mar 14 00:16:16.969171 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 14 00:16:17.008287 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 14 00:16:17.027937 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 14 00:16:17.028840 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 14 00:16:17.055083 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def Mar 14 00:16:17.055164 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:16:17.055183 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 14 00:16:17.058340 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 14 00:16:17.062093 kernel: BTRFS info (device dm-0): using free space tree Mar 14 00:16:17.072214 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 14 00:16:17.074485 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 14 00:16:17.075597 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 14 00:16:17.082320 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 14 00:16:17.085347 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 14 00:16:17.102857 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:16:17.102934 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:16:17.102962 kernel: BTRFS info (device sda6): using free space tree Mar 14 00:16:17.112198 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 14 00:16:17.112270 kernel: BTRFS info (device sda6): auto enabling async discard Mar 14 00:16:17.127568 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 14 00:16:17.130522 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:16:17.138166 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 14 00:16:17.143357 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 14 00:16:17.216064 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:16:17.219948 ignition[699]: Ignition 2.19.0 Mar 14 00:16:17.219958 ignition[699]: Stage: fetch-offline Mar 14 00:16:17.219991 ignition[699]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:16:17.220000 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:16:17.220079 ignition[699]: parsed url from cmdline: "" Mar 14 00:16:17.220083 ignition[699]: no config URL provided Mar 14 00:16:17.220088 ignition[699]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 00:16:17.220096 ignition[699]: no config at "/usr/lib/ignition/user.ign" Mar 14 00:16:17.220101 ignition[699]: failed to fetch config: resource requires networking Mar 14 00:16:17.220290 ignition[699]: Ignition finished successfully Mar 14 00:16:17.225324 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:16:17.225877 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:16:17.245815 systemd-networkd[787]: lo: Link UP Mar 14 00:16:17.245825 systemd-networkd[787]: lo: Gained carrier Mar 14 00:16:17.248440 systemd-networkd[787]: Enumeration completed Mar 14 00:16:17.248646 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:16:17.249185 systemd[1]: Reached target network.target - Network. Mar 14 00:16:17.249498 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:16:17.249502 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:16:17.250583 systemd-networkd[787]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 14 00:16:17.250587 systemd-networkd[787]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:16:17.252514 systemd-networkd[787]: eth0: Link UP Mar 14 00:16:17.252518 systemd-networkd[787]: eth0: Gained carrier Mar 14 00:16:17.252526 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:16:17.256254 systemd-networkd[787]: eth1: Link UP Mar 14 00:16:17.256259 systemd-networkd[787]: eth1: Gained carrier Mar 14 00:16:17.256266 systemd-networkd[787]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:16:17.256559 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 14 00:16:17.276705 ignition[790]: Ignition 2.19.0 Mar 14 00:16:17.277948 ignition[790]: Stage: fetch Mar 14 00:16:17.278310 ignition[790]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:16:17.278334 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:16:17.278503 ignition[790]: parsed url from cmdline: "" Mar 14 00:16:17.278514 ignition[790]: no config URL provided Mar 14 00:16:17.278527 ignition[790]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 00:16:17.278547 ignition[790]: no config at "/usr/lib/ignition/user.ign" Mar 14 00:16:17.278581 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Mar 14 00:16:17.278843 ignition[790]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 14 00:16:17.295242 systemd-networkd[787]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Mar 14 00:16:17.316242 systemd-networkd[787]: eth0: DHCPv4 address 204.168.138.0/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 14 00:16:17.479237 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Mar 14 00:16:17.485839 
ignition[790]: GET result: OK Mar 14 00:16:17.485973 ignition[790]: parsing config with SHA512: 64eade7088daa00f9a019fa416dac3212f233aeed97c2170194eee258278ce746ea903440d0ceeca2f3a79c14666ea673221bb7cedac2d91bbc80e75e412cb50 Mar 14 00:16:17.491833 unknown[790]: fetched base config from "system" Mar 14 00:16:17.491858 unknown[790]: fetched base config from "system" Mar 14 00:16:17.493095 ignition[790]: fetch: fetch complete Mar 14 00:16:17.491891 unknown[790]: fetched user config from "hetzner" Mar 14 00:16:17.493107 ignition[790]: fetch: fetch passed Mar 14 00:16:17.493438 ignition[790]: Ignition finished successfully Mar 14 00:16:17.498386 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 14 00:16:17.506486 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 14 00:16:17.539634 ignition[798]: Ignition 2.19.0 Mar 14 00:16:17.540231 ignition[798]: Stage: kargs Mar 14 00:16:17.540712 ignition[798]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:16:17.540738 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:16:17.542328 ignition[798]: kargs: kargs passed Mar 14 00:16:17.542481 ignition[798]: Ignition finished successfully Mar 14 00:16:17.545888 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 14 00:16:17.553347 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 14 00:16:17.587731 ignition[805]: Ignition 2.19.0 Mar 14 00:16:17.587749 ignition[805]: Stage: disks Mar 14 00:16:17.588014 ignition[805]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:16:17.588033 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:16:17.589087 ignition[805]: disks: disks passed Mar 14 00:16:17.589180 ignition[805]: Ignition finished successfully Mar 14 00:16:17.592594 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 14 00:16:17.594349 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Mar 14 00:16:17.595355 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 14 00:16:17.596765 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:16:17.598108 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:16:17.599612 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:16:17.607417 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 14 00:16:17.636024 systemd-fsck[814]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 14 00:16:17.638085 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 14 00:16:17.646251 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 14 00:16:17.725166 kernel: EXT4-fs (sda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none. Mar 14 00:16:17.725386 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 14 00:16:17.726385 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 14 00:16:17.733218 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 14 00:16:17.736228 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 14 00:16:17.738287 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 14 00:16:17.738681 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 14 00:16:17.738706 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:16:17.747593 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Mar 14 00:16:17.768251 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (822) Mar 14 00:16:17.768274 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:16:17.768284 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:16:17.768293 kernel: BTRFS info (device sda6): using free space tree Mar 14 00:16:17.768301 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 14 00:16:17.768310 kernel: BTRFS info (device sda6): auto enabling async discard Mar 14 00:16:17.773404 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 14 00:16:17.776836 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 14 00:16:17.812077 coreos-metadata[824]: Mar 14 00:16:17.812 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Mar 14 00:16:17.813260 coreos-metadata[824]: Mar 14 00:16:17.813 INFO Fetch successful Mar 14 00:16:17.815480 coreos-metadata[824]: Mar 14 00:16:17.815 INFO wrote hostname ci-4081-3-6-n-968d08e397 to /sysroot/etc/hostname Mar 14 00:16:17.817037 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory Mar 14 00:16:17.818593 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 14 00:16:17.823935 initrd-setup-root[857]: cut: /sysroot/etc/group: No such file or directory Mar 14 00:16:17.829529 initrd-setup-root[864]: cut: /sysroot/etc/shadow: No such file or directory Mar 14 00:16:17.833796 initrd-setup-root[871]: cut: /sysroot/etc/gshadow: No such file or directory Mar 14 00:16:17.925971 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 14 00:16:17.931216 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 14 00:16:17.934265 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Mar 14 00:16:17.946193 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:16:17.958557 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 14 00:16:17.987361 ignition[939]: INFO : Ignition 2.19.0 Mar 14 00:16:17.987361 ignition[939]: INFO : Stage: mount Mar 14 00:16:17.988505 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:16:17.988505 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:16:17.988505 ignition[939]: INFO : mount: mount passed Mar 14 00:16:17.988505 ignition[939]: INFO : Ignition finished successfully Mar 14 00:16:17.991280 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 14 00:16:17.997261 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 14 00:16:18.051049 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 14 00:16:18.058424 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 14 00:16:18.076192 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (951) Mar 14 00:16:18.086622 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:16:18.086659 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:16:18.086680 kernel: BTRFS info (device sda6): using free space tree Mar 14 00:16:18.095242 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 14 00:16:18.095295 kernel: BTRFS info (device sda6): auto enabling async discard Mar 14 00:16:18.099472 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 14 00:16:18.122790 ignition[968]: INFO : Ignition 2.19.0 Mar 14 00:16:18.123493 ignition[968]: INFO : Stage: files Mar 14 00:16:18.124065 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:16:18.124509 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:16:18.125524 ignition[968]: DEBUG : files: compiled without relabeling support, skipping Mar 14 00:16:18.126838 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 14 00:16:18.127387 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 14 00:16:18.130964 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 14 00:16:18.131335 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 14 00:16:18.131840 unknown[968]: wrote ssh authorized keys file for user: core Mar 14 00:16:18.132382 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 14 00:16:18.134390 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 14 00:16:18.134739 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 14 00:16:18.134739 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 14 00:16:18.134739 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 14 00:16:18.333357 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 14 00:16:18.517349 systemd-networkd[787]: eth1: Gained IPv6LL Mar 14 00:16:18.643925 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing 
file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 14 00:16:18.643925 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 14 00:16:18.646980 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 14 00:16:18.763567 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Mar 14 00:16:18.880536 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 14 00:16:18.881092 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Mar 14 00:16:18.881092 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Mar 14 00:16:18.881092 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 14 00:16:18.881092 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 14 00:16:18.881092 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 14 00:16:18.883408 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 14 00:16:18.883408 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 14 00:16:18.883408 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 14 00:16:18.883408 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] 
writing file "/sysroot/etc/flatcar/update.conf" Mar 14 00:16:18.883408 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 14 00:16:18.883408 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 14 00:16:18.883408 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 14 00:16:18.883408 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 14 00:16:18.883408 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 14 00:16:19.222210 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Mar 14 00:16:19.285560 systemd-networkd[787]: eth0: Gained IPv6LL Mar 14 00:16:19.473607 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 14 00:16:19.473607 ignition[968]: INFO : files: op(d): [started] processing unit "containerd.service" Mar 14 00:16:19.475602 ignition[968]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 14 00:16:19.475602 ignition[968]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 14 00:16:19.475602 ignition[968]: INFO : files: op(d): [finished] processing unit "containerd.service" Mar 14 
00:16:19.475602 ignition[968]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Mar 14 00:16:19.475602 ignition[968]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 14 00:16:19.475602 ignition[968]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 14 00:16:19.475602 ignition[968]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Mar 14 00:16:19.475602 ignition[968]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Mar 14 00:16:19.475602 ignition[968]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 14 00:16:19.475602 ignition[968]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 14 00:16:19.475602 ignition[968]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Mar 14 00:16:19.475602 ignition[968]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Mar 14 00:16:19.489236 ignition[968]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Mar 14 00:16:19.489236 ignition[968]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 14 00:16:19.489236 ignition[968]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 14 00:16:19.489236 ignition[968]: INFO : files: files passed Mar 14 00:16:19.489236 ignition[968]: INFO : Ignition finished successfully Mar 14 00:16:19.478832 systemd[1]: Finished ignition-files.service - Ignition (files). 
Mar 14 00:16:19.485369 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 14 00:16:19.491371 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 14 00:16:19.495503 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 14 00:16:19.495649 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 14 00:16:19.508576 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:16:19.508576 initrd-setup-root-after-ignition[997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:16:19.511007 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:16:19.513550 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:16:19.514995 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 14 00:16:19.521351 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 14 00:16:19.544579 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 14 00:16:19.544736 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 14 00:16:19.546235 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 14 00:16:19.546997 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 14 00:16:19.548090 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 14 00:16:19.549288 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 14 00:16:19.571789 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:16:19.578285 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Mar 14 00:16:19.591870 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:16:19.592707 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:16:19.593778 systemd[1]: Stopped target timers.target - Timer Units. Mar 14 00:16:19.594763 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 14 00:16:19.594941 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:16:19.596202 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 14 00:16:19.597173 systemd[1]: Stopped target basic.target - Basic System. Mar 14 00:16:19.598100 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 14 00:16:19.599042 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:16:19.599989 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 14 00:16:19.600934 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 14 00:16:19.601864 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:16:19.602872 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 14 00:16:19.603807 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 14 00:16:19.604747 systemd[1]: Stopped target swap.target - Swaps. Mar 14 00:16:19.605656 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 14 00:16:19.605797 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:16:19.607043 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:16:19.608005 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:16:19.608874 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 14 00:16:19.609001 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 14 00:16:19.609785 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 14 00:16:19.609924 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 14 00:16:19.611132 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 14 00:16:19.611294 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:16:19.612129 systemd[1]: ignition-files.service: Deactivated successfully. Mar 14 00:16:19.612287 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 14 00:16:19.612901 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 14 00:16:19.613027 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 14 00:16:19.629621 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 14 00:16:19.633366 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 14 00:16:19.634031 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 14 00:16:19.634256 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:16:19.635028 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 14 00:16:19.635369 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:16:19.643088 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 14 00:16:19.643224 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 14 00:16:19.654508 ignition[1022]: INFO : Ignition 2.19.0 Mar 14 00:16:19.656638 ignition[1022]: INFO : Stage: umount Mar 14 00:16:19.656638 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:16:19.656638 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:16:19.656638 ignition[1022]: INFO : umount: umount passed Mar 14 00:16:19.656638 ignition[1022]: INFO : Ignition finished successfully Mar 14 00:16:19.660647 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 14 00:16:19.660804 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 14 00:16:19.662380 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 14 00:16:19.662450 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 14 00:16:19.663414 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 14 00:16:19.663471 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 14 00:16:19.664023 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 14 00:16:19.664076 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 14 00:16:19.664634 systemd[1]: Stopped target network.target - Network. Mar 14 00:16:19.665100 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 14 00:16:19.666746 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:16:19.667794 systemd[1]: Stopped target paths.target - Path Units. Mar 14 00:16:19.668799 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 14 00:16:19.672242 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:16:19.672890 systemd[1]: Stopped target slices.target - Slice Units. Mar 14 00:16:19.674196 systemd[1]: Stopped target sockets.target - Socket Units. Mar 14 00:16:19.674803 systemd[1]: iscsid.socket: Deactivated successfully. 
Mar 14 00:16:19.674870 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:16:19.676253 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 14 00:16:19.676302 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:16:19.676837 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 14 00:16:19.676913 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 14 00:16:19.678373 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 14 00:16:19.678475 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 14 00:16:19.680757 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 14 00:16:19.681781 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 14 00:16:19.687248 systemd-networkd[787]: eth0: DHCPv6 lease lost Mar 14 00:16:19.689165 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 14 00:16:19.692982 systemd-networkd[787]: eth1: DHCPv6 lease lost Mar 14 00:16:19.694689 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 14 00:16:19.694867 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 14 00:16:19.699035 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 14 00:16:19.699299 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 14 00:16:19.704783 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 14 00:16:19.704879 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:16:19.713287 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 14 00:16:19.713839 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 14 00:16:19.713940 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:16:19.714868 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 14 00:16:19.714937 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:16:19.716773 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 14 00:16:19.716835 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 14 00:16:19.717978 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 14 00:16:19.718038 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:16:19.721711 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:16:19.722791 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 14 00:16:19.722935 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 14 00:16:19.731933 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 14 00:16:19.732059 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 14 00:16:19.739396 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 14 00:16:19.739560 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 14 00:16:19.741658 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 14 00:16:19.741885 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:16:19.743578 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 14 00:16:19.743662 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 14 00:16:19.744649 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 14 00:16:19.744701 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:16:19.745496 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 14 00:16:19.745561 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:16:19.746936 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Mar 14 00:16:19.746996 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 14 00:16:19.748228 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:16:19.748295 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:16:19.754352 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 14 00:16:19.755784 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 14 00:16:19.756268 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:16:19.757734 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 14 00:16:19.757803 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:16:19.758427 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 14 00:16:19.758486 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:16:19.759080 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:16:19.760414 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:16:19.764071 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 14 00:16:19.764786 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 14 00:16:19.766672 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 14 00:16:19.772402 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 14 00:16:19.784572 systemd[1]: Switching root. Mar 14 00:16:19.828216 systemd-journald[188]: Journal stopped Mar 14 00:16:21.009036 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). 
Mar 14 00:16:21.009157 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:16:21.009178 kernel: SELinux: policy capability open_perms=1
Mar 14 00:16:21.009188 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:16:21.009196 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:16:21.009211 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:16:21.009220 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:16:21.009228 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:16:21.009237 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:16:21.009248 kernel: audit: type=1403 audit(1773447379.991:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:16:21.009258 systemd[1]: Successfully loaded SELinux policy in 50.041ms.
Mar 14 00:16:21.009284 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.673ms.
Mar 14 00:16:21.009294 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:16:21.009305 systemd[1]: Detected virtualization kvm.
Mar 14 00:16:21.009314 systemd[1]: Detected architecture x86-64.
Mar 14 00:16:21.009323 systemd[1]: Detected first boot.
Mar 14 00:16:21.009332 systemd[1]: Hostname set to .
Mar 14 00:16:21.009345 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:16:21.009354 zram_generator::config[1083]: No configuration found.
Mar 14 00:16:21.009369 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:16:21.009378 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:16:21.009388 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 14 00:16:21.009398 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:16:21.009408 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:16:21.009417 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:16:21.009428 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:16:21.009438 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:16:21.009447 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:16:21.009456 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:16:21.009465 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:16:21.009475 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:16:21.009484 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:16:21.009493 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:16:21.009503 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:16:21.009515 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:16:21.009524 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:16:21.009534 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:16:21.009543 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:16:21.009552 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:16:21.009561 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:16:21.009571 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:16:21.009582 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:16:21.009592 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:16:21.009601 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:16:21.009610 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:16:21.009619 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:16:21.009628 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:16:21.009637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:16:21.009646 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:16:21.009655 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:16:21.009667 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:16:21.009676 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:16:21.009685 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:16:21.009694 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:16:21.009703 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:16:21.009713 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:16:21.009722 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:16:21.009731 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:16:21.009740 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:16:21.009752 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:16:21.009760 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:16:21.009770 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:16:21.009779 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:16:21.009788 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:16:21.009799 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:16:21.009808 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:16:21.009820 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:16:21.009836 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:16:21.009845 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 14 00:16:21.009855 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 14 00:16:21.009864 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:16:21.009873 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:16:21.009882 kernel: ACPI: bus type drm_connector registered
Mar 14 00:16:21.009891 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:16:21.009936 systemd-journald[1179]: Collecting audit messages is disabled.
Mar 14 00:16:21.009963 kernel: fuse: init (API version 7.39)
Mar 14 00:16:21.009973 kernel: loop: module loaded
Mar 14 00:16:21.009983 systemd-journald[1179]: Journal started
Mar 14 00:16:21.010002 systemd-journald[1179]: Runtime Journal (/run/log/journal/e61279a29f96416aa86eca641030f303) is 8.0M, max 76.3M, 68.3M free.
Mar 14 00:16:21.018024 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:16:21.028653 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:16:21.028698 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:16:21.038168 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:16:21.039724 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:16:21.040767 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:16:21.041573 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:16:21.042366 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:16:21.043114 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:16:21.043906 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:16:21.045062 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:16:21.046131 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:16:21.047392 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:16:21.047617 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:16:21.048680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:16:21.048963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:16:21.050101 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:16:21.050586 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:16:21.051653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:16:21.051870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:16:21.052959 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:16:21.053257 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:16:21.054633 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:16:21.054955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:16:21.055999 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:16:21.057066 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:16:21.058103 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:16:21.074974 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:16:21.081273 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:16:21.091041 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:16:21.092223 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:16:21.098380 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:16:21.110326 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:16:21.110954 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:16:21.116899 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:16:21.118332 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:16:21.120816 systemd-journald[1179]: Time spent on flushing to /var/log/journal/e61279a29f96416aa86eca641030f303 is 76.100ms for 1168 entries.
Mar 14 00:16:21.120816 systemd-journald[1179]: System Journal (/var/log/journal/e61279a29f96416aa86eca641030f303) is 8.0M, max 584.8M, 576.8M free.
Mar 14 00:16:21.214868 systemd-journald[1179]: Received client request to flush runtime journal.
Mar 14 00:16:21.129381 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:16:21.138440 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:16:21.145093 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:16:21.147311 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:16:21.180766 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:16:21.182080 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:16:21.212670 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:16:21.229880 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:16:21.255288 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Mar 14 00:16:21.255717 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Mar 14 00:16:21.270817 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:16:21.282752 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:16:21.284760 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:16:21.295336 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:16:21.325823 udevadm[1242]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 14 00:16:21.330737 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:16:21.342419 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:16:21.363786 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Mar 14 00:16:21.364160 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Mar 14 00:16:21.372787 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:16:21.556352 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:16:21.563334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:16:21.596950 systemd-udevd[1252]: Using default interface naming scheme 'v255'.
Mar 14 00:16:21.617195 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:16:21.627436 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:16:21.648327 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 00:16:21.690465 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 14 00:16:21.701759 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 14 00:16:21.803939 systemd-networkd[1258]: lo: Link UP
Mar 14 00:16:21.803955 systemd-networkd[1258]: lo: Gained carrier
Mar 14 00:16:21.809781 systemd-networkd[1258]: Enumeration completed
Mar 14 00:16:21.810293 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:16:21.812622 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:16:21.813533 systemd-networkd[1258]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:16:21.818327 systemd-networkd[1258]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:16:21.818385 systemd-networkd[1258]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:16:21.820536 systemd-networkd[1258]: eth0: Link UP
Mar 14 00:16:21.820693 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 14 00:16:21.821199 systemd-networkd[1258]: eth0: Gained carrier
Mar 14 00:16:21.821261 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:16:21.828035 systemd-networkd[1258]: eth1: Link UP
Mar 14 00:16:21.829185 systemd-networkd[1258]: eth1: Gained carrier
Mar 14 00:16:21.829206 systemd-networkd[1258]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:16:21.844160 kernel: mousedev: PS/2 mouse device common for all mice
Mar 14 00:16:21.858035 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:16:21.859211 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 14 00:16:21.861517 systemd-networkd[1258]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:16:21.867206 systemd-networkd[1258]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 14 00:16:21.869244 kernel: ACPI: button: Power Button [PWRF]
Mar 14 00:16:21.878203 systemd-networkd[1258]: eth0: DHCPv4 address 204.168.138.0/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 14 00:16:21.880784 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 14 00:16:21.882245 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped.
Mar 14 00:16:21.882317 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:16:21.882454 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:16:21.889601 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:16:21.893060 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1266)
Mar 14 00:16:21.894425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:16:21.905315 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:16:21.906095 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:16:21.906156 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:16:21.906194 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:16:21.906602 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:16:21.906806 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:16:21.913527 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:16:21.913710 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:16:21.922776 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:16:21.924342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:16:21.930846 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:16:21.930904 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:16:21.979179 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 14 00:16:21.979474 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 14 00:16:21.979487 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 14 00:16:21.986306 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 14 00:16:21.986593 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 14 00:16:21.995153 kernel: EDAC MC: Ver: 3.0.0
Mar 14 00:16:21.993368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:16:22.004170 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Mar 14 00:16:22.010204 kernel: Console: switching to colour dummy device 80x25
Mar 14 00:16:22.015434 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Mar 14 00:16:22.016405 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:16:22.016691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:16:22.021536 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 14 00:16:22.021565 kernel: [drm] features: -context_init
Mar 14 00:16:22.025982 kernel: [drm] number of scanouts: 1
Mar 14 00:16:22.026063 kernel: [drm] number of cap sets: 0
Mar 14 00:16:22.029355 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 14 00:16:22.030280 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:16:22.034338 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 14 00:16:22.040176 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 14 00:16:22.040239 kernel: Console: switching to colour frame buffer device 160x50
Mar 14 00:16:22.051164 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 14 00:16:22.053962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:16:22.054295 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:16:22.066391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:16:22.124635 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:16:22.134990 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:16:22.143366 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 00:16:22.155345 lvm[1328]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:16:22.186570 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 14 00:16:22.189310 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:16:22.195400 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 14 00:16:22.200573 lvm[1331]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:16:22.236645 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 14 00:16:22.238661 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:16:22.239120 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:16:22.239164 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:16:22.239248 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:16:22.240905 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:16:22.246288 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:16:22.248270 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:16:22.249289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:16:22.251302 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:16:22.262555 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:16:22.267806 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:16:22.270217 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:16:22.284129 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:16:22.292260 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 14 00:16:22.293574 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:16:22.303165 kernel: loop0: detected capacity change from 0 to 8
Mar 14 00:16:22.316118 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:16:22.339179 kernel: loop1: detected capacity change from 0 to 228704
Mar 14 00:16:22.377173 kernel: loop2: detected capacity change from 0 to 142488
Mar 14 00:16:22.417124 kernel: loop3: detected capacity change from 0 to 140768
Mar 14 00:16:22.465309 kernel: loop4: detected capacity change from 0 to 8
Mar 14 00:16:22.472166 kernel: loop5: detected capacity change from 0 to 228704
Mar 14 00:16:22.494179 kernel: loop6: detected capacity change from 0 to 142488
Mar 14 00:16:22.516233 kernel: loop7: detected capacity change from 0 to 140768
Mar 14 00:16:22.536643 (sd-merge)[1352]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 14 00:16:22.538910 (sd-merge)[1352]: Merged extensions into '/usr'.
Mar 14 00:16:22.544111 systemd[1]: Reloading requested from client PID 1339 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:16:22.544129 systemd[1]: Reloading...
Mar 14 00:16:22.633409 zram_generator::config[1380]: No configuration found.
Mar 14 00:16:22.690907 ldconfig[1335]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:16:22.752963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:16:22.812061 systemd[1]: Reloading finished in 267 ms.
Mar 14 00:16:22.830199 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:16:22.835217 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:16:22.844301 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:16:22.853255 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:16:22.858728 systemd[1]: Reloading requested from client PID 1430 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:16:22.858862 systemd[1]: Reloading...
Mar 14 00:16:22.875397 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:16:22.875799 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:16:22.879444 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:16:22.880087 systemd-tmpfiles[1431]: ACLs are not supported, ignoring.
Mar 14 00:16:22.880285 systemd-tmpfiles[1431]: ACLs are not supported, ignoring.
Mar 14 00:16:22.886589 systemd-tmpfiles[1431]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:16:22.886737 systemd-tmpfiles[1431]: Skipping /boot
Mar 14 00:16:22.903285 systemd-tmpfiles[1431]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:16:22.905031 systemd-tmpfiles[1431]: Skipping /boot
Mar 14 00:16:22.952175 zram_generator::config[1473]: No configuration found.
Mar 14 00:16:23.059707 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:16:23.137388 systemd[1]: Reloading finished in 277 ms.
Mar 14 00:16:23.157118 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:16:23.175339 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:16:23.186322 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 00:16:23.195019 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:16:23.207422 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:16:23.220367 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:16:23.230919 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:16:23.231114 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:16:23.236408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:16:23.239493 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:16:23.254762 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:16:23.258680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:16:23.260918 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:16:23.269368 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:16:23.269619 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:16:23.269864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:16:23.269996 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:16:23.277452 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:16:23.277682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:16:23.294684 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:16:23.296305 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:16:23.296425 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:16:23.302472 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 00:16:23.311738 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:16:23.312014 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:16:23.313502 augenrules[1540]: No rules
Mar 14 00:16:23.313935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:16:23.314290 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:16:23.317646 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:16:23.319728 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:16:23.319956 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:16:23.324329 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 14 00:16:23.328432 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:16:23.328635 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:16:23.340613 systemd[1]: Finished ensure-sysext.service.
Mar 14 00:16:23.355173 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:16:23.355272 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:16:23.367350 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 14 00:16:23.379987 systemd-resolved[1515]: Positive Trust Anchors:
Mar 14 00:16:23.380334 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 14 00:16:23.380627 systemd-resolved[1515]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:16:23.380681 systemd-resolved[1515]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:16:23.392420 systemd-resolved[1515]: Using system hostname 'ci-4081-3-6-n-968d08e397'.
Mar 14 00:16:23.397871 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 14 00:16:23.399688 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:16:23.401118 systemd[1]: Reached target network.target - Network.
Mar 14 00:16:23.404196 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:16:23.408474 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 00:16:23.414307 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 00:16:23.448403 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 14 00:16:23.449418 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:16:23.449914 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 14 00:16:23.451906 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 14 00:16:23.452424 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 14 00:16:23.452783 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 00:16:23.452803 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:16:23.453128 systemd[1]: Reached target time-set.target - System Time Set.
Mar 14 00:16:23.453670 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 14 00:16:23.454094 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 14 00:16:23.456503 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:16:23.458633 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 14 00:16:23.464436 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 14 00:16:23.467301 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 14 00:16:23.470428 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 14 00:16:23.470927 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:16:23.472055 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:16:23.472694 systemd[1]: System is tainted: cgroupsv1
Mar 14 00:16:23.472726 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:16:23.472749 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:16:23.475245 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 14 00:16:23.477385 systemd-timesyncd[1558]: Contacted time server 46.224.156.215:123 (0.flatcar.pool.ntp.org).
Mar 14 00:16:23.477449 systemd-timesyncd[1558]: Initial clock synchronization to Sat 2026-03-14 00:16:23.595854 UTC.
Mar 14 00:16:23.480287 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 14 00:16:23.485348 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 14 00:16:23.489590 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 14 00:16:23.502412 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 14 00:16:23.509278 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 14 00:16:23.518412 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 14 00:16:23.521722 jq[1571]: false
Mar 14 00:16:23.534266 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 14 00:16:23.541340 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 14 00:16:23.553373 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 14 00:16:23.555273 extend-filesystems[1572]: Found loop4
Mar 14 00:16:23.564675 extend-filesystems[1572]: Found loop5
Mar 14 00:16:23.564675 extend-filesystems[1572]: Found loop6
Mar 14 00:16:23.564675 extend-filesystems[1572]: Found loop7
Mar 14 00:16:23.564675 extend-filesystems[1572]: Found sda
Mar 14 00:16:23.564675 extend-filesystems[1572]: Found sda1
Mar 14 00:16:23.564675 extend-filesystems[1572]: Found sda2
Mar 14 00:16:23.564675 extend-filesystems[1572]: Found sda3
Mar 14 00:16:23.564675 extend-filesystems[1572]: Found usr
Mar 14 00:16:23.564675 extend-filesystems[1572]: Found sda4
Mar 14 00:16:23.564675 extend-filesystems[1572]: Found sda6
Mar 14 00:16:23.564675 extend-filesystems[1572]: Found sda7
Mar 14 00:16:23.564675 extend-filesystems[1572]: Found sda9
Mar 14 00:16:23.564675 extend-filesystems[1572]: Checking size of /dev/sda9
Mar 14 00:16:23.567431 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 14 00:16:23.602877 dbus-daemon[1569]: [system] SELinux support is enabled
Mar 14 00:16:23.596331 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 14 00:16:23.610692 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 14 00:16:23.613735 coreos-metadata[1568]: Mar 14 00:16:23.612 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 14 00:16:23.628422 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks
Mar 14 00:16:23.628458 extend-filesystems[1572]: Resized partition /dev/sda9
Mar 14 00:16:23.622305 systemd[1]: Starting update-engine.service - Update Engine...
Mar 14 00:16:23.634693 coreos-metadata[1568]: Mar 14 00:16:23.614 INFO Fetch successful
Mar 14 00:16:23.634693 coreos-metadata[1568]: Mar 14 00:16:23.614 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 14 00:16:23.634693 coreos-metadata[1568]: Mar 14 00:16:23.616 INFO Fetch successful
Mar 14 00:16:23.634796 extend-filesystems[1597]: resize2fs 1.47.1 (20-May-2024)
Mar 14 00:16:23.640302 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 14 00:16:23.641597 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 14 00:16:23.660979 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 14 00:16:23.661290 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 14 00:16:23.666847 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 14 00:16:23.667123 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 00:16:23.686275 systemd[1]: motdgen.service: Deactivated successfully.
Mar 14 00:16:23.689479 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 14 00:16:23.710166 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1268)
Mar 14 00:16:23.710276 jq[1599]: true
Mar 14 00:16:23.713501 (ntainerd)[1609]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 14 00:16:23.753777 update_engine[1598]: I20260314 00:16:23.753360 1598 main.cc:92] Flatcar Update Engine starting
Mar 14 00:16:23.759450 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 14 00:16:23.759485 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 14 00:16:23.763408 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 14 00:16:23.763437 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 14 00:16:23.768395 systemd[1]: Started update-engine.service - Update Engine.
Mar 14 00:16:23.769831 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 14 00:16:23.775326 update_engine[1598]: I20260314 00:16:23.775091 1598 update_check_scheduler.cc:74] Next update check in 9m10s
Mar 14 00:16:23.782372 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 14 00:16:23.793242 tar[1606]: linux-amd64/LICENSE
Mar 14 00:16:23.793242 tar[1606]: linux-amd64/helm
Mar 14 00:16:23.811170 jq[1617]: true
Mar 14 00:16:23.830551 systemd-networkd[1258]: eth0: Gained IPv6LL
Mar 14 00:16:23.831092 systemd-networkd[1258]: eth1: Gained IPv6LL
Mar 14 00:16:23.851959 sshd_keygen[1610]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 14 00:16:23.855505 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 14 00:16:23.866581 systemd[1]: Reached target network-online.target - Network is Online.
Mar 14 00:16:23.876353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:23.888989 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 14 00:16:23.897127 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 14 00:16:23.899766 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 14 00:16:23.950038 systemd-logind[1588]: New seat seat0.
Mar 14 00:16:23.953875 systemd-logind[1588]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 14 00:16:23.953897 systemd-logind[1588]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 14 00:16:23.954173 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 14 00:16:23.997470 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 14 00:16:24.010916 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 14 00:16:24.040228 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 14 00:16:24.048710 systemd[1]: issuegen.service: Deactivated successfully.
Mar 14 00:16:24.049045 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 14 00:16:24.056486 locksmithd[1626]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 14 00:16:24.064580 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 14 00:16:24.086991 bash[1675]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:16:24.090832 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 14 00:16:24.114181 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Mar 14 00:16:24.118563 systemd[1]: Starting sshkeys.service...
Mar 14 00:16:24.125405 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 14 00:16:24.140655 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 14 00:16:24.149647 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 14 00:16:24.152660 systemd[1]: Reached target getty.target - Login Prompts.
Mar 14 00:16:24.158238 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 14 00:16:24.167624 extend-filesystems[1597]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 14 00:16:24.167624 extend-filesystems[1597]: old_desc_blocks = 1, new_desc_blocks = 10
Mar 14 00:16:24.167624 extend-filesystems[1597]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Mar 14 00:16:24.177464 extend-filesystems[1572]: Resized filesystem in /dev/sda9
Mar 14 00:16:24.177464 extend-filesystems[1572]: Found sr0
Mar 14 00:16:24.184829 containerd[1609]: time="2026-03-14T00:16:24.167143585Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 14 00:16:24.168106 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 14 00:16:24.180910 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 14 00:16:24.184741 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 14 00:16:24.205376 coreos-metadata[1700]: Mar 14 00:16:24.205 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Mar 14 00:16:24.206402 coreos-metadata[1700]: Mar 14 00:16:24.206 INFO Fetch successful
Mar 14 00:16:24.210359 unknown[1700]: wrote ssh authorized keys file for user: core
Mar 14 00:16:24.242863 containerd[1609]: time="2026-03-14T00:16:24.242561884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:16:24.245454 containerd[1609]: time="2026-03-14T00:16:24.245413206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:16:24.245665 containerd[1609]: time="2026-03-14T00:16:24.245652140Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 14 00:16:24.245733 containerd[1609]: time="2026-03-14T00:16:24.245722409Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 14 00:16:24.249198 update-ssh-keys[1709]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:16:24.250251 containerd[1609]: time="2026-03-14T00:16:24.249654435Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 14 00:16:24.250087 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 14 00:16:24.252221 containerd[1609]: time="2026-03-14T00:16:24.252190213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 14 00:16:24.252425 containerd[1609]: time="2026-03-14T00:16:24.252405333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:16:24.252556 containerd[1609]: time="2026-03-14T00:16:24.252538740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:16:24.257387 containerd[1609]: time="2026-03-14T00:16:24.252846276Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:16:24.257387 containerd[1609]: time="2026-03-14T00:16:24.252860707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 14 00:16:24.257387 containerd[1609]: time="2026-03-14T00:16:24.252872579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:16:24.257387 containerd[1609]: time="2026-03-14T00:16:24.252880578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 14 00:16:24.257387 containerd[1609]: time="2026-03-14T00:16:24.252956600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:16:24.258029 systemd[1]: Finished sshkeys.service.
Mar 14 00:16:24.265270 containerd[1609]: time="2026-03-14T00:16:24.263292054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:16:24.270278 containerd[1609]: time="2026-03-14T00:16:24.267872499Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:16:24.270278 containerd[1609]: time="2026-03-14T00:16:24.267897238Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 14 00:16:24.270278 containerd[1609]: time="2026-03-14T00:16:24.268033093Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 14 00:16:24.270278 containerd[1609]: time="2026-03-14T00:16:24.268076532Z" level=info msg="metadata content store policy set" policy=shared
Mar 14 00:16:24.282998 containerd[1609]: time="2026-03-14T00:16:24.282922608Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 14 00:16:24.284079 containerd[1609]: time="2026-03-14T00:16:24.284050275Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 14 00:16:24.284685 containerd[1609]: time="2026-03-14T00:16:24.284258017Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 14 00:16:24.284685 containerd[1609]: time="2026-03-14T00:16:24.284285825Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 14 00:16:24.284685 containerd[1609]: time="2026-03-14T00:16:24.284305786Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 14 00:16:24.284685 containerd[1609]: time="2026-03-14T00:16:24.284571429Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 14 00:16:24.285202 containerd[1609]: time="2026-03-14T00:16:24.285182925Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 14 00:16:24.287007 containerd[1609]: time="2026-03-14T00:16:24.286355200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 14 00:16:24.287082 containerd[1609]: time="2026-03-14T00:16:24.287069540Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 14 00:16:24.287135 containerd[1609]: time="2026-03-14T00:16:24.287126151Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 14 00:16:24.288565 containerd[1609]: time="2026-03-14T00:16:24.288547319Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 14 00:16:24.288643 containerd[1609]: time="2026-03-14T00:16:24.288633486Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 14 00:16:24.288678 containerd[1609]: time="2026-03-14T00:16:24.288670440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 14 00:16:24.288764 containerd[1609]: time="2026-03-14T00:16:24.288747520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 14 00:16:24.289104 containerd[1609]: time="2026-03-14T00:16:24.288840851Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 14 00:16:24.289224 containerd[1609]: time="2026-03-14T00:16:24.289206909Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 14 00:16:24.289325 containerd[1609]: time="2026-03-14T00:16:24.289312721Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 14 00:16:24.290181 containerd[1609]: time="2026-03-14T00:16:24.290164583Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 14 00:16:24.290276 containerd[1609]: time="2026-03-14T00:16:24.290265354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290332585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290348329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290370018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290380954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290392550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290418457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290429901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290440095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290452291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290464172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290479092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290508719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290522774Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290547391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.290655 containerd[1609]: time="2026-03-14T00:16:24.290556823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.291008 containerd[1609]: time="2026-03-14T00:16:24.290579863Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 14 00:16:24.291008 containerd[1609]: time="2026-03-14T00:16:24.290630954Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 14 00:16:24.291253 containerd[1609]: time="2026-03-14T00:16:24.291086359Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 14 00:16:24.291253 containerd[1609]: time="2026-03-14T00:16:24.291119595Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 14 00:16:24.291253 containerd[1609]: time="2026-03-14T00:16:24.291171997Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 14 00:16:24.291253 containerd[1609]: time="2026-03-14T00:16:24.291185209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.291253 containerd[1609]: time="2026-03-14T00:16:24.291206269Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 14 00:16:24.291253 containerd[1609]: time="2026-03-14T00:16:24.291223201Z" level=info msg="NRI interface is disabled by configuration."
Mar 14 00:16:24.292169 containerd[1609]: time="2026-03-14T00:16:24.291234563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 14 00:16:24.292540 containerd[1609]: time="2026-03-14T00:16:24.292496268Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 14 00:16:24.293419 containerd[1609]: time="2026-03-14T00:16:24.293398917Z" level=info msg="Connect containerd service"
Mar 14 00:16:24.293547 containerd[1609]: time="2026-03-14T00:16:24.293531022Z" level=info msg="using legacy CRI server"
Mar 14 00:16:24.293611 containerd[1609]: time="2026-03-14T00:16:24.293600367Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 14 00:16:24.294113 containerd[1609]: time="2026-03-14T00:16:24.293796481Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 14 00:16:24.295487 containerd[1609]: time="2026-03-14T00:16:24.295459014Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:16:24.297455 containerd[1609]: time="2026-03-14T00:16:24.296676577Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 14 00:16:24.297455 containerd[1609]: time="2026-03-14T00:16:24.296729052Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 14 00:16:24.297455 containerd[1609]: time="2026-03-14T00:16:24.296761056Z" level=info msg="Start subscribing containerd event"
Mar 14 00:16:24.297455 containerd[1609]: time="2026-03-14T00:16:24.296797502Z" level=info msg="Start recovering state"
Mar 14 00:16:24.297455 containerd[1609]: time="2026-03-14T00:16:24.296857091Z" level=info msg="Start event monitor"
Mar 14 00:16:24.297455 containerd[1609]: time="2026-03-14T00:16:24.296879187Z" level=info msg="Start snapshots syncer"
Mar 14 00:16:24.297455 containerd[1609]: time="2026-03-14T00:16:24.296896383Z" level=info msg="Start cni network conf syncer for default"
Mar 14 00:16:24.297455 containerd[1609]: time="2026-03-14T00:16:24.296912421Z" level=info msg="Start streaming server"
Mar 14 00:16:24.297455 containerd[1609]: time="2026-03-14T00:16:24.296984459Z" level=info msg="containerd successfully booted in 0.162218s"
Mar 14 00:16:24.297288 systemd[1]: Started containerd.service - containerd container runtime.
Mar 14 00:16:24.600440 tar[1606]: linux-amd64/README.md
Mar 14 00:16:24.614202 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 14 00:16:24.906360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:24.909627 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 14 00:16:24.910468 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:16:24.912687 systemd[1]: Startup finished in 7.705s (kernel) + 4.970s (userspace) = 12.675s.
Mar 14 00:16:25.367554 kubelet[1730]: E0314 00:16:25.367485 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:16:25.370963 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:16:25.371287 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:16:28.289084 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 14 00:16:28.296597 systemd[1]: Started sshd@0-204.168.138.0:22-68.220.241.50:57664.service - OpenSSH per-connection server daemon (68.220.241.50:57664).
Mar 14 00:16:29.073062 sshd[1742]: Accepted publickey for core from 68.220.241.50 port 57664 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:16:29.077559 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:29.089070 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 14 00:16:29.095693 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 14 00:16:29.099641 systemd-logind[1588]: New session 1 of user core.
Mar 14 00:16:29.112451 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 14 00:16:29.122995 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 14 00:16:29.139908 (systemd)[1748]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 14 00:16:29.254106 systemd[1748]: Queued start job for default target default.target.
Mar 14 00:16:29.254472 systemd[1748]: Created slice app.slice - User Application Slice.
Mar 14 00:16:29.254492 systemd[1748]: Reached target paths.target - Paths.
Mar 14 00:16:29.254503 systemd[1748]: Reached target timers.target - Timers.
Mar 14 00:16:29.266261 systemd[1748]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 14 00:16:29.273141 systemd[1748]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 14 00:16:29.273223 systemd[1748]: Reached target sockets.target - Sockets.
Mar 14 00:16:29.273236 systemd[1748]: Reached target basic.target - Basic System.
Mar 14 00:16:29.273373 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 14 00:16:29.274764 systemd[1748]: Reached target default.target - Main User Target.
Mar 14 00:16:29.274810 systemd[1748]: Startup finished in 122ms.
Mar 14 00:16:29.278678 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 14 00:16:29.811488 systemd[1]: Started sshd@1-204.168.138.0:22-68.220.241.50:57678.service - OpenSSH per-connection server daemon (68.220.241.50:57678).
Mar 14 00:16:30.572198 sshd[1760]: Accepted publickey for core from 68.220.241.50 port 57678 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:16:30.574193 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:30.582969 systemd-logind[1588]: New session 2 of user core.
Mar 14 00:16:30.588591 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 14 00:16:31.103872 sshd[1760]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:31.110977 systemd-logind[1588]: Session 2 logged out. Waiting for processes to exit.
Mar 14 00:16:31.113081 systemd[1]: sshd@1-204.168.138.0:22-68.220.241.50:57678.service: Deactivated successfully.
Mar 14 00:16:31.118890 systemd[1]: session-2.scope: Deactivated successfully.
Mar 14 00:16:31.120722 systemd-logind[1588]: Removed session 2.
Mar 14 00:16:31.233656 systemd[1]: Started sshd@2-204.168.138.0:22-68.220.241.50:57684.service - OpenSSH per-connection server daemon (68.220.241.50:57684).
Mar 14 00:16:31.986233 sshd[1768]: Accepted publickey for core from 68.220.241.50 port 57684 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:16:31.988959 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:31.996668 systemd-logind[1588]: New session 3 of user core.
Mar 14 00:16:32.012739 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 14 00:16:32.506987 sshd[1768]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:32.511090 systemd[1]: sshd@2-204.168.138.0:22-68.220.241.50:57684.service: Deactivated successfully.
Mar 14 00:16:32.517577 systemd[1]: session-3.scope: Deactivated successfully.
Mar 14 00:16:32.517630 systemd-logind[1588]: Session 3 logged out. Waiting for processes to exit.
Mar 14 00:16:32.519470 systemd-logind[1588]: Removed session 3.
Mar 14 00:16:32.635736 systemd[1]: Started sshd@3-204.168.138.0:22-68.220.241.50:46590.service - OpenSSH per-connection server daemon (68.220.241.50:46590).
Mar 14 00:16:33.385456 sshd[1776]: Accepted publickey for core from 68.220.241.50 port 46590 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:16:33.388104 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:33.394710 systemd-logind[1588]: New session 4 of user core.
Mar 14 00:16:33.404433 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 14 00:16:33.913819 sshd[1776]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:33.917759 systemd[1]: sshd@3-204.168.138.0:22-68.220.241.50:46590.service: Deactivated successfully.
Mar 14 00:16:33.921983 systemd-logind[1588]: Session 4 logged out. Waiting for processes to exit.
Mar 14 00:16:33.922714 systemd[1]: session-4.scope: Deactivated successfully.
Mar 14 00:16:33.924201 systemd-logind[1588]: Removed session 4.
Mar 14 00:16:34.040515 systemd[1]: Started sshd@4-204.168.138.0:22-68.220.241.50:46600.service - OpenSSH per-connection server daemon (68.220.241.50:46600).
Mar 14 00:16:34.805932 sshd[1784]: Accepted publickey for core from 68.220.241.50 port 46600 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:16:34.808864 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:34.818297 systemd-logind[1588]: New session 5 of user core.
Mar 14 00:16:34.824434 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 14 00:16:35.231265 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 14 00:16:35.231988 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:16:35.252904 sudo[1788]: pam_unix(sudo:session): session closed for user root
Mar 14 00:16:35.375784 sshd[1784]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:35.384824 systemd[1]: sshd@4-204.168.138.0:22-68.220.241.50:46600.service: Deactivated successfully.
Mar 14 00:16:35.390426 systemd[1]: session-5.scope: Deactivated successfully.
Mar 14 00:16:35.392068 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:16:35.393063 systemd-logind[1588]: Session 5 logged out. Waiting for processes to exit.
Mar 14 00:16:35.403789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:35.405235 systemd-logind[1588]: Removed session 5.
Mar 14 00:16:35.508710 systemd[1]: Started sshd@5-204.168.138.0:22-68.220.241.50:46604.service - OpenSSH per-connection server daemon (68.220.241.50:46604).
Mar 14 00:16:35.595322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:35.614930 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:16:35.662936 kubelet[1807]: E0314 00:16:35.662883 1807 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:16:35.667845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:16:35.668174 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:16:36.286222 sshd[1797]: Accepted publickey for core from 68.220.241.50 port 46604 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:16:36.288295 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:36.296246 systemd-logind[1588]: New session 6 of user core.
Mar 14 00:16:36.308333 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 14 00:16:36.699580 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 14 00:16:36.700357 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:16:36.707426 sudo[1818]: pam_unix(sudo:session): session closed for user root
Mar 14 00:16:36.719692 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 14 00:16:36.720423 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:16:36.743559 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 14 00:16:36.753991 auditctl[1821]: No rules
Mar 14 00:16:36.756132 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 14 00:16:36.756816 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 14 00:16:36.769205 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:16:36.817035 augenrules[1840]: No rules
Mar 14 00:16:36.820722 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:16:36.824540 sudo[1817]: pam_unix(sudo:session): session closed for user root
Mar 14 00:16:36.945265 sshd[1797]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:36.950611 systemd[1]: sshd@5-204.168.138.0:22-68.220.241.50:46604.service: Deactivated successfully.
Mar 14 00:16:36.955970 systemd[1]: session-6.scope: Deactivated successfully.
Mar 14 00:16:36.958620 systemd-logind[1588]: Session 6 logged out. Waiting for processes to exit.
Mar 14 00:16:36.959747 systemd-logind[1588]: Removed session 6.
Mar 14 00:16:37.074069 systemd[1]: Started sshd@6-204.168.138.0:22-68.220.241.50:46614.service - OpenSSH per-connection server daemon (68.220.241.50:46614).
Mar 14 00:16:37.816211 sshd[1849]: Accepted publickey for core from 68.220.241.50 port 46614 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:16:37.818341 sshd[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:37.825062 systemd-logind[1588]: New session 7 of user core.
Mar 14 00:16:37.827441 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 14 00:16:38.221878 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 14 00:16:38.222254 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:16:38.558596 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 14 00:16:38.560102 (dockerd)[1870]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 14 00:16:38.825343 dockerd[1870]: time="2026-03-14T00:16:38.823540415Z" level=info msg="Starting up"
Mar 14 00:16:38.904595 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport78650780-merged.mount: Deactivated successfully.
Mar 14 00:16:38.968811 dockerd[1870]: time="2026-03-14T00:16:38.968543922Z" level=info msg="Loading containers: start."
Mar 14 00:16:39.085442 kernel: Initializing XFRM netlink socket
Mar 14 00:16:39.179641 systemd-networkd[1258]: docker0: Link UP
Mar 14 00:16:39.200730 dockerd[1870]: time="2026-03-14T00:16:39.200663404Z" level=info msg="Loading containers: done."
Mar 14 00:16:39.218474 dockerd[1870]: time="2026-03-14T00:16:39.218413929Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:16:39.218692 dockerd[1870]: time="2026-03-14T00:16:39.218534893Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:16:39.218692 dockerd[1870]: time="2026-03-14T00:16:39.218669402Z" level=info msg="Daemon has completed initialization"
Mar 14 00:16:39.256994 dockerd[1870]: time="2026-03-14T00:16:39.256907725Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:16:39.257681 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:16:39.752172 containerd[1609]: time="2026-03-14T00:16:39.751731608Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 14 00:16:40.381333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1959870201.mount: Deactivated successfully.
Mar 14 00:16:41.594718 containerd[1609]: time="2026-03-14T00:16:41.594619706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:41.596175 containerd[1609]: time="2026-03-14T00:16:41.596039964Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116286"
Mar 14 00:16:41.597087 containerd[1609]: time="2026-03-14T00:16:41.597049109Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:41.601637 containerd[1609]: time="2026-03-14T00:16:41.600439230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:41.601637 containerd[1609]: time="2026-03-14T00:16:41.601099318Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 1.84930372s"
Mar 14 00:16:41.601637 containerd[1609]: time="2026-03-14T00:16:41.601127734Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 14 00:16:41.602129 containerd[1609]: time="2026-03-14T00:16:41.602053416Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 14 00:16:42.831779 containerd[1609]: time="2026-03-14T00:16:42.831715879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:42.833013 containerd[1609]: time="2026-03-14T00:16:42.832958362Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021832"
Mar 14 00:16:42.833801 containerd[1609]: time="2026-03-14T00:16:42.833759564Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:42.837179 containerd[1609]: time="2026-03-14T00:16:42.836820697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:42.838494 containerd[1609]: time="2026-03-14T00:16:42.837976726Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.235896981s"
Mar 14 00:16:42.838494 containerd[1609]: time="2026-03-14T00:16:42.838010882Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 14 00:16:42.838836 containerd[1609]: time="2026-03-14T00:16:42.838801244Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 14 00:16:43.954902 containerd[1609]: time="2026-03-14T00:16:43.954820151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:43.956068 containerd[1609]: time="2026-03-14T00:16:43.956015833Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162768"
Mar 14 00:16:43.957167 containerd[1609]: time="2026-03-14T00:16:43.956764752Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:43.959612 containerd[1609]: time="2026-03-14T00:16:43.959579155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:43.961512 containerd[1609]: time="2026-03-14T00:16:43.960795081Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.121955248s"
Mar 14 00:16:43.961512 containerd[1609]: time="2026-03-14T00:16:43.960834104Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 14 00:16:43.961792 containerd[1609]: time="2026-03-14T00:16:43.961759775Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 14 00:16:45.005129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860304954.mount: Deactivated successfully.
Mar 14 00:16:45.355799 containerd[1609]: time="2026-03-14T00:16:45.355732894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:45.357318 containerd[1609]: time="2026-03-14T00:16:45.357253959Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828675"
Mar 14 00:16:45.359242 containerd[1609]: time="2026-03-14T00:16:45.358423419Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:45.361476 containerd[1609]: time="2026-03-14T00:16:45.361419678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:45.362078 containerd[1609]: time="2026-03-14T00:16:45.362050328Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.400252585s"
Mar 14 00:16:45.362109 containerd[1609]: time="2026-03-14T00:16:45.362081372Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 14 00:16:45.363314 containerd[1609]: time="2026-03-14T00:16:45.363290840Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 14 00:16:45.839990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 14 00:16:45.851445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:45.863774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2041597629.mount: Deactivated successfully.
Mar 14 00:16:46.037332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:46.048754 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:16:46.104910 kubelet[2104]: E0314 00:16:46.104810 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:16:46.107856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:16:46.108071 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:16:46.737040 containerd[1609]: time="2026-03-14T00:16:46.736942226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:46.740220 containerd[1609]: time="2026-03-14T00:16:46.740120504Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332"
Mar 14 00:16:46.746180 containerd[1609]: time="2026-03-14T00:16:46.745275316Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:46.751397 containerd[1609]: time="2026-03-14T00:16:46.751347944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:46.752818 containerd[1609]: time="2026-03-14T00:16:46.752782821Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.389461409s"
Mar 14 00:16:46.752945 containerd[1609]: time="2026-03-14T00:16:46.752926048Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 14 00:16:46.753773 containerd[1609]: time="2026-03-14T00:16:46.753714890Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 14 00:16:47.235700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904246766.mount: Deactivated successfully.
Mar 14 00:16:47.244948 containerd[1609]: time="2026-03-14T00:16:47.243961907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:47.244948 containerd[1609]: time="2026-03-14T00:16:47.244903235Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Mar 14 00:16:47.245883 containerd[1609]: time="2026-03-14T00:16:47.245857571Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:47.248283 containerd[1609]: time="2026-03-14T00:16:47.248240500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:47.248980 containerd[1609]: time="2026-03-14T00:16:47.248944949Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 495.051423ms"
Mar 14 00:16:47.248980 containerd[1609]: time="2026-03-14T00:16:47.248978894Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 14 00:16:47.249652 containerd[1609]: time="2026-03-14T00:16:47.249609131Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 14 00:16:47.779820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216979919.mount: Deactivated successfully.
Mar 14 00:16:48.590539 containerd[1609]: time="2026-03-14T00:16:48.590488889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:48.593170 containerd[1609]: time="2026-03-14T00:16:48.593050929Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718940"
Mar 14 00:16:48.594423 containerd[1609]: time="2026-03-14T00:16:48.594393349Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:48.597262 containerd[1609]: time="2026-03-14T00:16:48.597206427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:48.598461 containerd[1609]: time="2026-03-14T00:16:48.598242843Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.348603547s"
Mar 14 00:16:48.598461 containerd[1609]: time="2026-03-14T00:16:48.598270170Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 14 00:16:51.533271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:51.539325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:51.572743 systemd[1]: Reloading requested from client PID 2250 ('systemctl') (unit session-7.scope)...
Mar 14 00:16:51.572891 systemd[1]: Reloading...
Mar 14 00:16:51.690237 zram_generator::config[2291]: No configuration found.
Mar 14 00:16:51.813769 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:16:51.886020 systemd[1]: Reloading finished in 312 ms.
Mar 14 00:16:51.945580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:51.947102 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:16:51.956816 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:51.958552 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:16:51.958869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:51.970661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:52.158387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:52.168806 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:16:52.215200 kubelet[2364]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:16:52.215200 kubelet[2364]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:16:52.215200 kubelet[2364]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:16:52.215200 kubelet[2364]: I0314 00:16:52.214963 2364 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:16:52.822565 kubelet[2364]: I0314 00:16:52.822495 2364 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 14 00:16:52.822565 kubelet[2364]: I0314 00:16:52.822542 2364 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:16:52.822863 kubelet[2364]: I0314 00:16:52.822841 2364 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:16:52.855176 kubelet[2364]: E0314 00:16:52.855007 2364 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://204.168.138.0:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 204.168.138.0:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:16:52.857662 kubelet[2364]: I0314 00:16:52.857296 2364 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:16:52.867521 kubelet[2364]: E0314 00:16:52.867442 2364 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:16:52.867521 kubelet[2364]: I0314 00:16:52.867504 2364 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:16:52.871890 kubelet[2364]: I0314 00:16:52.871826 2364 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 14 00:16:52.873194 kubelet[2364]: I0314 00:16:52.873114 2364 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:16:52.873402 kubelet[2364]: I0314 00:16:52.873179 2364 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-968d08e397","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 14 00:16:52.873402 kubelet[2364]: I0314 00:16:52.873403 2364 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:16:52.873536 kubelet[2364]: I0314 00:16:52.873414 2364 container_manager_linux.go:303] "Creating device plugin manager"
Mar 14 00:16:52.873611 kubelet[2364]: I0314 00:16:52.873588 2364 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:16:52.878211 kubelet[2364]: I0314 00:16:52.878176 2364 kubelet.go:480] "Attempting to sync node with API server"
Mar 14 00:16:52.878211 kubelet[2364]: I0314 00:16:52.878205 2364 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:16:52.878365 kubelet[2364]: I0314 00:16:52.878246 2364 kubelet.go:386] "Adding apiserver pod source"
Mar 14 00:16:52.879943 kubelet[2364]: I0314 00:16:52.879728 2364 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:16:52.884677 kubelet[2364]: I0314 00:16:52.884068 2364 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:16:52.884826 kubelet[2364]: I0314 00:16:52.884813 2364 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:16:52.885925 kubelet[2364]: W0314 00:16:52.885910 2364 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 00:16:52.890075 kubelet[2364]: I0314 00:16:52.890059 2364 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:16:52.890204 kubelet[2364]: I0314 00:16:52.890196 2364 server.go:1289] "Started kubelet" Mar 14 00:16:52.890405 kubelet[2364]: E0314 00:16:52.890387 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://204.168.138.0:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-968d08e397&limit=500&resourceVersion=0\": dial tcp 204.168.138.0:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:16:52.890680 kubelet[2364]: E0314 00:16:52.890643 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://204.168.138.0:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 204.168.138.0:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:16:52.891658 kubelet[2364]: I0314 00:16:52.891282 2364 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:16:52.891658 kubelet[2364]: I0314 00:16:52.891607 2364 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:16:52.892017 kubelet[2364]: I0314 00:16:52.892005 2364 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:16:52.892234 kubelet[2364]: I0314 00:16:52.892211 2364 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:16:52.895480 kubelet[2364]: I0314 00:16:52.893572 2364 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:16:52.895480 kubelet[2364]: I0314 00:16:52.895397 2364 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 
00:16:52.896001 kubelet[2364]: I0314 00:16:52.895975 2364 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:16:52.896566 kubelet[2364]: E0314 00:16:52.896156 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-968d08e397\" not found" Mar 14 00:16:52.897968 kubelet[2364]: E0314 00:16:52.897934 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.138.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-968d08e397?timeout=10s\": dial tcp 204.168.138.0:6443: connect: connection refused" interval="200ms" Mar 14 00:16:52.899080 kubelet[2364]: I0314 00:16:52.899053 2364 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:16:52.899167 kubelet[2364]: I0314 00:16:52.899132 2364 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:16:52.901509 kubelet[2364]: I0314 00:16:52.901332 2364 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:16:52.904627 kubelet[2364]: I0314 00:16:52.904605 2364 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:16:52.913735 kubelet[2364]: I0314 00:16:52.913704 2364 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:16:52.914250 kubelet[2364]: E0314 00:16:52.914041 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://204.168.138.0:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 204.168.138.0:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:16:52.916200 kubelet[2364]: E0314 00:16:52.914731 2364 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://204.168.138.0:6443/api/v1/namespaces/default/events\": dial tcp 204.168.138.0:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-968d08e397.189c8d06a1277b9b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-968d08e397,UID:ci-4081-3-6-n-968d08e397,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-968d08e397,},FirstTimestamp:2026-03-14 00:16:52.890172315 +0000 UTC m=+0.716412102,LastTimestamp:2026-03-14 00:16:52.890172315 +0000 UTC m=+0.716412102,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-968d08e397,}" Mar 14 00:16:52.933924 kubelet[2364]: E0314 00:16:52.933848 2364 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:16:52.933924 kubelet[2364]: I0314 00:16:52.933904 2364 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:16:52.935091 kubelet[2364]: I0314 00:16:52.935024 2364 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 14 00:16:52.935091 kubelet[2364]: I0314 00:16:52.935038 2364 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:16:52.935091 kubelet[2364]: I0314 00:16:52.935057 2364 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:16:52.935091 kubelet[2364]: I0314 00:16:52.935064 2364 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:16:52.935231 kubelet[2364]: E0314 00:16:52.935100 2364 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:16:52.940130 kubelet[2364]: E0314 00:16:52.939463 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://204.168.138.0:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 204.168.138.0:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:16:52.945555 kubelet[2364]: I0314 00:16:52.945530 2364 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:16:52.945696 kubelet[2364]: I0314 00:16:52.945683 2364 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:16:52.945757 kubelet[2364]: I0314 00:16:52.945749 2364 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:16:52.948423 kubelet[2364]: I0314 00:16:52.948401 2364 policy_none.go:49] "None policy: Start" Mar 14 00:16:52.948543 kubelet[2364]: I0314 00:16:52.948531 2364 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:16:52.948602 kubelet[2364]: I0314 00:16:52.948594 2364 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:16:52.954101 kubelet[2364]: E0314 00:16:52.954070 2364 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:16:52.954291 kubelet[2364]: I0314 00:16:52.954276 2364 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:16:52.954326 kubelet[2364]: I0314 00:16:52.954291 2364 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:16:52.955894 kubelet[2364]: I0314 
00:16:52.955862 2364 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:16:52.956670 kubelet[2364]: E0314 00:16:52.956640 2364 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:16:52.956670 kubelet[2364]: E0314 00:16:52.956681 2364 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-968d08e397\" not found" Mar 14 00:16:53.046472 kubelet[2364]: E0314 00:16:53.044798 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-968d08e397\" not found" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.047291 kubelet[2364]: E0314 00:16:53.047263 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-968d08e397\" not found" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.052449 kubelet[2364]: E0314 00:16:53.052204 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-968d08e397\" not found" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.056530 kubelet[2364]: I0314 00:16:53.056484 2364 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.056889 kubelet[2364]: E0314 00:16:53.056853 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.138.0:6443/api/v1/nodes\": dial tcp 204.168.138.0:6443: connect: connection refused" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.099049 kubelet[2364]: E0314 00:16:53.098958 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.138.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-968d08e397?timeout=10s\": dial tcp 204.168.138.0:6443: connect: 
connection refused" interval="400ms" Mar 14 00:16:53.114770 kubelet[2364]: I0314 00:16:53.114666 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21bb3c4b8581028b0cbca34028c3f070-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-968d08e397\" (UID: \"21bb3c4b8581028b0cbca34028c3f070\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.114770 kubelet[2364]: I0314 00:16:53.114724 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21bb3c4b8581028b0cbca34028c3f070-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-968d08e397\" (UID: \"21bb3c4b8581028b0cbca34028c3f070\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.114770 kubelet[2364]: I0314 00:16:53.114752 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f478f82dd90d2e9887486ed4acb60ec-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-968d08e397\" (UID: \"1f478f82dd90d2e9887486ed4acb60ec\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.114770 kubelet[2364]: I0314 00:16:53.114771 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f00d05552da7733546736de8d1d7986-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-968d08e397\" (UID: \"7f00d05552da7733546736de8d1d7986\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.114770 kubelet[2364]: I0314 00:16:53.114791 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21bb3c4b8581028b0cbca34028c3f070-k8s-certs\") pod 
\"kube-apiserver-ci-4081-3-6-n-968d08e397\" (UID: \"21bb3c4b8581028b0cbca34028c3f070\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.115268 kubelet[2364]: I0314 00:16:53.114809 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f478f82dd90d2e9887486ed4acb60ec-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-968d08e397\" (UID: \"1f478f82dd90d2e9887486ed4acb60ec\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.115268 kubelet[2364]: I0314 00:16:53.114826 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f478f82dd90d2e9887486ed4acb60ec-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-968d08e397\" (UID: \"1f478f82dd90d2e9887486ed4acb60ec\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.115268 kubelet[2364]: I0314 00:16:53.114841 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f478f82dd90d2e9887486ed4acb60ec-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-968d08e397\" (UID: \"1f478f82dd90d2e9887486ed4acb60ec\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.115268 kubelet[2364]: I0314 00:16:53.114861 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f478f82dd90d2e9887486ed4acb60ec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-968d08e397\" (UID: \"1f478f82dd90d2e9887486ed4acb60ec\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.260031 kubelet[2364]: I0314 00:16:53.259980 2364 kubelet_node_status.go:75] 
"Attempting to register node" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.260729 kubelet[2364]: E0314 00:16:53.260394 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.138.0:6443/api/v1/nodes\": dial tcp 204.168.138.0:6443: connect: connection refused" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.346633 containerd[1609]: time="2026-03-14T00:16:53.346569661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-968d08e397,Uid:7f00d05552da7733546736de8d1d7986,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:53.348021 containerd[1609]: time="2026-03-14T00:16:53.348003563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-968d08e397,Uid:21bb3c4b8581028b0cbca34028c3f070,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:53.354823 containerd[1609]: time="2026-03-14T00:16:53.354261206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-968d08e397,Uid:1f478f82dd90d2e9887486ed4acb60ec,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:53.431089 kubelet[2364]: E0314 00:16:53.430958 2364 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://204.168.138.0:6443/api/v1/namespaces/default/events\": dial tcp 204.168.138.0:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-968d08e397.189c8d06a1277b9b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-968d08e397,UID:ci-4081-3-6-n-968d08e397,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-968d08e397,},FirstTimestamp:2026-03-14 00:16:52.890172315 +0000 UTC m=+0.716412102,LastTimestamp:2026-03-14 00:16:52.890172315 +0000 UTC m=+0.716412102,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-968d08e397,}" Mar 14 00:16:53.500238 kubelet[2364]: E0314 00:16:53.500108 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.138.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-968d08e397?timeout=10s\": dial tcp 204.168.138.0:6443: connect: connection refused" interval="800ms" Mar 14 00:16:53.663645 kubelet[2364]: I0314 00:16:53.663523 2364 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.664128 kubelet[2364]: E0314 00:16:53.664061 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.138.0:6443/api/v1/nodes\": dial tcp 204.168.138.0:6443: connect: connection refused" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:53.818013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2835112929.mount: Deactivated successfully. 
Mar 14 00:16:53.825819 containerd[1609]: time="2026-03-14T00:16:53.825768997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:16:53.826659 containerd[1609]: time="2026-03-14T00:16:53.826620147Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:16:53.827549 containerd[1609]: time="2026-03-14T00:16:53.827497272Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Mar 14 00:16:53.828216 containerd[1609]: time="2026-03-14T00:16:53.828176671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:16:53.828888 containerd[1609]: time="2026-03-14T00:16:53.828845169Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:16:53.829866 containerd[1609]: time="2026-03-14T00:16:53.829683375Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:16:53.830780 containerd[1609]: time="2026-03-14T00:16:53.830732461Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:16:53.832575 containerd[1609]: time="2026-03-14T00:16:53.832499726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:16:53.834512 
containerd[1609]: time="2026-03-14T00:16:53.834464739Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.112748ms" Mar 14 00:16:53.835829 containerd[1609]: time="2026-03-14T00:16:53.835767303Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 489.094597ms" Mar 14 00:16:53.841764 containerd[1609]: time="2026-03-14T00:16:53.841677749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 493.53854ms" Mar 14 00:16:53.935786 containerd[1609]: time="2026-03-14T00:16:53.934857077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:53.935786 containerd[1609]: time="2026-03-14T00:16:53.934926893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:53.935786 containerd[1609]: time="2026-03-14T00:16:53.934935899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:53.935786 containerd[1609]: time="2026-03-14T00:16:53.935058831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:53.937879 containerd[1609]: time="2026-03-14T00:16:53.936962443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:53.937879 containerd[1609]: time="2026-03-14T00:16:53.937027150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:53.937879 containerd[1609]: time="2026-03-14T00:16:53.937038610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:53.937879 containerd[1609]: time="2026-03-14T00:16:53.937127151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:53.942510 containerd[1609]: time="2026-03-14T00:16:53.941099593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:53.942510 containerd[1609]: time="2026-03-14T00:16:53.941153700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:53.942510 containerd[1609]: time="2026-03-14T00:16:53.941180860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:53.942510 containerd[1609]: time="2026-03-14T00:16:53.941260503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:54.020766 containerd[1609]: time="2026-03-14T00:16:54.020700509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-968d08e397,Uid:21bb3c4b8581028b0cbca34028c3f070,Namespace:kube-system,Attempt:0,} returns sandbox id \"f72689b3c310497c2143b972fb49755702078a3d28ca557dd6c4d636f015ad5d\"" Mar 14 00:16:54.029008 containerd[1609]: time="2026-03-14T00:16:54.028968980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-968d08e397,Uid:1f478f82dd90d2e9887486ed4acb60ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba0350828ab428cc01504b3385b2127c3708f5213886128cb9b2fd71cdd41c35\"" Mar 14 00:16:54.032325 containerd[1609]: time="2026-03-14T00:16:54.032121908Z" level=info msg="CreateContainer within sandbox \"f72689b3c310497c2143b972fb49755702078a3d28ca557dd6c4d636f015ad5d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:16:54.035677 containerd[1609]: time="2026-03-14T00:16:54.035432857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-968d08e397,Uid:7f00d05552da7733546736de8d1d7986,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd63024fde4cdd7c3377305161cd828f284b73d14e62d551b18dbf75b9be4e0a\"" Mar 14 00:16:54.039639 containerd[1609]: time="2026-03-14T00:16:54.039599710Z" level=info msg="CreateContainer within sandbox \"ba0350828ab428cc01504b3385b2127c3708f5213886128cb9b2fd71cdd41c35\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:16:54.042626 containerd[1609]: time="2026-03-14T00:16:54.042341681Z" level=info msg="CreateContainer within sandbox \"fd63024fde4cdd7c3377305161cd828f284b73d14e62d551b18dbf75b9be4e0a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:16:54.051206 containerd[1609]: time="2026-03-14T00:16:54.051050802Z" level=info msg="CreateContainer within sandbox 
\"f72689b3c310497c2143b972fb49755702078a3d28ca557dd6c4d636f015ad5d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2429e7afbe4578f10f76faa372303414e79c17a94c3a498e4ae9d46c9b26d684\"" Mar 14 00:16:54.052446 containerd[1609]: time="2026-03-14T00:16:54.052416878Z" level=info msg="StartContainer for \"2429e7afbe4578f10f76faa372303414e79c17a94c3a498e4ae9d46c9b26d684\"" Mar 14 00:16:54.054896 containerd[1609]: time="2026-03-14T00:16:54.054851346Z" level=info msg="CreateContainer within sandbox \"ba0350828ab428cc01504b3385b2127c3708f5213886128cb9b2fd71cdd41c35\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5eefefd9abbbef2091549bd9a44ccd0ba1d779452eb7b31f5d127fdb0e344c26\"" Mar 14 00:16:54.056616 containerd[1609]: time="2026-03-14T00:16:54.056474488Z" level=info msg="StartContainer for \"5eefefd9abbbef2091549bd9a44ccd0ba1d779452eb7b31f5d127fdb0e344c26\"" Mar 14 00:16:54.062264 containerd[1609]: time="2026-03-14T00:16:54.062184432Z" level=info msg="CreateContainer within sandbox \"fd63024fde4cdd7c3377305161cd828f284b73d14e62d551b18dbf75b9be4e0a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8efaa20eb1cee687c42898c220293e57c4a66f0b4410ba16f44d2c7cdd06ccc8\"" Mar 14 00:16:54.062903 containerd[1609]: time="2026-03-14T00:16:54.062870766Z" level=info msg="StartContainer for \"8efaa20eb1cee687c42898c220293e57c4a66f0b4410ba16f44d2c7cdd06ccc8\"" Mar 14 00:16:54.085445 kubelet[2364]: E0314 00:16:54.085389 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://204.168.138.0:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 204.168.138.0:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:16:54.149173 kubelet[2364]: E0314 00:16:54.149067 2364 reflector.go:200] "Failed to watch" err="failed to list 
*v1.RuntimeClass: Get \"https://204.168.138.0:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 204.168.138.0:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:16:54.189041 containerd[1609]: time="2026-03-14T00:16:54.186340104Z" level=info msg="StartContainer for \"5eefefd9abbbef2091549bd9a44ccd0ba1d779452eb7b31f5d127fdb0e344c26\" returns successfully" Mar 14 00:16:54.189041 containerd[1609]: time="2026-03-14T00:16:54.186690774Z" level=info msg="StartContainer for \"2429e7afbe4578f10f76faa372303414e79c17a94c3a498e4ae9d46c9b26d684\" returns successfully" Mar 14 00:16:54.222018 kubelet[2364]: E0314 00:16:54.219511 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://204.168.138.0:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 204.168.138.0:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:16:54.244448 containerd[1609]: time="2026-03-14T00:16:54.244394215Z" level=info msg="StartContainer for \"8efaa20eb1cee687c42898c220293e57c4a66f0b4410ba16f44d2c7cdd06ccc8\" returns successfully" Mar 14 00:16:54.468905 kubelet[2364]: I0314 00:16:54.468332 2364 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:54.954247 kubelet[2364]: E0314 00:16:54.954201 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-968d08e397\" not found" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:54.955179 kubelet[2364]: E0314 00:16:54.954559 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-968d08e397\" not found" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:54.961403 kubelet[2364]: E0314 00:16:54.961359 2364 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-968d08e397\" not found" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:55.698491 kubelet[2364]: E0314 00:16:55.698435 2364 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-968d08e397\" not found" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:55.832496 kubelet[2364]: I0314 00:16:55.832417 2364 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:55.832496 kubelet[2364]: E0314 00:16:55.832496 2364 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-968d08e397\": node \"ci-4081-3-6-n-968d08e397\" not found" Mar 14 00:16:55.891869 kubelet[2364]: I0314 00:16:55.891794 2364 apiserver.go:52] "Watching apiserver" Mar 14 00:16:55.896960 kubelet[2364]: I0314 00:16:55.896911 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-968d08e397" Mar 14 00:16:55.903021 kubelet[2364]: I0314 00:16:55.901701 2364 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:16:55.903686 kubelet[2364]: E0314 00:16:55.903485 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-968d08e397\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-968d08e397" Mar 14 00:16:55.903686 kubelet[2364]: I0314 00:16:55.903512 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" Mar 14 00:16:55.905709 kubelet[2364]: E0314 00:16:55.905183 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-968d08e397\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" Mar 14 00:16:55.905709 kubelet[2364]: I0314 00:16:55.905218 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:55.907196 kubelet[2364]: E0314 00:16:55.907164 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-968d08e397\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:55.962005 kubelet[2364]: I0314 00:16:55.959618 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" Mar 14 00:16:55.962005 kubelet[2364]: I0314 00:16:55.960722 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-968d08e397" Mar 14 00:16:55.962186 kubelet[2364]: E0314 00:16:55.962110 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-968d08e397\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" Mar 14 00:16:55.964723 kubelet[2364]: E0314 00:16:55.964687 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-968d08e397\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-968d08e397" Mar 14 00:16:57.623690 systemd[1]: Reloading requested from client PID 2651 ('systemctl') (unit session-7.scope)... Mar 14 00:16:57.623714 systemd[1]: Reloading... Mar 14 00:16:57.707229 zram_generator::config[2691]: No configuration found. Mar 14 00:16:57.825576 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 14 00:16:57.901268 systemd[1]: Reloading finished in 276 ms. Mar 14 00:16:57.947253 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:16:57.968852 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:16:57.969253 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:16:57.979967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:16:58.134442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:16:58.147715 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:16:58.192983 kubelet[2752]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:16:58.192983 kubelet[2752]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:16:58.192983 kubelet[2752]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 00:16:58.192983 kubelet[2752]: I0314 00:16:58.192938 2752 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:16:58.198705 kubelet[2752]: I0314 00:16:58.198674 2752 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 14 00:16:58.198705 kubelet[2752]: I0314 00:16:58.198694 2752 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:16:58.198882 kubelet[2752]: I0314 00:16:58.198861 2752 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:16:58.199844 kubelet[2752]: I0314 00:16:58.199826 2752 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:16:58.204617 kubelet[2752]: I0314 00:16:58.204274 2752 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:16:58.208102 kubelet[2752]: E0314 00:16:58.208077 2752 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:16:58.208256 kubelet[2752]: I0314 00:16:58.208246 2752 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 14 00:16:58.213733 kubelet[2752]: I0314 00:16:58.213713 2752 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 14 00:16:58.214918 kubelet[2752]: I0314 00:16:58.214433 2752 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:16:58.214918 kubelet[2752]: I0314 00:16:58.214458 2752 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-968d08e397","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 14 00:16:58.214918 kubelet[2752]: I0314 00:16:58.214586 2752 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 
00:16:58.214918 kubelet[2752]: I0314 00:16:58.214594 2752 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 00:16:58.214918 kubelet[2752]: I0314 00:16:58.214639 2752 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:16:58.215205 kubelet[2752]: I0314 00:16:58.215194 2752 kubelet.go:480] "Attempting to sync node with API server" Mar 14 00:16:58.215263 kubelet[2752]: I0314 00:16:58.215256 2752 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:16:58.215317 kubelet[2752]: I0314 00:16:58.215310 2752 kubelet.go:386] "Adding apiserver pod source" Mar 14 00:16:58.215360 kubelet[2752]: I0314 00:16:58.215353 2752 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:16:58.220171 kubelet[2752]: I0314 00:16:58.219765 2752 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:16:58.220869 kubelet[2752]: I0314 00:16:58.220356 2752 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:16:58.224175 kubelet[2752]: I0314 00:16:58.224070 2752 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:16:58.224175 kubelet[2752]: I0314 00:16:58.224102 2752 server.go:1289] "Started kubelet" Mar 14 00:16:58.227961 kubelet[2752]: I0314 00:16:58.227525 2752 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:16:58.228786 kubelet[2752]: I0314 00:16:58.228771 2752 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:16:58.230288 kubelet[2752]: I0314 00:16:58.228911 2752 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:16:58.230288 kubelet[2752]: I0314 00:16:58.228982 2752 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 
00:16:58.230288 kubelet[2752]: I0314 00:16:58.229700 2752 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:16:58.236833 kubelet[2752]: I0314 00:16:58.236793 2752 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 14 00:16:58.237688 kubelet[2752]: E0314 00:16:58.237668 2752 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:16:58.237924 kubelet[2752]: I0314 00:16:58.237907 2752 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:16:58.242787 kubelet[2752]: I0314 00:16:58.241419 2752 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 00:16:58.243729 kubelet[2752]: I0314 00:16:58.243713 2752 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:16:58.243948 kubelet[2752]: I0314 00:16:58.243939 2752 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:16:58.247273 kubelet[2752]: I0314 00:16:58.247249 2752 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:16:58.247363 kubelet[2752]: I0314 00:16:58.247343 2752 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:16:58.249363 kubelet[2752]: I0314 00:16:58.249068 2752 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:16:58.249363 kubelet[2752]: I0314 00:16:58.249091 2752 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:16:58.249363 kubelet[2752]: I0314 00:16:58.249111 2752 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:16:58.249363 kubelet[2752]: I0314 00:16:58.249118 2752 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:16:58.249363 kubelet[2752]: E0314 00:16:58.249178 2752 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:16:58.253806 kubelet[2752]: I0314 00:16:58.253788 2752 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:16:58.311434 kubelet[2752]: I0314 00:16:58.311410 2752 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:16:58.311969 kubelet[2752]: I0314 00:16:58.311601 2752 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:16:58.311969 kubelet[2752]: I0314 00:16:58.311620 2752 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:16:58.311969 kubelet[2752]: I0314 00:16:58.311800 2752 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 14 00:16:58.311969 kubelet[2752]: I0314 00:16:58.311808 2752 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 14 00:16:58.311969 kubelet[2752]: I0314 00:16:58.311824 2752 policy_none.go:49] "None policy: Start" Mar 14 00:16:58.311969 kubelet[2752]: I0314 00:16:58.311835 2752 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:16:58.311969 kubelet[2752]: I0314 00:16:58.311844 2752 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:16:58.311969 kubelet[2752]: I0314 00:16:58.311912 2752 state_mem.go:75] "Updated machine memory state" Mar 14 00:16:58.313613 kubelet[2752]: E0314 00:16:58.313596 2752 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:16:58.314602 kubelet[2752]: I0314 00:16:58.313836 2752 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:16:58.314602 kubelet[2752]: I0314 00:16:58.313848 2752 container_log_manager.go:189] "Initializing 
container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:16:58.315009 kubelet[2752]: I0314 00:16:58.314994 2752 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:16:58.315766 kubelet[2752]: E0314 00:16:58.315753 2752 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:16:58.350057 kubelet[2752]: I0314 00:16:58.349986 2752 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.350338 kubelet[2752]: I0314 00:16:58.350324 2752 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.350415 kubelet[2752]: I0314 00:16:58.350001 2752 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.425554 kubelet[2752]: I0314 00:16:58.425509 2752 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.434943 kubelet[2752]: I0314 00:16:58.434664 2752 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.434943 kubelet[2752]: I0314 00:16:58.434758 2752 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.446244 kubelet[2752]: I0314 00:16:58.444830 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21bb3c4b8581028b0cbca34028c3f070-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-968d08e397\" (UID: \"21bb3c4b8581028b0cbca34028c3f070\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.446244 kubelet[2752]: I0314 00:16:58.444867 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21bb3c4b8581028b0cbca34028c3f070-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-968d08e397\" (UID: \"21bb3c4b8581028b0cbca34028c3f070\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.446244 kubelet[2752]: I0314 00:16:58.444890 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f478f82dd90d2e9887486ed4acb60ec-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-968d08e397\" (UID: \"1f478f82dd90d2e9887486ed4acb60ec\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.446244 kubelet[2752]: I0314 00:16:58.444914 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f478f82dd90d2e9887486ed4acb60ec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-968d08e397\" (UID: \"1f478f82dd90d2e9887486ed4acb60ec\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.446244 kubelet[2752]: I0314 00:16:58.444928 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f00d05552da7733546736de8d1d7986-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-968d08e397\" (UID: \"7f00d05552da7733546736de8d1d7986\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.446561 kubelet[2752]: I0314 00:16:58.444944 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21bb3c4b8581028b0cbca34028c3f070-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-968d08e397\" (UID: \"21bb3c4b8581028b0cbca34028c3f070\") " 
pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.446561 kubelet[2752]: I0314 00:16:58.444957 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f478f82dd90d2e9887486ed4acb60ec-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-968d08e397\" (UID: \"1f478f82dd90d2e9887486ed4acb60ec\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.446561 kubelet[2752]: I0314 00:16:58.444996 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f478f82dd90d2e9887486ed4acb60ec-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-968d08e397\" (UID: \"1f478f82dd90d2e9887486ed4acb60ec\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.446561 kubelet[2752]: I0314 00:16:58.445013 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f478f82dd90d2e9887486ed4acb60ec-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-968d08e397\" (UID: \"1f478f82dd90d2e9887486ed4acb60ec\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:58.634664 sudo[2786]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 14 00:16:58.635339 sudo[2786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 14 00:16:59.095069 sudo[2786]: pam_unix(sudo:session): session closed for user root Mar 14 00:16:59.220042 kubelet[2752]: I0314 00:16:59.219937 2752 apiserver.go:52] "Watching apiserver" Mar 14 00:16:59.244564 kubelet[2752]: I0314 00:16:59.244478 2752 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:16:59.283099 kubelet[2752]: I0314 00:16:59.282267 
2752 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:59.286842 kubelet[2752]: I0314 00:16:59.286669 2752 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" Mar 14 00:16:59.294788 kubelet[2752]: E0314 00:16:59.294627 2752 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-968d08e397\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" Mar 14 00:16:59.294916 kubelet[2752]: E0314 00:16:59.294872 2752 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-968d08e397\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" Mar 14 00:16:59.358190 kubelet[2752]: I0314 00:16:59.358037 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" podStartSLOduration=1.358018748 podStartE2EDuration="1.358018748s" podCreationTimestamp="2026-03-14 00:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:59.339539673 +0000 UTC m=+1.183063371" watchObservedRunningTime="2026-03-14 00:16:59.358018748 +0000 UTC m=+1.201542446" Mar 14 00:16:59.366713 kubelet[2752]: I0314 00:16:59.366387 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-968d08e397" podStartSLOduration=1.366373936 podStartE2EDuration="1.366373936s" podCreationTimestamp="2026-03-14 00:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:59.359272017 +0000 UTC m=+1.202795715" watchObservedRunningTime="2026-03-14 00:16:59.366373936 +0000 UTC m=+1.209897624" Mar 14 00:16:59.374092 
kubelet[2752]: I0314 00:16:59.374043 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-968d08e397" podStartSLOduration=1.374027286 podStartE2EDuration="1.374027286s" podCreationTimestamp="2026-03-14 00:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:59.366595692 +0000 UTC m=+1.210119390" watchObservedRunningTime="2026-03-14 00:16:59.374027286 +0000 UTC m=+1.217550984" Mar 14 00:17:00.259954 sudo[1853]: pam_unix(sudo:session): session closed for user root Mar 14 00:17:00.380570 sshd[1849]: pam_unix(sshd:session): session closed for user core Mar 14 00:17:00.388638 systemd[1]: sshd@6-204.168.138.0:22-68.220.241.50:46614.service: Deactivated successfully. Mar 14 00:17:00.395088 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:17:00.396699 systemd-logind[1588]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:17:00.398255 systemd-logind[1588]: Removed session 7. Mar 14 00:17:04.158201 kubelet[2752]: I0314 00:17:04.158129 2752 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:17:04.158786 containerd[1609]: time="2026-03-14T00:17:04.158550651Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 14 00:17:04.159550 kubelet[2752]: I0314 00:17:04.159279 2752 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:17:05.289915 kubelet[2752]: I0314 00:17:05.289819 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ktx6\" (UniqueName: \"kubernetes.io/projected/9944adb1-d9cb-4433-a7f8-74b7530fb5f5-kube-api-access-2ktx6\") pod \"kube-proxy-md84d\" (UID: \"9944adb1-d9cb-4433-a7f8-74b7530fb5f5\") " pod="kube-system/kube-proxy-md84d" Mar 14 00:17:05.289915 kubelet[2752]: I0314 00:17:05.289867 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cilium-cgroup\") pod \"cilium-2vgsh\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.289915 kubelet[2752]: I0314 00:17:05.289889 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cilium-config-path\") pod \"cilium-2vgsh\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.289915 kubelet[2752]: I0314 00:17:05.289943 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9944adb1-d9cb-4433-a7f8-74b7530fb5f5-kube-proxy\") pod \"kube-proxy-md84d\" (UID: \"9944adb1-d9cb-4433-a7f8-74b7530fb5f5\") " pod="kube-system/kube-proxy-md84d" Mar 14 00:17:05.291311 kubelet[2752]: I0314 00:17:05.290518 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-bpf-maps\") pod \"cilium-2vgsh\" (UID: 
\"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.291311 kubelet[2752]: I0314 00:17:05.290535 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-hostproc\") pod \"cilium-2vgsh\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.291311 kubelet[2752]: I0314 00:17:05.290560 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cni-path\") pod \"cilium-2vgsh\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.291311 kubelet[2752]: I0314 00:17:05.290584 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-host-proc-sys-net\") pod \"cilium-2vgsh\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.291311 kubelet[2752]: I0314 00:17:05.290598 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk7dp\" (UniqueName: \"kubernetes.io/projected/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-kube-api-access-tk7dp\") pod \"cilium-2vgsh\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.291311 kubelet[2752]: I0314 00:17:05.290616 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9944adb1-d9cb-4433-a7f8-74b7530fb5f5-lib-modules\") pod \"kube-proxy-md84d\" (UID: \"9944adb1-d9cb-4433-a7f8-74b7530fb5f5\") " pod="kube-system/kube-proxy-md84d" Mar 14 00:17:05.291471 kubelet[2752]: I0314 
00:17:05.290630 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-etc-cni-netd\") pod \"cilium-2vgsh\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.291471 kubelet[2752]: I0314 00:17:05.290643 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-lib-modules\") pod \"cilium-2vgsh\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.291471 kubelet[2752]: I0314 00:17:05.290662 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-xtables-lock\") pod \"cilium-2vgsh\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.291471 kubelet[2752]: I0314 00:17:05.290684 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-clustermesh-secrets\") pod \"cilium-2vgsh\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.291471 kubelet[2752]: I0314 00:17:05.290695 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-host-proc-sys-kernel\") pod \"cilium-2vgsh\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.291471 kubelet[2752]: I0314 00:17:05.290708 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-hubble-tls\") pod \"cilium-2vgsh\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.291625 kubelet[2752]: I0314 00:17:05.290719 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cilium-run\") pod \"cilium-2vgsh\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") " pod="kube-system/cilium-2vgsh" Mar 14 00:17:05.291625 kubelet[2752]: I0314 00:17:05.290852 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9944adb1-d9cb-4433-a7f8-74b7530fb5f5-xtables-lock\") pod \"kube-proxy-md84d\" (UID: \"9944adb1-d9cb-4433-a7f8-74b7530fb5f5\") " pod="kube-system/kube-proxy-md84d" Mar 14 00:17:05.492748 kubelet[2752]: I0314 00:17:05.491824 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j4kw\" (UniqueName: \"kubernetes.io/projected/fa8a6828-d767-457b-86c0-fcf8be904672-kube-api-access-4j4kw\") pod \"cilium-operator-6c4d7847fc-jc42m\" (UID: \"fa8a6828-d767-457b-86c0-fcf8be904672\") " pod="kube-system/cilium-operator-6c4d7847fc-jc42m" Mar 14 00:17:05.492748 kubelet[2752]: I0314 00:17:05.491867 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa8a6828-d767-457b-86c0-fcf8be904672-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jc42m\" (UID: \"fa8a6828-d767-457b-86c0-fcf8be904672\") " pod="kube-system/cilium-operator-6c4d7847fc-jc42m" Mar 14 00:17:05.531531 containerd[1609]: time="2026-03-14T00:17:05.531477337Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-md84d,Uid:9944adb1-d9cb-4433-a7f8-74b7530fb5f5,Namespace:kube-system,Attempt:0,}" Mar 14 00:17:05.541629 containerd[1609]: time="2026-03-14T00:17:05.541091279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vgsh,Uid:40b1fc7f-6ce7-4e9b-8565-b5010ea67da5,Namespace:kube-system,Attempt:0,}" Mar 14 00:17:05.560418 containerd[1609]: time="2026-03-14T00:17:05.560223721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:17:05.560418 containerd[1609]: time="2026-03-14T00:17:05.560298321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:17:05.560418 containerd[1609]: time="2026-03-14T00:17:05.560334719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:05.561080 containerd[1609]: time="2026-03-14T00:17:05.560482297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:05.589777 containerd[1609]: time="2026-03-14T00:17:05.589528523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:17:05.589777 containerd[1609]: time="2026-03-14T00:17:05.589608572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:17:05.589777 containerd[1609]: time="2026-03-14T00:17:05.589623637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:05.589777 containerd[1609]: time="2026-03-14T00:17:05.589722145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:17:05.632575 containerd[1609]: time="2026-03-14T00:17:05.632394057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-md84d,Uid:9944adb1-d9cb-4433-a7f8-74b7530fb5f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfd09b748332549474c8602c1a14ad74fc8dafacd7dad918709382738fab38bd\""
Mar 14 00:17:05.641837 containerd[1609]: time="2026-03-14T00:17:05.641494153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vgsh,Uid:40b1fc7f-6ce7-4e9b-8565-b5010ea67da5,Namespace:kube-system,Attempt:0,} returns sandbox id \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\""
Mar 14 00:17:05.645735 containerd[1609]: time="2026-03-14T00:17:05.645706358Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 14 00:17:05.655541 containerd[1609]: time="2026-03-14T00:17:05.655507072Z" level=info msg="CreateContainer within sandbox \"bfd09b748332549474c8602c1a14ad74fc8dafacd7dad918709382738fab38bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 14 00:17:05.668639 containerd[1609]: time="2026-03-14T00:17:05.668506839Z" level=info msg="CreateContainer within sandbox \"bfd09b748332549474c8602c1a14ad74fc8dafacd7dad918709382738fab38bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"85f6a73fe5f2e7fef3e49286941b601e28d4efada77e9680003f9e9d12d97a8d\""
Mar 14 00:17:05.670178 containerd[1609]: time="2026-03-14T00:17:05.669326908Z" level=info msg="StartContainer for \"85f6a73fe5f2e7fef3e49286941b601e28d4efada77e9680003f9e9d12d97a8d\""
Mar 14 00:17:05.741923 containerd[1609]: time="2026-03-14T00:17:05.741856351Z" level=info msg="StartContainer for \"85f6a73fe5f2e7fef3e49286941b601e28d4efada77e9680003f9e9d12d97a8d\" returns successfully"
Mar 14 00:17:05.768939 containerd[1609]: time="2026-03-14T00:17:05.768688207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jc42m,Uid:fa8a6828-d767-457b-86c0-fcf8be904672,Namespace:kube-system,Attempt:0,}"
Mar 14 00:17:05.810655 containerd[1609]: time="2026-03-14T00:17:05.810457367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:17:05.811700 containerd[1609]: time="2026-03-14T00:17:05.811445457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:17:05.811700 containerd[1609]: time="2026-03-14T00:17:05.811517422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:17:05.812259 containerd[1609]: time="2026-03-14T00:17:05.811953494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:17:05.886203 containerd[1609]: time="2026-03-14T00:17:05.886013218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jc42m,Uid:fa8a6828-d767-457b-86c0-fcf8be904672,Namespace:kube-system,Attempt:0,} returns sandbox id \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\""
Mar 14 00:17:06.312523 kubelet[2752]: I0314 00:17:06.312380 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-md84d" podStartSLOduration=1.312356271 podStartE2EDuration="1.312356271s" podCreationTimestamp="2026-03-14 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:17:06.312059676 +0000 UTC m=+8.155583374" watchObservedRunningTime="2026-03-14 00:17:06.312356271 +0000 UTC m=+8.155879969"
Mar 14 00:17:09.244307 update_engine[1598]: I20260314 00:17:09.244217 1598 update_attempter.cc:509] Updating boot flags...
Mar 14 00:17:09.332110 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3136)
Mar 14 00:17:09.433263 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3138)
Mar 14 00:17:10.141923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3902060615.mount: Deactivated successfully.
Mar 14 00:17:11.628285 containerd[1609]: time="2026-03-14T00:17:11.628219227Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:17:11.629521 containerd[1609]: time="2026-03-14T00:17:11.629343644Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 14 00:17:11.630735 containerd[1609]: time="2026-03-14T00:17:11.630696230Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:17:11.633174 containerd[1609]: time="2026-03-14T00:17:11.632223082Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.986302951s"
Mar 14 00:17:11.633174 containerd[1609]: time="2026-03-14T00:17:11.632259240Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 14 00:17:11.634012 containerd[1609]: time="2026-03-14T00:17:11.633864165Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 14 00:17:11.636966 containerd[1609]: time="2026-03-14T00:17:11.636921215Z" level=info msg="CreateContainer within sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 14 00:17:11.652020 containerd[1609]: time="2026-03-14T00:17:11.651963591Z" level=info msg="CreateContainer within sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30\""
Mar 14 00:17:11.654637 containerd[1609]: time="2026-03-14T00:17:11.654131987Z" level=info msg="StartContainer for \"cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30\""
Mar 14 00:17:11.728872 containerd[1609]: time="2026-03-14T00:17:11.728812587Z" level=info msg="StartContainer for \"cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30\" returns successfully"
Mar 14 00:17:11.798359 containerd[1609]: time="2026-03-14T00:17:11.798270044Z" level=info msg="shim disconnected" id=cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30 namespace=k8s.io
Mar 14 00:17:11.798359 containerd[1609]: time="2026-03-14T00:17:11.798347827Z" level=warning msg="cleaning up after shim disconnected" id=cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30 namespace=k8s.io
Mar 14 00:17:11.798359 containerd[1609]: time="2026-03-14T00:17:11.798359436Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:11.814562 containerd[1609]: time="2026-03-14T00:17:11.814464801Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:17:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:17:12.324432 containerd[1609]: time="2026-03-14T00:17:12.324315178Z" level=info msg="CreateContainer within sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 14 00:17:12.342116 containerd[1609]: time="2026-03-14T00:17:12.341596868Z" level=info msg="CreateContainer within sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f\""
Mar 14 00:17:12.346658 containerd[1609]: time="2026-03-14T00:17:12.346614876Z" level=info msg="StartContainer for \"fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f\""
Mar 14 00:17:12.414693 containerd[1609]: time="2026-03-14T00:17:12.414654879Z" level=info msg="StartContainer for \"fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f\" returns successfully"
Mar 14 00:17:12.426492 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:17:12.427090 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:17:12.427189 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:17:12.438051 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:17:12.454332 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:17:12.463925 containerd[1609]: time="2026-03-14T00:17:12.463870331Z" level=info msg="shim disconnected" id=fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f namespace=k8s.io
Mar 14 00:17:12.463925 containerd[1609]: time="2026-03-14T00:17:12.463920250Z" level=warning msg="cleaning up after shim disconnected" id=fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f namespace=k8s.io
Mar 14 00:17:12.464163 containerd[1609]: time="2026-03-14T00:17:12.463928101Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:12.649561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30-rootfs.mount: Deactivated successfully.
Mar 14 00:17:13.215453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561791307.mount: Deactivated successfully.
Mar 14 00:17:13.328847 containerd[1609]: time="2026-03-14T00:17:13.328609819Z" level=info msg="CreateContainer within sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 14 00:17:13.363893 containerd[1609]: time="2026-03-14T00:17:13.363704540Z" level=info msg="CreateContainer within sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151\""
Mar 14 00:17:13.364925 containerd[1609]: time="2026-03-14T00:17:13.364891416Z" level=info msg="StartContainer for \"29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151\""
Mar 14 00:17:13.427920 containerd[1609]: time="2026-03-14T00:17:13.427716338Z" level=info msg="StartContainer for \"29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151\" returns successfully"
Mar 14 00:17:13.477487 containerd[1609]: time="2026-03-14T00:17:13.477219448Z" level=info msg="shim disconnected" id=29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151 namespace=k8s.io
Mar 14 00:17:13.477487 containerd[1609]: time="2026-03-14T00:17:13.477268365Z" level=warning msg="cleaning up after shim disconnected" id=29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151 namespace=k8s.io
Mar 14 00:17:13.477487 containerd[1609]: time="2026-03-14T00:17:13.477276138Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:13.500167 containerd[1609]: time="2026-03-14T00:17:13.499639027Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:17:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:17:13.725390 containerd[1609]: time="2026-03-14T00:17:13.725307333Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:17:13.726272 containerd[1609]: time="2026-03-14T00:17:13.726241362Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 14 00:17:13.727081 containerd[1609]: time="2026-03-14T00:17:13.727048931Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:17:13.728260 containerd[1609]: time="2026-03-14T00:17:13.728133547Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.094245384s"
Mar 14 00:17:13.728260 containerd[1609]: time="2026-03-14T00:17:13.728174211Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 14 00:17:13.732092 containerd[1609]: time="2026-03-14T00:17:13.731995430Z" level=info msg="CreateContainer within sandbox \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 14 00:17:13.739911 containerd[1609]: time="2026-03-14T00:17:13.739883756Z" level=info msg="CreateContainer within sandbox \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\""
Mar 14 00:17:13.741709 containerd[1609]: time="2026-03-14T00:17:13.740726280Z" level=info msg="StartContainer for \"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\""
Mar 14 00:17:13.795623 containerd[1609]: time="2026-03-14T00:17:13.795558843Z" level=info msg="StartContainer for \"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\" returns successfully"
Mar 14 00:17:14.338561 containerd[1609]: time="2026-03-14T00:17:14.338507303Z" level=info msg="CreateContainer within sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:17:14.360561 kubelet[2752]: I0314 00:17:14.360482 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jc42m" podStartSLOduration=1.519598518 podStartE2EDuration="9.360464862s" podCreationTimestamp="2026-03-14 00:17:05 +0000 UTC" firstStartedPulling="2026-03-14 00:17:05.888063916 +0000 UTC m=+7.731587614" lastFinishedPulling="2026-03-14 00:17:13.72893026 +0000 UTC m=+15.572453958" observedRunningTime="2026-03-14 00:17:14.356061562 +0000 UTC m=+16.199585290" watchObservedRunningTime="2026-03-14 00:17:14.360464862 +0000 UTC m=+16.203988580"
Mar 14 00:17:14.366548 containerd[1609]: time="2026-03-14T00:17:14.366376110Z" level=info msg="CreateContainer within sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80\""
Mar 14 00:17:14.367458 containerd[1609]: time="2026-03-14T00:17:14.367414148Z" level=info msg="StartContainer for \"b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80\""
Mar 14 00:17:14.513166 containerd[1609]: time="2026-03-14T00:17:14.511704970Z" level=info msg="StartContainer for \"b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80\" returns successfully"
Mar 14 00:17:14.564313 containerd[1609]: time="2026-03-14T00:17:14.564236289Z" level=info msg="shim disconnected" id=b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80 namespace=k8s.io
Mar 14 00:17:14.564640 containerd[1609]: time="2026-03-14T00:17:14.564568200Z" level=warning msg="cleaning up after shim disconnected" id=b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80 namespace=k8s.io
Mar 14 00:17:14.564640 containerd[1609]: time="2026-03-14T00:17:14.564580990Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:15.347543 containerd[1609]: time="2026-03-14T00:17:15.347491032Z" level=info msg="CreateContainer within sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:17:15.369353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3576820089.mount: Deactivated successfully.
Mar 14 00:17:15.371593 containerd[1609]: time="2026-03-14T00:17:15.371416977Z" level=info msg="CreateContainer within sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\""
Mar 14 00:17:15.374760 containerd[1609]: time="2026-03-14T00:17:15.374686698Z" level=info msg="StartContainer for \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\""
Mar 14 00:17:15.440122 containerd[1609]: time="2026-03-14T00:17:15.440062612Z" level=info msg="StartContainer for \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\" returns successfully"
Mar 14 00:17:15.574272 kubelet[2752]: I0314 00:17:15.573793 2752 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 14 00:17:15.648365 systemd[1]: run-containerd-runc-k8s.io-21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465-runc.gDIsG5.mount: Deactivated successfully.
Mar 14 00:17:15.668831 kubelet[2752]: I0314 00:17:15.668765 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ed978f1-1f9f-4125-ad92-8ba8bd69a012-config-volume\") pod \"coredns-674b8bbfcf-4jcp5\" (UID: \"8ed978f1-1f9f-4125-ad92-8ba8bd69a012\") " pod="kube-system/coredns-674b8bbfcf-4jcp5"
Mar 14 00:17:15.668831 kubelet[2752]: I0314 00:17:15.668823 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7243cb5-8130-484c-a000-7edd494c67f9-config-volume\") pod \"coredns-674b8bbfcf-tqrgb\" (UID: \"a7243cb5-8130-484c-a000-7edd494c67f9\") " pod="kube-system/coredns-674b8bbfcf-tqrgb"
Mar 14 00:17:15.669050 kubelet[2752]: I0314 00:17:15.668847 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5djn\" (UniqueName: \"kubernetes.io/projected/a7243cb5-8130-484c-a000-7edd494c67f9-kube-api-access-x5djn\") pod \"coredns-674b8bbfcf-tqrgb\" (UID: \"a7243cb5-8130-484c-a000-7edd494c67f9\") " pod="kube-system/coredns-674b8bbfcf-tqrgb"
Mar 14 00:17:15.669050 kubelet[2752]: I0314 00:17:15.668871 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mb25\" (UniqueName: \"kubernetes.io/projected/8ed978f1-1f9f-4125-ad92-8ba8bd69a012-kube-api-access-9mb25\") pod \"coredns-674b8bbfcf-4jcp5\" (UID: \"8ed978f1-1f9f-4125-ad92-8ba8bd69a012\") " pod="kube-system/coredns-674b8bbfcf-4jcp5"
Mar 14 00:17:15.916714 containerd[1609]: time="2026-03-14T00:17:15.916622131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tqrgb,Uid:a7243cb5-8130-484c-a000-7edd494c67f9,Namespace:kube-system,Attempt:0,}"
Mar 14 00:17:15.920255 containerd[1609]: time="2026-03-14T00:17:15.919789592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4jcp5,Uid:8ed978f1-1f9f-4125-ad92-8ba8bd69a012,Namespace:kube-system,Attempt:0,}"
Mar 14 00:17:17.681854 systemd-networkd[1258]: cilium_host: Link UP
Mar 14 00:17:17.682055 systemd-networkd[1258]: cilium_net: Link UP
Mar 14 00:17:17.685965 systemd-networkd[1258]: cilium_net: Gained carrier
Mar 14 00:17:17.686418 systemd-networkd[1258]: cilium_host: Gained carrier
Mar 14 00:17:17.798206 systemd-networkd[1258]: cilium_vxlan: Link UP
Mar 14 00:17:17.798378 systemd-networkd[1258]: cilium_vxlan: Gained carrier
Mar 14 00:17:18.005177 kernel: NET: Registered PF_ALG protocol family
Mar 14 00:17:18.357608 systemd-networkd[1258]: cilium_net: Gained IPv6LL
Mar 14 00:17:18.422301 systemd-networkd[1258]: cilium_host: Gained IPv6LL
Mar 14 00:17:18.619762 systemd-networkd[1258]: lxc_health: Link UP
Mar 14 00:17:18.624799 systemd-networkd[1258]: lxc_health: Gained carrier
Mar 14 00:17:18.869288 systemd-networkd[1258]: cilium_vxlan: Gained IPv6LL
Mar 14 00:17:18.970215 systemd-networkd[1258]: lxc8b352238e5f5: Link UP
Mar 14 00:17:18.981490 kernel: eth0: renamed from tmpf07f2
Mar 14 00:17:18.988310 systemd-networkd[1258]: lxc8b352238e5f5: Gained carrier
Mar 14 00:17:18.997734 systemd-networkd[1258]: lxc60fd8416da10: Link UP
Mar 14 00:17:19.004246 kernel: eth0: renamed from tmpb4cae
Mar 14 00:17:19.016614 systemd-networkd[1258]: lxc60fd8416da10: Gained carrier
Mar 14 00:17:19.560785 kubelet[2752]: I0314 00:17:19.560716 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2vgsh" podStartSLOduration=8.570837535999999 podStartE2EDuration="14.560212612s" podCreationTimestamp="2026-03-14 00:17:05 +0000 UTC" firstStartedPulling="2026-03-14 00:17:05.644096509 +0000 UTC m=+7.487620207" lastFinishedPulling="2026-03-14 00:17:11.633471585 +0000 UTC m=+13.476995283" observedRunningTime="2026-03-14 00:17:16.363844117 +0000 UTC m=+18.207367855" watchObservedRunningTime="2026-03-14 00:17:19.560212612 +0000 UTC m=+21.403736311"
Mar 14 00:17:20.022065 systemd-networkd[1258]: lxc_health: Gained IPv6LL
Mar 14 00:17:20.725584 systemd-networkd[1258]: lxc60fd8416da10: Gained IPv6LL
Mar 14 00:17:21.047568 systemd-networkd[1258]: lxc8b352238e5f5: Gained IPv6LL
Mar 14 00:17:21.842185 containerd[1609]: time="2026-03-14T00:17:21.841943900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:17:21.842185 containerd[1609]: time="2026-03-14T00:17:21.841992685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:17:21.842185 containerd[1609]: time="2026-03-14T00:17:21.842003152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:17:21.842185 containerd[1609]: time="2026-03-14T00:17:21.842095816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:17:21.916169 containerd[1609]: time="2026-03-14T00:17:21.913096481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:17:21.916169 containerd[1609]: time="2026-03-14T00:17:21.914472345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:17:21.916169 containerd[1609]: time="2026-03-14T00:17:21.914485766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:17:21.916169 containerd[1609]: time="2026-03-14T00:17:21.914569616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:17:21.924207 containerd[1609]: time="2026-03-14T00:17:21.921978199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tqrgb,Uid:a7243cb5-8130-484c-a000-7edd494c67f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4cae07d61ea9d512fc0a51a2c936380a1cc634432663d6e75631d5caea9a925\""
Mar 14 00:17:21.928971 containerd[1609]: time="2026-03-14T00:17:21.928910695Z" level=info msg="CreateContainer within sandbox \"b4cae07d61ea9d512fc0a51a2c936380a1cc634432663d6e75631d5caea9a925\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:17:21.948969 containerd[1609]: time="2026-03-14T00:17:21.947319813Z" level=info msg="CreateContainer within sandbox \"b4cae07d61ea9d512fc0a51a2c936380a1cc634432663d6e75631d5caea9a925\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"467829004b621679ad8acb21a3b71d6117c3d89222d3f5eb3771b89c63b32fbf\""
Mar 14 00:17:21.949351 containerd[1609]: time="2026-03-14T00:17:21.949310370Z" level=info msg="StartContainer for \"467829004b621679ad8acb21a3b71d6117c3d89222d3f5eb3771b89c63b32fbf\""
Mar 14 00:17:22.018860 containerd[1609]: time="2026-03-14T00:17:22.018821551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4jcp5,Uid:8ed978f1-1f9f-4125-ad92-8ba8bd69a012,Namespace:kube-system,Attempt:0,} returns sandbox id \"f07f2f78d505cbabc908d20e83b215711e0620bece7ba37fe78e16ab747c88c7\""
Mar 14 00:17:22.025188 containerd[1609]: time="2026-03-14T00:17:22.024955695Z" level=info msg="CreateContainer within sandbox \"f07f2f78d505cbabc908d20e83b215711e0620bece7ba37fe78e16ab747c88c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:17:22.038520 containerd[1609]: time="2026-03-14T00:17:22.038419205Z" level=info msg="StartContainer for \"467829004b621679ad8acb21a3b71d6117c3d89222d3f5eb3771b89c63b32fbf\" returns successfully"
Mar 14 00:17:22.046738 containerd[1609]: time="2026-03-14T00:17:22.046561750Z" level=info msg="CreateContainer within sandbox \"f07f2f78d505cbabc908d20e83b215711e0620bece7ba37fe78e16ab747c88c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f931665741bd9f9b77747b2ffa2d8a13b666fe4c4f99e826a1e6bd5be67ad32\""
Mar 14 00:17:22.048692 containerd[1609]: time="2026-03-14T00:17:22.048669866Z" level=info msg="StartContainer for \"4f931665741bd9f9b77747b2ffa2d8a13b666fe4c4f99e826a1e6bd5be67ad32\""
Mar 14 00:17:22.114936 containerd[1609]: time="2026-03-14T00:17:22.114739692Z" level=info msg="StartContainer for \"4f931665741bd9f9b77747b2ffa2d8a13b666fe4c4f99e826a1e6bd5be67ad32\" returns successfully"
Mar 14 00:17:22.380250 kubelet[2752]: I0314 00:17:22.378772 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tqrgb" podStartSLOduration=17.378749488 podStartE2EDuration="17.378749488s" podCreationTimestamp="2026-03-14 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:17:22.376253882 +0000 UTC m=+24.219777640" watchObservedRunningTime="2026-03-14 00:17:22.378749488 +0000 UTC m=+24.222273216"
Mar 14 00:17:22.417940 kubelet[2752]: I0314 00:17:22.417372 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4jcp5" podStartSLOduration=17.417355073 podStartE2EDuration="17.417355073s" podCreationTimestamp="2026-03-14 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:17:22.400276642 +0000 UTC m=+24.243800370" watchObservedRunningTime="2026-03-14 00:17:22.417355073 +0000 UTC m=+24.260878771"
Mar 14 00:17:30.073809 kubelet[2752]: I0314 00:17:30.072870 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:20:17.961010 systemd[1]: Started sshd@7-204.168.138.0:22-68.220.241.50:49906.service - OpenSSH per-connection server daemon (68.220.241.50:49906).
Mar 14 00:20:18.720794 sshd[4161]: Accepted publickey for core from 68.220.241.50 port 49906 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:20:18.721517 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:18.727096 systemd-logind[1588]: New session 8 of user core.
Mar 14 00:20:18.733524 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 00:20:19.315245 sshd[4161]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:19.321387 systemd[1]: sshd@7-204.168.138.0:22-68.220.241.50:49906.service: Deactivated successfully.
Mar 14 00:20:19.331520 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 00:20:19.336427 systemd-logind[1588]: Session 8 logged out. Waiting for processes to exit.
Mar 14 00:20:19.338720 systemd-logind[1588]: Removed session 8.
Mar 14 00:20:24.442374 systemd[1]: Started sshd@8-204.168.138.0:22-68.220.241.50:56540.service - OpenSSH per-connection server daemon (68.220.241.50:56540).
Mar 14 00:20:25.193540 sshd[4176]: Accepted publickey for core from 68.220.241.50 port 56540 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:20:25.196507 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:25.202458 systemd-logind[1588]: New session 9 of user core.
Mar 14 00:20:25.209844 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:20:25.767903 sshd[4176]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:25.771330 systemd[1]: sshd@8-204.168.138.0:22-68.220.241.50:56540.service: Deactivated successfully.
Mar 14 00:20:25.775960 systemd-logind[1588]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:20:25.776971 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:20:25.778666 systemd-logind[1588]: Removed session 9.
Mar 14 00:20:30.894539 systemd[1]: Started sshd@9-204.168.138.0:22-68.220.241.50:56552.service - OpenSSH per-connection server daemon (68.220.241.50:56552).
Mar 14 00:20:31.633958 sshd[4191]: Accepted publickey for core from 68.220.241.50 port 56552 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:20:31.636796 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:31.645713 systemd-logind[1588]: New session 10 of user core.
Mar 14 00:20:31.653976 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:20:32.238233 sshd[4191]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:32.245570 systemd-logind[1588]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:20:32.246837 systemd[1]: sshd@9-204.168.138.0:22-68.220.241.50:56552.service: Deactivated successfully.
Mar 14 00:20:32.254457 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:20:32.256499 systemd-logind[1588]: Removed session 10.
Mar 14 00:20:32.369016 systemd[1]: Started sshd@10-204.168.138.0:22-68.220.241.50:37260.service - OpenSSH per-connection server daemon (68.220.241.50:37260).
Mar 14 00:20:33.115281 sshd[4206]: Accepted publickey for core from 68.220.241.50 port 37260 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:20:33.117129 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:33.122207 systemd-logind[1588]: New session 11 of user core.
Mar 14 00:20:33.126473 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:20:33.746822 sshd[4206]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:33.750504 systemd[1]: sshd@10-204.168.138.0:22-68.220.241.50:37260.service: Deactivated successfully.
Mar 14 00:20:33.754837 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:20:33.756502 systemd-logind[1588]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:20:33.757319 systemd-logind[1588]: Removed session 11.
Mar 14 00:20:33.871382 systemd[1]: Started sshd@11-204.168.138.0:22-68.220.241.50:37262.service - OpenSSH per-connection server daemon (68.220.241.50:37262).
Mar 14 00:20:34.604506 sshd[4218]: Accepted publickey for core from 68.220.241.50 port 37262 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:20:34.605727 sshd[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:34.611616 systemd-logind[1588]: New session 12 of user core.
Mar 14 00:20:34.626780 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:20:35.210931 sshd[4218]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:35.217481 systemd[1]: sshd@11-204.168.138.0:22-68.220.241.50:37262.service: Deactivated successfully.
Mar 14 00:20:35.226093 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:20:35.228661 systemd-logind[1588]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:20:35.230819 systemd-logind[1588]: Removed session 12.
Mar 14 00:20:40.339211 systemd[1]: Started sshd@12-204.168.138.0:22-68.220.241.50:37266.service - OpenSSH per-connection server daemon (68.220.241.50:37266).
Mar 14 00:20:41.102908 sshd[4234]: Accepted publickey for core from 68.220.241.50 port 37266 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:20:41.103592 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:41.108705 systemd-logind[1588]: New session 13 of user core.
Mar 14 00:20:41.113720 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:20:41.694343 sshd[4234]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:41.698068 systemd[1]: sshd@12-204.168.138.0:22-68.220.241.50:37266.service: Deactivated successfully.
Mar 14 00:20:41.703555 systemd[1]: session-13.scope: Deactivated successfully. Mar 14 00:20:41.705022 systemd-logind[1588]: Session 13 logged out. Waiting for processes to exit. Mar 14 00:20:41.706564 systemd-logind[1588]: Removed session 13. Mar 14 00:20:41.827053 systemd[1]: Started sshd@13-204.168.138.0:22-68.220.241.50:37276.service - OpenSSH per-connection server daemon (68.220.241.50:37276). Mar 14 00:20:42.561298 sshd[4248]: Accepted publickey for core from 68.220.241.50 port 37276 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:20:42.564070 sshd[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:20:42.570906 systemd-logind[1588]: New session 14 of user core. Mar 14 00:20:42.575524 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 14 00:20:43.210767 sshd[4248]: pam_unix(sshd:session): session closed for user core Mar 14 00:20:43.216088 systemd[1]: sshd@13-204.168.138.0:22-68.220.241.50:37276.service: Deactivated successfully. Mar 14 00:20:43.223415 systemd-logind[1588]: Session 14 logged out. Waiting for processes to exit. Mar 14 00:20:43.224809 systemd[1]: session-14.scope: Deactivated successfully. Mar 14 00:20:43.226376 systemd-logind[1588]: Removed session 14. Mar 14 00:20:43.338560 systemd[1]: Started sshd@14-204.168.138.0:22-68.220.241.50:46796.service - OpenSSH per-connection server daemon (68.220.241.50:46796). Mar 14 00:20:44.085424 sshd[4260]: Accepted publickey for core from 68.220.241.50 port 46796 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:20:44.087389 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:20:44.096019 systemd-logind[1588]: New session 15 of user core. Mar 14 00:20:44.101649 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 14 00:20:45.281097 sshd[4260]: pam_unix(sshd:session): session closed for user core Mar 14 00:20:45.288204 systemd[1]: sshd@14-204.168.138.0:22-68.220.241.50:46796.service: Deactivated successfully. Mar 14 00:20:45.296366 systemd[1]: session-15.scope: Deactivated successfully. Mar 14 00:20:45.299736 systemd-logind[1588]: Session 15 logged out. Waiting for processes to exit. Mar 14 00:20:45.302216 systemd-logind[1588]: Removed session 15. Mar 14 00:20:45.414234 systemd[1]: Started sshd@15-204.168.138.0:22-68.220.241.50:46802.service - OpenSSH per-connection server daemon (68.220.241.50:46802). Mar 14 00:20:46.157202 sshd[4279]: Accepted publickey for core from 68.220.241.50 port 46802 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:20:46.160513 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:20:46.168387 systemd-logind[1588]: New session 16 of user core. Mar 14 00:20:46.174715 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 14 00:20:46.849650 sshd[4279]: pam_unix(sshd:session): session closed for user core Mar 14 00:20:46.857620 systemd-logind[1588]: Session 16 logged out. Waiting for processes to exit. Mar 14 00:20:46.860714 systemd[1]: sshd@15-204.168.138.0:22-68.220.241.50:46802.service: Deactivated successfully. Mar 14 00:20:46.865131 systemd[1]: session-16.scope: Deactivated successfully. Mar 14 00:20:46.866471 systemd-logind[1588]: Removed session 16. Mar 14 00:20:46.976218 systemd[1]: Started sshd@16-204.168.138.0:22-68.220.241.50:46816.service - OpenSSH per-connection server daemon (68.220.241.50:46816). Mar 14 00:20:47.717867 sshd[4291]: Accepted publickey for core from 68.220.241.50 port 46816 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:20:47.721189 sshd[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:20:47.729164 systemd-logind[1588]: New session 17 of user core. 
Mar 14 00:20:47.738875 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 14 00:20:48.285746 sshd[4291]: pam_unix(sshd:session): session closed for user core Mar 14 00:20:48.290591 systemd[1]: sshd@16-204.168.138.0:22-68.220.241.50:46816.service: Deactivated successfully. Mar 14 00:20:48.295927 systemd[1]: session-17.scope: Deactivated successfully. Mar 14 00:20:48.296056 systemd-logind[1588]: Session 17 logged out. Waiting for processes to exit. Mar 14 00:20:48.299021 systemd-logind[1588]: Removed session 17. Mar 14 00:20:53.415385 systemd[1]: Started sshd@17-204.168.138.0:22-68.220.241.50:48012.service - OpenSSH per-connection server daemon (68.220.241.50:48012). Mar 14 00:20:54.178207 sshd[4307]: Accepted publickey for core from 68.220.241.50 port 48012 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:20:54.180336 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:20:54.191030 systemd-logind[1588]: New session 18 of user core. Mar 14 00:20:54.197687 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 14 00:20:54.755196 sshd[4307]: pam_unix(sshd:session): session closed for user core Mar 14 00:20:54.759869 systemd[1]: sshd@17-204.168.138.0:22-68.220.241.50:48012.service: Deactivated successfully. Mar 14 00:20:54.768368 systemd[1]: session-18.scope: Deactivated successfully. Mar 14 00:20:54.771110 systemd-logind[1588]: Session 18 logged out. Waiting for processes to exit. Mar 14 00:20:54.773346 systemd-logind[1588]: Removed session 18. Mar 14 00:20:59.880564 systemd[1]: Started sshd@18-204.168.138.0:22-68.220.241.50:48024.service - OpenSSH per-connection server daemon (68.220.241.50:48024). 
Mar 14 00:21:00.634089 sshd[4323]: Accepted publickey for core from 68.220.241.50 port 48024 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:21:00.637081 sshd[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:21:00.643845 systemd-logind[1588]: New session 19 of user core. Mar 14 00:21:00.646549 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 14 00:21:01.230046 sshd[4323]: pam_unix(sshd:session): session closed for user core Mar 14 00:21:01.235048 systemd[1]: sshd@18-204.168.138.0:22-68.220.241.50:48024.service: Deactivated successfully. Mar 14 00:21:01.240872 systemd[1]: session-19.scope: Deactivated successfully. Mar 14 00:21:01.241431 systemd-logind[1588]: Session 19 logged out. Waiting for processes to exit. Mar 14 00:21:01.244018 systemd-logind[1588]: Removed session 19. Mar 14 00:21:01.356097 systemd[1]: Started sshd@19-204.168.138.0:22-68.220.241.50:48034.service - OpenSSH per-connection server daemon (68.220.241.50:48034). Mar 14 00:21:02.097838 sshd[4337]: Accepted publickey for core from 68.220.241.50 port 48034 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:21:02.099095 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:21:02.106201 systemd-logind[1588]: New session 20 of user core. Mar 14 00:21:02.111613 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 14 00:21:03.819472 containerd[1609]: time="2026-03-14T00:21:03.819421459Z" level=info msg="StopContainer for \"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\" with timeout 30 (s)"
Mar 14 00:21:03.822392 containerd[1609]: time="2026-03-14T00:21:03.822355360Z" level=info msg="Stop container \"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\" with signal terminated"
Mar 14 00:21:03.838514 systemd[1]: run-containerd-runc-k8s.io-21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465-runc.lK5QJk.mount: Deactivated successfully.
Mar 14 00:21:03.853412 containerd[1609]: time="2026-03-14T00:21:03.853028520Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:21:03.861488 containerd[1609]: time="2026-03-14T00:21:03.861364161Z" level=info msg="StopContainer for \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\" with timeout 2 (s)"
Mar 14 00:21:03.862117 containerd[1609]: time="2026-03-14T00:21:03.861871535Z" level=info msg="Stop container \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\" with signal terminated"
Mar 14 00:21:03.869903 systemd-networkd[1258]: lxc_health: Link DOWN
Mar 14 00:21:03.869913 systemd-networkd[1258]: lxc_health: Lost carrier
Mar 14 00:21:03.880615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f-rootfs.mount: Deactivated successfully.
Mar 14 00:21:03.907298 containerd[1609]: time="2026-03-14T00:21:03.907200178Z" level=info msg="shim disconnected" id=505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f namespace=k8s.io
Mar 14 00:21:03.907298 containerd[1609]: time="2026-03-14T00:21:03.907294891Z" level=warning msg="cleaning up after shim disconnected" id=505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f namespace=k8s.io
Mar 14 00:21:03.907298 containerd[1609]: time="2026-03-14T00:21:03.907305447Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:21:03.925023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465-rootfs.mount: Deactivated successfully.
Mar 14 00:21:03.930033 containerd[1609]: time="2026-03-14T00:21:03.929941757Z" level=info msg="shim disconnected" id=21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465 namespace=k8s.io
Mar 14 00:21:03.930033 containerd[1609]: time="2026-03-14T00:21:03.929993715Z" level=warning msg="cleaning up after shim disconnected" id=21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465 namespace=k8s.io
Mar 14 00:21:03.930033 containerd[1609]: time="2026-03-14T00:21:03.930001577Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:21:03.931346 containerd[1609]: time="2026-03-14T00:21:03.931222276Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:21:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:21:03.934485 containerd[1609]: time="2026-03-14T00:21:03.934445433Z" level=info msg="StopContainer for \"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\" returns successfully"
Mar 14 00:21:03.936243 containerd[1609]: time="2026-03-14T00:21:03.936086085Z" level=info msg="StopPodSandbox for \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\""
Mar 14 00:21:03.936243 containerd[1609]: time="2026-03-14T00:21:03.936170092Z" level=info msg="Container to stop \"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:21:03.940863 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31-shm.mount: Deactivated successfully.
Mar 14 00:21:03.959798 containerd[1609]: time="2026-03-14T00:21:03.959414307Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:21:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:21:03.963595 containerd[1609]: time="2026-03-14T00:21:03.963192751Z" level=info msg="StopContainer for \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\" returns successfully"
Mar 14 00:21:03.964301 containerd[1609]: time="2026-03-14T00:21:03.964098687Z" level=info msg="StopPodSandbox for \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\""
Mar 14 00:21:03.964301 containerd[1609]: time="2026-03-14T00:21:03.964253550Z" level=info msg="Container to stop \"fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:21:03.964301 containerd[1609]: time="2026-03-14T00:21:03.964287421Z" level=info msg="Container to stop \"b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:21:03.964301 containerd[1609]: time="2026-03-14T00:21:03.964296975Z" level=info msg="Container to stop \"cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:21:03.964960 containerd[1609]: time="2026-03-14T00:21:03.964464147Z" level=info msg="Container to stop \"29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:21:03.964960 containerd[1609]: time="2026-03-14T00:21:03.964480201Z" level=info msg="Container to stop \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:21:04.003913 containerd[1609]: time="2026-03-14T00:21:04.003767892Z" level=info msg="shim disconnected" id=edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31 namespace=k8s.io
Mar 14 00:21:04.003913 containerd[1609]: time="2026-03-14T00:21:04.003866932Z" level=warning msg="cleaning up after shim disconnected" id=edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31 namespace=k8s.io
Mar 14 00:21:04.003913 containerd[1609]: time="2026-03-14T00:21:04.003881433Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:21:04.005163 containerd[1609]: time="2026-03-14T00:21:04.004752686Z" level=info msg="shim disconnected" id=372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e namespace=k8s.io
Mar 14 00:21:04.005163 containerd[1609]: time="2026-03-14T00:21:04.004813629Z" level=warning msg="cleaning up after shim disconnected" id=372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e namespace=k8s.io
Mar 14 00:21:04.005163 containerd[1609]: time="2026-03-14T00:21:04.004822702Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:21:04.025709 containerd[1609]: time="2026-03-14T00:21:04.025671541Z" level=info msg="TearDown network for sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" successfully"
Mar 14 00:21:04.025883 containerd[1609]: time="2026-03-14T00:21:04.025858563Z" level=info msg="StopPodSandbox for \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" returns successfully"
Mar 14 00:21:04.028221 containerd[1609]: time="2026-03-14T00:21:04.028199961Z" level=info msg="TearDown network for sandbox \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\" successfully"
Mar 14 00:21:04.028221 containerd[1609]: time="2026-03-14T00:21:04.028220231Z" level=info msg="StopPodSandbox for \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\" returns successfully"
Mar 14 00:21:04.110924 kubelet[2752]: I0314 00:21:04.110850 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-xtables-lock\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.110924 kubelet[2752]: I0314 00:21:04.110904 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cilium-run\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.110924 kubelet[2752]: I0314 00:21:04.110917 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-bpf-maps\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.110924 kubelet[2752]: I0314 00:21:04.110936 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-clustermesh-secrets\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.111665 kubelet[2752]: I0314 00:21:04.110951 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk7dp\" (UniqueName: \"kubernetes.io/projected/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-kube-api-access-tk7dp\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.111665 kubelet[2752]: I0314 00:21:04.110963 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-lib-modules\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.111665 kubelet[2752]: I0314 00:21:04.110973 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cni-path\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.111665 kubelet[2752]: I0314 00:21:04.110986 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-hubble-tls\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.111665 kubelet[2752]: I0314 00:21:04.111001 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-hostproc\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.111665 kubelet[2752]: I0314 00:21:04.111013 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cilium-cgroup\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.111839 kubelet[2752]: I0314 00:21:04.111025 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cilium-config-path\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.111839 kubelet[2752]: I0314 00:21:04.111038 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j4kw\" (UniqueName: \"kubernetes.io/projected/fa8a6828-d767-457b-86c0-fcf8be904672-kube-api-access-4j4kw\") pod \"fa8a6828-d767-457b-86c0-fcf8be904672\" (UID: \"fa8a6828-d767-457b-86c0-fcf8be904672\") "
Mar 14 00:21:04.111839 kubelet[2752]: I0314 00:21:04.111053 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-host-proc-sys-kernel\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.111839 kubelet[2752]: I0314 00:21:04.111078 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa8a6828-d767-457b-86c0-fcf8be904672-cilium-config-path\") pod \"fa8a6828-d767-457b-86c0-fcf8be904672\" (UID: \"fa8a6828-d767-457b-86c0-fcf8be904672\") "
Mar 14 00:21:04.111839 kubelet[2752]: I0314 00:21:04.111109 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-etc-cni-netd\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.111839 kubelet[2752]: I0314 00:21:04.111128 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-host-proc-sys-net\") pod \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\" (UID: \"40b1fc7f-6ce7-4e9b-8565-b5010ea67da5\") "
Mar 14 00:21:04.112002 kubelet[2752]: I0314 00:21:04.111232 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:21:04.112002 kubelet[2752]: I0314 00:21:04.111282 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:21:04.112002 kubelet[2752]: I0314 00:21:04.111295 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:21:04.112002 kubelet[2752]: I0314 00:21:04.111307 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:21:04.112002 kubelet[2752]: I0314 00:21:04.111647 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:21:04.115174 kubelet[2752]: I0314 00:21:04.113204 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:21:04.115174 kubelet[2752]: I0314 00:21:04.113238 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cni-path" (OuterVolumeSpecName: "cni-path") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:21:04.117668 kubelet[2752]: I0314 00:21:04.117220 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-hostproc" (OuterVolumeSpecName: "hostproc") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:21:04.117668 kubelet[2752]: I0314 00:21:04.117350 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-kube-api-access-tk7dp" (OuterVolumeSpecName: "kube-api-access-tk7dp") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "kube-api-access-tk7dp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:21:04.117668 kubelet[2752]: I0314 00:21:04.117414 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:21:04.119595 kubelet[2752]: I0314 00:21:04.119568 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:21:04.121483 kubelet[2752]: I0314 00:21:04.121448 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa8a6828-d767-457b-86c0-fcf8be904672-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fa8a6828-d767-457b-86c0-fcf8be904672" (UID: "fa8a6828-d767-457b-86c0-fcf8be904672"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:21:04.121694 kubelet[2752]: I0314 00:21:04.121669 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 14 00:21:04.123906 kubelet[2752]: I0314 00:21:04.123881 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:21:04.124046 kubelet[2752]: I0314 00:21:04.124034 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" (UID: "40b1fc7f-6ce7-4e9b-8565-b5010ea67da5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:21:04.124525 kubelet[2752]: I0314 00:21:04.124507 2752 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8a6828-d767-457b-86c0-fcf8be904672-kube-api-access-4j4kw" (OuterVolumeSpecName: "kube-api-access-4j4kw") pod "fa8a6828-d767-457b-86c0-fcf8be904672" (UID: "fa8a6828-d767-457b-86c0-fcf8be904672"). InnerVolumeSpecName "kube-api-access-4j4kw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:21:04.212410 kubelet[2752]: I0314 00:21:04.212342 2752 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cilium-run\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212410 kubelet[2752]: I0314 00:21:04.212384 2752 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-bpf-maps\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212410 kubelet[2752]: I0314 00:21:04.212397 2752 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-clustermesh-secrets\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212410 kubelet[2752]: I0314 00:21:04.212411 2752 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tk7dp\" (UniqueName: \"kubernetes.io/projected/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-kube-api-access-tk7dp\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212410 kubelet[2752]: I0314 00:21:04.212427 2752 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-lib-modules\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212706 kubelet[2752]: I0314 00:21:04.212438 2752 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cni-path\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212706 kubelet[2752]: I0314 00:21:04.212448 2752 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-hubble-tls\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212706 kubelet[2752]: I0314 00:21:04.212462 2752 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-hostproc\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212706 kubelet[2752]: I0314 00:21:04.212473 2752 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cilium-cgroup\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212706 kubelet[2752]: I0314 00:21:04.212485 2752 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-cilium-config-path\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212706 kubelet[2752]: I0314 00:21:04.212498 2752 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4j4kw\" (UniqueName: \"kubernetes.io/projected/fa8a6828-d767-457b-86c0-fcf8be904672-kube-api-access-4j4kw\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212706 kubelet[2752]: I0314 00:21:04.212510 2752 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212706 kubelet[2752]: I0314 00:21:04.212519 2752 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa8a6828-d767-457b-86c0-fcf8be904672-cilium-config-path\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212908 kubelet[2752]: I0314 00:21:04.212530 2752 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-etc-cni-netd\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212908 kubelet[2752]: I0314 00:21:04.212544 2752 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-host-proc-sys-net\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.212908 kubelet[2752]: I0314 00:21:04.212555 2752 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5-xtables-lock\") on node \"ci-4081-3-6-n-968d08e397\" DevicePath \"\""
Mar 14 00:21:04.827501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31-rootfs.mount: Deactivated successfully.
Mar 14 00:21:04.827669 systemd[1]: var-lib-kubelet-pods-fa8a6828\x2dd767\x2d457b\x2d86c0\x2dfcf8be904672-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4j4kw.mount: Deactivated successfully.
Mar 14 00:21:04.827794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e-rootfs.mount: Deactivated successfully.
Mar 14 00:21:04.827913 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e-shm.mount: Deactivated successfully.
Mar 14 00:21:04.828024 systemd[1]: var-lib-kubelet-pods-40b1fc7f\x2d6ce7\x2d4e9b\x2d8565\x2db5010ea67da5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtk7dp.mount: Deactivated successfully.
Mar 14 00:21:04.828132 systemd[1]: var-lib-kubelet-pods-40b1fc7f\x2d6ce7\x2d4e9b\x2d8565\x2db5010ea67da5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 14 00:21:04.828260 systemd[1]: var-lib-kubelet-pods-40b1fc7f\x2d6ce7\x2d4e9b\x2d8565\x2db5010ea67da5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 14 00:21:04.849192 kubelet[2752]: I0314 00:21:04.849045 2752 scope.go:117] "RemoveContainer" containerID="505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f" Mar 14 00:21:04.851216 containerd[1609]: time="2026-03-14T00:21:04.850969848Z" level=info msg="RemoveContainer for \"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\"" Mar 14 00:21:04.857904 containerd[1609]: time="2026-03-14T00:21:04.857852580Z" level=info msg="RemoveContainer for \"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\" returns successfully" Mar 14 00:21:04.858116 kubelet[2752]: I0314 00:21:04.858083 2752 scope.go:117] "RemoveContainer" containerID="505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f" Mar 14 00:21:04.858322 containerd[1609]: time="2026-03-14T00:21:04.858281937Z" level=error msg="ContainerStatus for \"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\": not found" Mar 14 00:21:04.858627 kubelet[2752]: E0314 00:21:04.858437 2752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\": not found" containerID="505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f" Mar 14 00:21:04.858627 kubelet[2752]: I0314 00:21:04.858472 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f"} err="failed to get container status 
\"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"505dfb5a7301bb0b97dd400b0590705b9870ea6035e99a04f6948b464ae8ff8f\": not found" Mar 14 00:21:04.858627 kubelet[2752]: I0314 00:21:04.858510 2752 scope.go:117] "RemoveContainer" containerID="21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465" Mar 14 00:21:04.860428 containerd[1609]: time="2026-03-14T00:21:04.860388672Z" level=info msg="RemoveContainer for \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\"" Mar 14 00:21:04.866529 containerd[1609]: time="2026-03-14T00:21:04.866490087Z" level=info msg="RemoveContainer for \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\" returns successfully" Mar 14 00:21:04.866736 kubelet[2752]: I0314 00:21:04.866702 2752 scope.go:117] "RemoveContainer" containerID="b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80" Mar 14 00:21:04.867634 containerd[1609]: time="2026-03-14T00:21:04.867607991Z" level=info msg="RemoveContainer for \"b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80\"" Mar 14 00:21:04.874612 containerd[1609]: time="2026-03-14T00:21:04.874536362Z" level=info msg="RemoveContainer for \"b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80\" returns successfully" Mar 14 00:21:04.882476 kubelet[2752]: I0314 00:21:04.882299 2752 scope.go:117] "RemoveContainer" containerID="29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151" Mar 14 00:21:04.884751 containerd[1609]: time="2026-03-14T00:21:04.884582021Z" level=info msg="RemoveContainer for \"29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151\"" Mar 14 00:21:04.889388 containerd[1609]: time="2026-03-14T00:21:04.889223876Z" level=info msg="RemoveContainer for \"29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151\" returns successfully" Mar 14 00:21:04.889579 kubelet[2752]: I0314 00:21:04.889508 2752 
scope.go:117] "RemoveContainer" containerID="fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f" Mar 14 00:21:04.890320 containerd[1609]: time="2026-03-14T00:21:04.890289011Z" level=info msg="RemoveContainer for \"fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f\"" Mar 14 00:21:04.892972 containerd[1609]: time="2026-03-14T00:21:04.892940998Z" level=info msg="RemoveContainer for \"fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f\" returns successfully" Mar 14 00:21:04.893102 kubelet[2752]: I0314 00:21:04.893086 2752 scope.go:117] "RemoveContainer" containerID="cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30" Mar 14 00:21:04.893963 containerd[1609]: time="2026-03-14T00:21:04.893937109Z" level=info msg="RemoveContainer for \"cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30\"" Mar 14 00:21:04.896939 containerd[1609]: time="2026-03-14T00:21:04.896867395Z" level=info msg="RemoveContainer for \"cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30\" returns successfully" Mar 14 00:21:04.897244 kubelet[2752]: I0314 00:21:04.897129 2752 scope.go:117] "RemoveContainer" containerID="21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465" Mar 14 00:21:04.897614 containerd[1609]: time="2026-03-14T00:21:04.897498576Z" level=error msg="ContainerStatus for \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\": not found" Mar 14 00:21:04.897718 kubelet[2752]: E0314 00:21:04.897660 2752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\": not found" containerID="21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465" Mar 14 00:21:04.897767 
kubelet[2752]: I0314 00:21:04.897721 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465"} err="failed to get container status \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\": rpc error: code = NotFound desc = an error occurred when try to find container \"21becf0c453716bea7a76fd96ac106d431c5f507593e8412cbdbb743cedec465\": not found" Mar 14 00:21:04.897767 kubelet[2752]: I0314 00:21:04.897750 2752 scope.go:117] "RemoveContainer" containerID="b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80" Mar 14 00:21:04.898049 containerd[1609]: time="2026-03-14T00:21:04.897964388Z" level=error msg="ContainerStatus for \"b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80\": not found" Mar 14 00:21:04.898152 kubelet[2752]: E0314 00:21:04.898115 2752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80\": not found" containerID="b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80" Mar 14 00:21:04.898213 kubelet[2752]: I0314 00:21:04.898183 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80"} err="failed to get container status \"b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7800b6899d299d392e8c27b82f93b2066660bd6da04260d2840f2974418ef80\": not found" Mar 14 00:21:04.898213 kubelet[2752]: I0314 00:21:04.898209 2752 scope.go:117] "RemoveContainer" 
containerID="29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151" Mar 14 00:21:04.898480 containerd[1609]: time="2026-03-14T00:21:04.898442288Z" level=error msg="ContainerStatus for \"29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151\": not found" Mar 14 00:21:04.898604 kubelet[2752]: E0314 00:21:04.898575 2752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151\": not found" containerID="29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151" Mar 14 00:21:04.898644 kubelet[2752]: I0314 00:21:04.898607 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151"} err="failed to get container status \"29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151\": rpc error: code = NotFound desc = an error occurred when try to find container \"29239af38dfb62c61d5ac731c4c62fc5da719830ffe534e4e752fa3ddf70a151\": not found" Mar 14 00:21:04.898644 kubelet[2752]: I0314 00:21:04.898628 2752 scope.go:117] "RemoveContainer" containerID="fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f" Mar 14 00:21:04.898834 containerd[1609]: time="2026-03-14T00:21:04.898799206Z" level=error msg="ContainerStatus for \"fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f\": not found" Mar 14 00:21:04.898985 kubelet[2752]: E0314 00:21:04.898931 2752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f\": not found" containerID="fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f" Mar 14 00:21:04.899030 kubelet[2752]: I0314 00:21:04.898984 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f"} err="failed to get container status \"fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe549b3f62e1e644c497e0d9a645fd4a666f67ac5bf1787d3b1bfb636dbaeb2f\": not found" Mar 14 00:21:04.899030 kubelet[2752]: I0314 00:21:04.899003 2752 scope.go:117] "RemoveContainer" containerID="cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30" Mar 14 00:21:04.899208 containerd[1609]: time="2026-03-14T00:21:04.899172379Z" level=error msg="ContainerStatus for \"cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30\": not found" Mar 14 00:21:04.899351 kubelet[2752]: E0314 00:21:04.899322 2752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30\": not found" containerID="cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30" Mar 14 00:21:04.899351 kubelet[2752]: I0314 00:21:04.899349 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30"} err="failed to get container status \"cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"cf0d7fde4b3c2864f169031abe9fa26458a9ace6b41e9f7ee2ff69a178011c30\": not found" Mar 14 00:21:05.879241 sshd[4337]: pam_unix(sshd:session): session closed for user core Mar 14 00:21:05.889892 systemd-logind[1588]: Session 20 logged out. Waiting for processes to exit. Mar 14 00:21:05.891617 systemd[1]: sshd@19-204.168.138.0:22-68.220.241.50:48034.service: Deactivated successfully. Mar 14 00:21:05.899284 systemd[1]: session-20.scope: Deactivated successfully. Mar 14 00:21:05.902810 systemd-logind[1588]: Removed session 20. Mar 14 00:21:06.006665 systemd[1]: Started sshd@20-204.168.138.0:22-68.220.241.50:40650.service - OpenSSH per-connection server daemon (68.220.241.50:40650). Mar 14 00:21:06.254475 kubelet[2752]: I0314 00:21:06.253878 2752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40b1fc7f-6ce7-4e9b-8565-b5010ea67da5" path="/var/lib/kubelet/pods/40b1fc7f-6ce7-4e9b-8565-b5010ea67da5/volumes" Mar 14 00:21:06.255668 kubelet[2752]: I0314 00:21:06.255609 2752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa8a6828-d767-457b-86c0-fcf8be904672" path="/var/lib/kubelet/pods/fa8a6828-d767-457b-86c0-fcf8be904672/volumes" Mar 14 00:21:06.772492 sshd[4507]: Accepted publickey for core from 68.220.241.50 port 40650 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:21:06.776060 sshd[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:21:06.785871 systemd-logind[1588]: New session 21 of user core. Mar 14 00:21:06.791796 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 14 00:21:07.669499 kubelet[2752]: E0314 00:21:07.669375 2752 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4081-3-6-n-968d08e397\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-968d08e397' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Mar 14 00:21:07.669499 kubelet[2752]: I0314 00:21:07.669420 2752 status_manager.go:895] "Failed to get status for pod" podUID="bd1b4eef-9f39-4300-bee0-c7630c96c76d" pod="kube-system/cilium-wq25t" err="pods \"cilium-wq25t\" is forbidden: User \"system:node:ci-4081-3-6-n-968d08e397\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-968d08e397' and this object" Mar 14 00:21:07.801606 sshd[4507]: pam_unix(sshd:session): session closed for user core Mar 14 00:21:07.809489 systemd-logind[1588]: Session 21 logged out. Waiting for processes to exit. Mar 14 00:21:07.812169 systemd[1]: sshd@20-204.168.138.0:22-68.220.241.50:40650.service: Deactivated successfully. Mar 14 00:21:07.822079 systemd[1]: session-21.scope: Deactivated successfully. Mar 14 00:21:07.827741 systemd-logind[1588]: Removed session 21. 
Mar 14 00:21:07.840456 kubelet[2752]: I0314 00:21:07.840382 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd1b4eef-9f39-4300-bee0-c7630c96c76d-clustermesh-secrets\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.840456 kubelet[2752]: I0314 00:21:07.840459 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd1b4eef-9f39-4300-bee0-c7630c96c76d-hubble-tls\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.840673 kubelet[2752]: I0314 00:21:07.840487 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd1b4eef-9f39-4300-bee0-c7630c96c76d-bpf-maps\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.840673 kubelet[2752]: I0314 00:21:07.840512 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd1b4eef-9f39-4300-bee0-c7630c96c76d-hostproc\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.840673 kubelet[2752]: I0314 00:21:07.840537 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd1b4eef-9f39-4300-bee0-c7630c96c76d-cilium-config-path\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.840673 kubelet[2752]: I0314 00:21:07.840562 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd1b4eef-9f39-4300-bee0-c7630c96c76d-cni-path\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.840673 kubelet[2752]: I0314 00:21:07.840590 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd1b4eef-9f39-4300-bee0-c7630c96c76d-xtables-lock\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.840673 kubelet[2752]: I0314 00:21:07.840622 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bd1b4eef-9f39-4300-bee0-c7630c96c76d-cilium-ipsec-secrets\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.840964 kubelet[2752]: I0314 00:21:07.840668 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd1b4eef-9f39-4300-bee0-c7630c96c76d-etc-cni-netd\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.840964 kubelet[2752]: I0314 00:21:07.840695 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd1b4eef-9f39-4300-bee0-c7630c96c76d-lib-modules\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.840964 kubelet[2752]: I0314 00:21:07.840721 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/bd1b4eef-9f39-4300-bee0-c7630c96c76d-host-proc-sys-kernel\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.840964 kubelet[2752]: I0314 00:21:07.840749 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd1b4eef-9f39-4300-bee0-c7630c96c76d-cilium-run\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.840964 kubelet[2752]: I0314 00:21:07.840774 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd1b4eef-9f39-4300-bee0-c7630c96c76d-cilium-cgroup\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.840964 kubelet[2752]: I0314 00:21:07.840802 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd1b4eef-9f39-4300-bee0-c7630c96c76d-host-proc-sys-net\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.841190 kubelet[2752]: I0314 00:21:07.840828 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggwmx\" (UniqueName: \"kubernetes.io/projected/bd1b4eef-9f39-4300-bee0-c7630c96c76d-kube-api-access-ggwmx\") pod \"cilium-wq25t\" (UID: \"bd1b4eef-9f39-4300-bee0-c7630c96c76d\") " pod="kube-system/cilium-wq25t" Mar 14 00:21:07.932083 systemd[1]: Started sshd@21-204.168.138.0:22-68.220.241.50:40664.service - OpenSSH per-connection server daemon (68.220.241.50:40664). 
Mar 14 00:21:08.374167 kubelet[2752]: E0314 00:21:08.374012 2752 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 14 00:21:08.673030 sshd[4520]: Accepted publickey for core from 68.220.241.50 port 40664 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:21:08.675082 sshd[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:21:08.680191 systemd-logind[1588]: New session 22 of user core. Mar 14 00:21:08.685578 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 14 00:21:09.174176 containerd[1609]: time="2026-03-14T00:21:09.174102203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wq25t,Uid:bd1b4eef-9f39-4300-bee0-c7630c96c76d,Namespace:kube-system,Attempt:0,}" Mar 14 00:21:09.195560 sshd[4520]: pam_unix(sshd:session): session closed for user core Mar 14 00:21:09.199719 systemd[1]: sshd@21-204.168.138.0:22-68.220.241.50:40664.service: Deactivated successfully. Mar 14 00:21:09.201457 systemd-logind[1588]: Session 22 logged out. Waiting for processes to exit. Mar 14 00:21:09.213536 systemd[1]: session-22.scope: Deactivated successfully. Mar 14 00:21:09.214886 systemd-logind[1588]: Removed session 22. Mar 14 00:21:09.215787 containerd[1609]: time="2026-03-14T00:21:09.215564384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:21:09.215787 containerd[1609]: time="2026-03-14T00:21:09.215624494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:21:09.215787 containerd[1609]: time="2026-03-14T00:21:09.215637984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:21:09.216030 containerd[1609]: time="2026-03-14T00:21:09.215985368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:21:09.256563 containerd[1609]: time="2026-03-14T00:21:09.256503897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wq25t,Uid:bd1b4eef-9f39-4300-bee0-c7630c96c76d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8eb038ecfdb4964a0e27b9fccadac5a2d4e3787ebe9f357a7cc6ddb29e60e81\"" Mar 14 00:21:09.263666 containerd[1609]: time="2026-03-14T00:21:09.263618694Z" level=info msg="CreateContainer within sandbox \"d8eb038ecfdb4964a0e27b9fccadac5a2d4e3787ebe9f357a7cc6ddb29e60e81\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:21:09.275783 containerd[1609]: time="2026-03-14T00:21:09.275740239Z" level=info msg="CreateContainer within sandbox \"d8eb038ecfdb4964a0e27b9fccadac5a2d4e3787ebe9f357a7cc6ddb29e60e81\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"77b612690bd5ba00b84392f1f52cd2c9913f4ba56868ef48da27fb1e7d20b2e9\"" Mar 14 00:21:09.276529 containerd[1609]: time="2026-03-14T00:21:09.276500465Z" level=info msg="StartContainer for \"77b612690bd5ba00b84392f1f52cd2c9913f4ba56868ef48da27fb1e7d20b2e9\"" Mar 14 00:21:09.322703 systemd[1]: Started sshd@22-204.168.138.0:22-68.220.241.50:40668.service - OpenSSH per-connection server daemon (68.220.241.50:40668). 
Mar 14 00:21:09.340525 containerd[1609]: time="2026-03-14T00:21:09.340465233Z" level=info msg="StartContainer for \"77b612690bd5ba00b84392f1f52cd2c9913f4ba56868ef48da27fb1e7d20b2e9\" returns successfully" Mar 14 00:21:09.377927 containerd[1609]: time="2026-03-14T00:21:09.377865770Z" level=info msg="shim disconnected" id=77b612690bd5ba00b84392f1f52cd2c9913f4ba56868ef48da27fb1e7d20b2e9 namespace=k8s.io Mar 14 00:21:09.377927 containerd[1609]: time="2026-03-14T00:21:09.377916486Z" level=warning msg="cleaning up after shim disconnected" id=77b612690bd5ba00b84392f1f52cd2c9913f4ba56868ef48da27fb1e7d20b2e9 namespace=k8s.io Mar 14 00:21:09.377927 containerd[1609]: time="2026-03-14T00:21:09.377924718Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:21:09.389531 containerd[1609]: time="2026-03-14T00:21:09.389477466Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:21:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:21:09.878523 containerd[1609]: time="2026-03-14T00:21:09.878421491Z" level=info msg="CreateContainer within sandbox \"d8eb038ecfdb4964a0e27b9fccadac5a2d4e3787ebe9f357a7cc6ddb29e60e81\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:21:09.896690 containerd[1609]: time="2026-03-14T00:21:09.894884171Z" level=info msg="CreateContainer within sandbox \"d8eb038ecfdb4964a0e27b9fccadac5a2d4e3787ebe9f357a7cc6ddb29e60e81\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7adc61833bcb4dc45f73a884501889544f9197fcd2b02055c440cec8fd3e3bda\"" Mar 14 00:21:09.899128 containerd[1609]: time="2026-03-14T00:21:09.899080036Z" level=info msg="StartContainer for \"7adc61833bcb4dc45f73a884501889544f9197fcd2b02055c440cec8fd3e3bda\"" Mar 14 00:21:09.968569 containerd[1609]: time="2026-03-14T00:21:09.968398555Z" level=info msg="StartContainer for 
\"7adc61833bcb4dc45f73a884501889544f9197fcd2b02055c440cec8fd3e3bda\" returns successfully" Mar 14 00:21:09.999108 containerd[1609]: time="2026-03-14T00:21:09.999033283Z" level=info msg="shim disconnected" id=7adc61833bcb4dc45f73a884501889544f9197fcd2b02055c440cec8fd3e3bda namespace=k8s.io Mar 14 00:21:09.999108 containerd[1609]: time="2026-03-14T00:21:09.999095977Z" level=warning msg="cleaning up after shim disconnected" id=7adc61833bcb4dc45f73a884501889544f9197fcd2b02055c440cec8fd3e3bda namespace=k8s.io Mar 14 00:21:09.999108 containerd[1609]: time="2026-03-14T00:21:09.999104850Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:21:10.012665 containerd[1609]: time="2026-03-14T00:21:10.012608971Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:21:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:21:10.076809 sshd[4600]: Accepted publickey for core from 68.220.241.50 port 40668 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:21:10.078334 sshd[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:21:10.084027 systemd-logind[1588]: New session 23 of user core. Mar 14 00:21:10.092731 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 14 00:21:10.878577 containerd[1609]: time="2026-03-14T00:21:10.878176177Z" level=info msg="CreateContainer within sandbox \"d8eb038ecfdb4964a0e27b9fccadac5a2d4e3787ebe9f357a7cc6ddb29e60e81\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 14 00:21:10.903656 containerd[1609]: time="2026-03-14T00:21:10.903593511Z" level=info msg="CreateContainer within sandbox \"d8eb038ecfdb4964a0e27b9fccadac5a2d4e3787ebe9f357a7cc6ddb29e60e81\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b43260c7d51b94bc6aa8baaa0f63012e124844c544da7e6349e58cac15966f7a\"" Mar 14 00:21:10.904311 containerd[1609]: time="2026-03-14T00:21:10.904274297Z" level=info msg="StartContainer for \"b43260c7d51b94bc6aa8baaa0f63012e124844c544da7e6349e58cac15966f7a\"" Mar 14 00:21:10.977878 containerd[1609]: time="2026-03-14T00:21:10.976895525Z" level=info msg="StartContainer for \"b43260c7d51b94bc6aa8baaa0f63012e124844c544da7e6349e58cac15966f7a\" returns successfully" Mar 14 00:21:11.006119 containerd[1609]: time="2026-03-14T00:21:11.006064556Z" level=info msg="shim disconnected" id=b43260c7d51b94bc6aa8baaa0f63012e124844c544da7e6349e58cac15966f7a namespace=k8s.io Mar 14 00:21:11.006390 containerd[1609]: time="2026-03-14T00:21:11.006355364Z" level=warning msg="cleaning up after shim disconnected" id=b43260c7d51b94bc6aa8baaa0f63012e124844c544da7e6349e58cac15966f7a namespace=k8s.io Mar 14 00:21:11.006390 containerd[1609]: time="2026-03-14T00:21:11.006371709Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:21:11.189763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b43260c7d51b94bc6aa8baaa0f63012e124844c544da7e6349e58cac15966f7a-rootfs.mount: Deactivated successfully. 
Mar 14 00:21:11.888920 containerd[1609]: time="2026-03-14T00:21:11.888858457Z" level=info msg="CreateContainer within sandbox \"d8eb038ecfdb4964a0e27b9fccadac5a2d4e3787ebe9f357a7cc6ddb29e60e81\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:21:11.913786 containerd[1609]: time="2026-03-14T00:21:11.911067697Z" level=info msg="CreateContainer within sandbox \"d8eb038ecfdb4964a0e27b9fccadac5a2d4e3787ebe9f357a7cc6ddb29e60e81\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"38e049d79c8141d219101191eafe23a5fcf34d3124f06d5332d84dcfd998ddc2\""
Mar 14 00:21:11.917348 containerd[1609]: time="2026-03-14T00:21:11.914152418Z" level=info msg="StartContainer for \"38e049d79c8141d219101191eafe23a5fcf34d3124f06d5332d84dcfd998ddc2\""
Mar 14 00:21:11.923542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3697463146.mount: Deactivated successfully.
Mar 14 00:21:12.023784 containerd[1609]: time="2026-03-14T00:21:12.023570975Z" level=info msg="StartContainer for \"38e049d79c8141d219101191eafe23a5fcf34d3124f06d5332d84dcfd998ddc2\" returns successfully"
Mar 14 00:21:12.049000 containerd[1609]: time="2026-03-14T00:21:12.048920613Z" level=info msg="shim disconnected" id=38e049d79c8141d219101191eafe23a5fcf34d3124f06d5332d84dcfd998ddc2 namespace=k8s.io
Mar 14 00:21:12.049000 containerd[1609]: time="2026-03-14T00:21:12.048995426Z" level=warning msg="cleaning up after shim disconnected" id=38e049d79c8141d219101191eafe23a5fcf34d3124f06d5332d84dcfd998ddc2 namespace=k8s.io
Mar 14 00:21:12.049000 containerd[1609]: time="2026-03-14T00:21:12.049003688Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:21:12.061361 containerd[1609]: time="2026-03-14T00:21:12.061290998Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:21:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:21:12.186100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38e049d79c8141d219101191eafe23a5fcf34d3124f06d5332d84dcfd998ddc2-rootfs.mount: Deactivated successfully.
Mar 14 00:21:12.889171 containerd[1609]: time="2026-03-14T00:21:12.889080610Z" level=info msg="CreateContainer within sandbox \"d8eb038ecfdb4964a0e27b9fccadac5a2d4e3787ebe9f357a7cc6ddb29e60e81\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:21:12.909112 containerd[1609]: time="2026-03-14T00:21:12.909064977Z" level=info msg="CreateContainer within sandbox \"d8eb038ecfdb4964a0e27b9fccadac5a2d4e3787ebe9f357a7cc6ddb29e60e81\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f60b805722422e57ce6250767159fe19103b1daa65ea47d88ebca9bb298e0f37\""
Mar 14 00:21:12.909933 containerd[1609]: time="2026-03-14T00:21:12.909848859Z" level=info msg="StartContainer for \"f60b805722422e57ce6250767159fe19103b1daa65ea47d88ebca9bb298e0f37\""
Mar 14 00:21:12.975002 containerd[1609]: time="2026-03-14T00:21:12.974949513Z" level=info msg="StartContainer for \"f60b805722422e57ce6250767159fe19103b1daa65ea47d88ebca9bb298e0f37\" returns successfully"
Mar 14 00:21:13.381169 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 14 00:21:13.913386 kubelet[2752]: I0314 00:21:13.912363    2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wq25t" podStartSLOduration=6.912337477 podStartE2EDuration="6.912337477s" podCreationTimestamp="2026-03-14 00:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:21:13.91155722 +0000 UTC m=+255.755080948" watchObservedRunningTime="2026-03-14 00:21:13.912337477 +0000 UTC m=+255.755861205"
Mar 14 00:21:16.160497 systemd-networkd[1258]: lxc_health: Link UP
Mar 14 00:21:16.168291 systemd-networkd[1258]: lxc_health: Gained carrier
Mar 14 00:21:16.922604 systemd[1]: run-containerd-runc-k8s.io-f60b805722422e57ce6250767159fe19103b1daa65ea47d88ebca9bb298e0f37-runc.zNJCvD.mount: Deactivated successfully.
Mar 14 00:21:17.400257 systemd-networkd[1258]: lxc_health: Gained IPv6LL
Mar 14 00:21:23.562874 sshd[4600]: pam_unix(sshd:session): session closed for user core
Mar 14 00:21:23.574557 systemd[1]: sshd@22-204.168.138.0:22-68.220.241.50:40668.service: Deactivated successfully.
Mar 14 00:21:23.582078 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:21:23.583409 systemd-logind[1588]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:21:23.584436 systemd-logind[1588]: Removed session 23.
Mar 14 00:21:40.186507 kubelet[2752]: E0314 00:21:40.186456    2752 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:42478->10.0.0.2:2379: read: connection timed out"
Mar 14 00:21:40.216377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5eefefd9abbbef2091549bd9a44ccd0ba1d779452eb7b31f5d127fdb0e344c26-rootfs.mount: Deactivated successfully.
Mar 14 00:21:40.234988 containerd[1609]: time="2026-03-14T00:21:40.234877389Z" level=info msg="shim disconnected" id=5eefefd9abbbef2091549bd9a44ccd0ba1d779452eb7b31f5d127fdb0e344c26 namespace=k8s.io
Mar 14 00:21:40.234988 containerd[1609]: time="2026-03-14T00:21:40.234973404Z" level=warning msg="cleaning up after shim disconnected" id=5eefefd9abbbef2091549bd9a44ccd0ba1d779452eb7b31f5d127fdb0e344c26 namespace=k8s.io
Mar 14 00:21:40.235540 containerd[1609]: time="2026-03-14T00:21:40.234986404Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:21:40.954634 kubelet[2752]: I0314 00:21:40.954273    2752 scope.go:117] "RemoveContainer" containerID="5eefefd9abbbef2091549bd9a44ccd0ba1d779452eb7b31f5d127fdb0e344c26"
Mar 14 00:21:40.958041 containerd[1609]: time="2026-03-14T00:21:40.957961750Z" level=info msg="CreateContainer within sandbox \"ba0350828ab428cc01504b3385b2127c3708f5213886128cb9b2fd71cdd41c35\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 14 00:21:40.981031 containerd[1609]: time="2026-03-14T00:21:40.980944192Z" level=info msg="CreateContainer within sandbox \"ba0350828ab428cc01504b3385b2127c3708f5213886128cb9b2fd71cdd41c35\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f89e466f30e63f7531d4ec47dea773bd22e9b932c6edc80f842f708739d8882b\""
Mar 14 00:21:40.982186 containerd[1609]: time="2026-03-14T00:21:40.981853116Z" level=info msg="StartContainer for \"f89e466f30e63f7531d4ec47dea773bd22e9b932c6edc80f842f708739d8882b\""
Mar 14 00:21:41.079283 containerd[1609]: time="2026-03-14T00:21:41.079233448Z" level=info msg="StartContainer for \"f89e466f30e63f7531d4ec47dea773bd22e9b932c6edc80f842f708739d8882b\" returns successfully"
Mar 14 00:21:44.984220 kubelet[2752]: E0314 00:21:44.983910    2752 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:42326->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-968d08e397.189c8d48327ca0cb  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-968d08e397,UID:21bb3c4b8581028b0cbca34028c3f070,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-968d08e397,},FirstTimestamp:2026-03-14 00:21:34.501322955 +0000 UTC m=+276.344846653,LastTimestamp:2026-03-14 00:21:34.501322955 +0000 UTC m=+276.344846653,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-968d08e397,}"
Mar 14 00:21:45.691444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8efaa20eb1cee687c42898c220293e57c4a66f0b4410ba16f44d2c7cdd06ccc8-rootfs.mount: Deactivated successfully.
Mar 14 00:21:45.699044 containerd[1609]: time="2026-03-14T00:21:45.697747154Z" level=info msg="shim disconnected" id=8efaa20eb1cee687c42898c220293e57c4a66f0b4410ba16f44d2c7cdd06ccc8 namespace=k8s.io
Mar 14 00:21:45.699044 containerd[1609]: time="2026-03-14T00:21:45.697820184Z" level=warning msg="cleaning up after shim disconnected" id=8efaa20eb1cee687c42898c220293e57c4a66f0b4410ba16f44d2c7cdd06ccc8 namespace=k8s.io
Mar 14 00:21:45.699044 containerd[1609]: time="2026-03-14T00:21:45.697837350Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:21:45.967594 kubelet[2752]: I0314 00:21:45.967426    2752 scope.go:117] "RemoveContainer" containerID="8efaa20eb1cee687c42898c220293e57c4a66f0b4410ba16f44d2c7cdd06ccc8"
Mar 14 00:21:45.968818 containerd[1609]: time="2026-03-14T00:21:45.968786263Z" level=info msg="CreateContainer within sandbox \"fd63024fde4cdd7c3377305161cd828f284b73d14e62d551b18dbf75b9be4e0a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 14 00:21:45.982778 containerd[1609]: time="2026-03-14T00:21:45.982742574Z" level=info msg="CreateContainer within sandbox \"fd63024fde4cdd7c3377305161cd828f284b73d14e62d551b18dbf75b9be4e0a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5917f464a99f021a54eee51a1d00f0d858822a725a4e1c81d083870b337e9159\""
Mar 14 00:21:45.983106 containerd[1609]: time="2026-03-14T00:21:45.983077519Z" level=info msg="StartContainer for \"5917f464a99f021a54eee51a1d00f0d858822a725a4e1c81d083870b337e9159\""
Mar 14 00:21:46.043078 containerd[1609]: time="2026-03-14T00:21:46.043040560Z" level=info msg="StartContainer for \"5917f464a99f021a54eee51a1d00f0d858822a725a4e1c81d083870b337e9159\" returns successfully"
Mar 14 00:21:50.187120 kubelet[2752]: E0314 00:21:50.186771    2752 controller.go:195] "Failed to update lease" err="Put \"https://204.168.138.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-968d08e397?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 14 00:21:51.191895 kubelet[2752]: I0314 00:21:51.191620    2752 status_manager.go:895] "Failed to get status for pod" podUID="1f478f82dd90d2e9887486ed4acb60ec" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-968d08e397" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:42390->10.0.0.2:2379: read: connection timed out"
Mar 14 00:21:58.258775 containerd[1609]: time="2026-03-14T00:21:58.258636167Z" level=info msg="StopPodSandbox for \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\""
Mar 14 00:21:58.258775 containerd[1609]: time="2026-03-14T00:21:58.258719543Z" level=info msg="TearDown network for sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" successfully"
Mar 14 00:21:58.258775 containerd[1609]: time="2026-03-14T00:21:58.258728186Z" level=info msg="StopPodSandbox for \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" returns successfully"
Mar 14 00:21:58.259356 containerd[1609]: time="2026-03-14T00:21:58.259085236Z" level=info msg="RemovePodSandbox for \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\""
Mar 14 00:21:58.259356 containerd[1609]: time="2026-03-14T00:21:58.259110044Z" level=info msg="Forcibly stopping sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\""
Mar 14 00:21:58.259356 containerd[1609]: time="2026-03-14T00:21:58.259194491Z" level=info msg="TearDown network for sandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" successfully"
Mar 14 00:21:58.263010 containerd[1609]: time="2026-03-14T00:21:58.262967657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:21:58.263010 containerd[1609]: time="2026-03-14T00:21:58.263026125Z" level=info msg="RemovePodSandbox \"372b37dd18d053a5673f41562c16a8e1c161b1eed229909530917f351f230b2e\" returns successfully"
Mar 14 00:21:58.263540 containerd[1609]: time="2026-03-14T00:21:58.263369193Z" level=info msg="StopPodSandbox for \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\""
Mar 14 00:21:58.263540 containerd[1609]: time="2026-03-14T00:21:58.263462205Z" level=info msg="TearDown network for sandbox \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\" successfully"
Mar 14 00:21:58.263540 containerd[1609]: time="2026-03-14T00:21:58.263488805Z" level=info msg="StopPodSandbox for \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\" returns successfully"
Mar 14 00:21:58.263760 containerd[1609]: time="2026-03-14T00:21:58.263728898Z" level=info msg="RemovePodSandbox for \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\""
Mar 14 00:21:58.263797 containerd[1609]: time="2026-03-14T00:21:58.263757501Z" level=info msg="Forcibly stopping sandbox \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\""
Mar 14 00:21:58.263829 containerd[1609]: time="2026-03-14T00:21:58.263813336Z" level=info msg="TearDown network for sandbox \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\" successfully"
Mar 14 00:21:58.267032 containerd[1609]: time="2026-03-14T00:21:58.267005562Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:21:58.267131 containerd[1609]: time="2026-03-14T00:21:58.267043670Z" level=info msg="RemovePodSandbox \"edda304fb174d34c51da41230a203c330a336ea1dc6aa52f8aa32094eadf6d31\" returns successfully"
Mar 14 00:22:00.189944 kubelet[2752]: E0314 00:22:00.189767    2752 controller.go:195] "Failed to update lease" err="Put \"https://204.168.138.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-968d08e397?timeout=10s\": context deadline exceeded"