Mar 7 01:10:47.994095 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:10:47.994112 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:10:47.994121 kernel: BIOS-provided physical RAM map:
Mar 7 01:10:47.994126 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 7 01:10:47.994131 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Mar 7 01:10:47.994135 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Mar 7 01:10:47.994140 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Mar 7 01:10:47.994145 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Mar 7 01:10:47.994149 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Mar 7 01:10:47.994154 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Mar 7 01:10:47.994158 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Mar 7 01:10:47.994165 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Mar 7 01:10:47.994170 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Mar 7 01:10:47.994183 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Mar 7 01:10:47.994188 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 7 01:10:47.994193 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:10:47.994200 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 7 01:10:47.994204 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Mar 7 01:10:47.994209 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:10:47.994214 kernel: NX (Execute Disable) protection: active
Mar 7 01:10:47.994218 kernel: APIC: Static calls initialized
Mar 7 01:10:47.994223 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 7 01:10:47.994228 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e84f198
Mar 7 01:10:47.994233 kernel: efi: Remove mem137: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 7 01:10:47.994237 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 7 01:10:47.994242 kernel: SMBIOS 3.0.0 present.
Mar 7 01:10:47.994247 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Mar 7 01:10:47.994252 kernel: Hypervisor detected: KVM
Mar 7 01:10:47.994259 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:10:47.994273 kernel: kvm-clock: using sched offset of 12716328316 cycles
Mar 7 01:10:47.994637 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:10:47.994644 kernel: tsc: Detected 2399.998 MHz processor
Mar 7 01:10:47.994649 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:10:47.994654 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:10:47.994659 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Mar 7 01:10:47.994664 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 7 01:10:47.994669 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:10:47.994677 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Mar 7 01:10:47.994682 kernel: Using GB pages for direct mapping
Mar 7 01:10:47.994687 kernel: Secure boot disabled
Mar 7 01:10:47.994695 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:10:47.994700 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Mar 7 01:10:47.994705 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 7 01:10:47.994710 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:10:47.994718 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:10:47.994723 kernel: ACPI: FACS 0x000000007FBDD000 000040
Mar 7 01:10:47.994728 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:10:47.994733 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:10:47.994738 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:10:47.994743 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:10:47.994748 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 7 01:10:47.994755 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Mar 7 01:10:47.994760 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Mar 7 01:10:47.994765 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Mar 7 01:10:47.994770 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Mar 7 01:10:47.994775 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Mar 7 01:10:47.994780 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Mar 7 01:10:47.994786 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Mar 7 01:10:47.994790 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Mar 7 01:10:47.994796 kernel: No NUMA configuration found
Mar 7 01:10:47.994803 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Mar 7 01:10:47.994808 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Mar 7 01:10:47.994813 kernel: Zone ranges:
Mar 7 01:10:47.994818 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:10:47.994823 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 7 01:10:47.994828 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Mar 7 01:10:47.994833 kernel: Movable zone start for each node
Mar 7 01:10:47.994838 kernel: Early memory node ranges
Mar 7 01:10:47.994843 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 7 01:10:47.994848 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Mar 7 01:10:47.994855 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Mar 7 01:10:47.994861 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Mar 7 01:10:47.994866 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Mar 7 01:10:47.994871 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Mar 7 01:10:47.994876 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:10:47.994881 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 7 01:10:47.994886 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 7 01:10:47.994891 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 7 01:10:47.994896 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Mar 7 01:10:47.994903 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 7 01:10:47.994908 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:10:47.994913 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:10:47.994918 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:10:47.994923 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:10:47.994928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:10:47.994933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:10:47.994938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:10:47.994943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:10:47.994950 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:10:47.994955 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:10:47.994960 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:10:47.994965 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:10:47.994970 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Mar 7 01:10:47.994975 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:10:47.994980 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:10:47.994985 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:10:47.994990 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:10:47.994998 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:10:47.995003 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:10:47.995008 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 7 01:10:47.995013 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:10:47.995019 kernel: random: crng init done
Mar 7 01:10:47.995024 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:10:47.995029 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:10:47.995034 kernel: Fallback order for Node 0: 0
Mar 7 01:10:47.995041 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Mar 7 01:10:47.995046 kernel: Policy zone: Normal
Mar 7 01:10:47.995051 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:10:47.995056 kernel: software IO TLB: area num 2.
Mar 7 01:10:47.995061 kernel: Memory: 3819404K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 271560K reserved, 0K cma-reserved)
Mar 7 01:10:47.995066 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:10:47.995071 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:10:47.995076 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:10:47.995082 kernel: Dynamic Preempt: voluntary
Mar 7 01:10:47.995089 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:10:47.995095 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:10:47.995100 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:10:47.995105 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:10:47.995118 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:10:47.995126 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:10:47.995131 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:10:47.995136 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:10:47.995142 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 7 01:10:47.995147 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:10:47.995152 kernel: Console: colour dummy device 80x25
Mar 7 01:10:47.995157 kernel: printk: console [tty0] enabled
Mar 7 01:10:47.995165 kernel: printk: console [ttyS0] enabled
Mar 7 01:10:47.995170 kernel: ACPI: Core revision 20230628
Mar 7 01:10:47.995185 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:10:47.995190 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:10:47.995195 kernel: x2apic enabled
Mar 7 01:10:47.995203 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:10:47.995208 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:10:47.995214 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 7 01:10:47.995219 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
Mar 7 01:10:47.995224 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:10:47.995230 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:10:47.995235 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:10:47.995240 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:10:47.995245 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Mar 7 01:10:47.995254 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 7 01:10:47.995259 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 7 01:10:47.997226 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:10:47.997236 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Mar 7 01:10:47.997242 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:10:47.997248 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:10:47.997253 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:10:47.997258 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:10:47.997276 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:10:47.997287 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 7 01:10:47.997292 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 7 01:10:47.997298 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 7 01:10:47.997308 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 7 01:10:47.997313 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:10:47.997318 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 7 01:10:47.997323 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 7 01:10:47.997329 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 7 01:10:47.997334 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Mar 7 01:10:47.997342 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Mar 7 01:10:47.997348 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:10:47.997353 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:10:47.997358 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:10:47.997363 kernel: landlock: Up and running.
Mar 7 01:10:47.997368 kernel: SELinux: Initializing.
Mar 7 01:10:47.997374 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:10:47.997379 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:10:47.997384 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Mar 7 01:10:47.997392 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:10:47.997397 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:10:47.997403 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:10:47.997408 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 7 01:10:47.997413 kernel: ... version: 0
Mar 7 01:10:47.997419 kernel: ... bit width: 48
Mar 7 01:10:47.997424 kernel: ... generic registers: 6
Mar 7 01:10:47.997429 kernel: ... value mask: 0000ffffffffffff
Mar 7 01:10:47.997435 kernel: ... max period: 00007fffffffffff
Mar 7 01:10:47.997443 kernel: ... fixed-purpose events: 0
Mar 7 01:10:47.997448 kernel: ... event mask: 000000000000003f
Mar 7 01:10:47.997453 kernel: signal: max sigframe size: 3376
Mar 7 01:10:47.997459 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:10:47.997464 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:10:47.997470 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:10:47.997475 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:10:47.997481 kernel: .... node #0, CPUs: #1
Mar 7 01:10:47.997486 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:10:47.997494 kernel: smpboot: Max logical packages: 1
Mar 7 01:10:47.997499 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS)
Mar 7 01:10:47.997504 kernel: devtmpfs: initialized
Mar 7 01:10:47.997510 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:10:47.997515 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Mar 7 01:10:47.997520 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:10:47.997525 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:10:47.997530 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:10:47.997536 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:10:47.997543 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:10:47.997549 kernel: audit: type=2000 audit(1772845846.042:1): state=initialized audit_enabled=0 res=1
Mar 7 01:10:47.997554 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:10:47.997559 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:10:47.997564 kernel: cpuidle: using governor menu
Mar 7 01:10:47.997569 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:10:47.997575 kernel: dca service started, version 1.12.1
Mar 7 01:10:47.997580 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Mar 7 01:10:47.997585 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:10:47.997593 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:10:47.997599 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:10:47.997604 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:10:47.997609 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:10:47.997614 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:10:47.997619 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:10:47.997625 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:10:47.997630 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:10:47.997635 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:10:47.997643 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:10:47.997648 kernel: ACPI: Interpreter enabled
Mar 7 01:10:47.997653 kernel: ACPI: PM: (supports S0 S5)
Mar 7 01:10:47.997658 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:10:47.997663 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:10:47.997669 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:10:47.997674 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:10:47.997679 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:10:47.997836 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:10:47.997946 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:10:47.998043 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:10:47.998049 kernel: PCI host bridge to bus 0000:00
Mar 7 01:10:47.998152 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:10:47.998251 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:10:47.998351 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:10:47.998443 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Mar 7 01:10:47.998530 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 7 01:10:47.998944 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Mar 7 01:10:47.999044 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:10:47.999154 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:10:48.000193 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Mar 7 01:10:48.000322 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Mar 7 01:10:48.000427 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Mar 7 01:10:48.000524 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Mar 7 01:10:48.000622 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 7 01:10:48.000718 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 7 01:10:48.000814 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:10:48.000916 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 7 01:10:48.001015 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Mar 7 01:10:48.001119 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 7 01:10:48.001223 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Mar 7 01:10:48.002380 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 7 01:10:48.002485 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Mar 7 01:10:48.002589 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 7 01:10:48.002689 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Mar 7 01:10:48.002790 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 7 01:10:48.002885 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Mar 7 01:10:48.002988 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 7 01:10:48.003083 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Mar 7 01:10:48.003192 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 7 01:10:48.003300 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Mar 7 01:10:48.003408 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 7 01:10:48.003503 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Mar 7 01:10:48.003606 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 7 01:10:48.003703 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Mar 7 01:10:48.003804 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:10:48.003900 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:10:48.004005 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:10:48.004100 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Mar 7 01:10:48.004203 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Mar 7 01:10:48.006470 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:10:48.006580 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Mar 7 01:10:48.006688 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 7 01:10:48.006795 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Mar 7 01:10:48.006894 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Mar 7 01:10:48.006994 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 7 01:10:48.007090 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 7 01:10:48.007193 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Mar 7 01:10:48.007300 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Mar 7 01:10:48.007406 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 7 01:10:48.007510 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Mar 7 01:10:48.007606 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 7 01:10:48.007702 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Mar 7 01:10:48.007808 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 7 01:10:48.007907 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Mar 7 01:10:48.008006 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Mar 7 01:10:48.008100 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 7 01:10:48.008205 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Mar 7 01:10:48.011523 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Mar 7 01:10:48.011648 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 7 01:10:48.011751 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Mar 7 01:10:48.011851 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 7 01:10:48.011947 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Mar 7 01:10:48.012053 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 7 01:10:48.012160 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Mar 7 01:10:48.012317 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Mar 7 01:10:48.012417 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 7 01:10:48.012514 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Mar 7 01:10:48.012609 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Mar 7 01:10:48.012717 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 7 01:10:48.012817 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Mar 7 01:10:48.012920 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Mar 7 01:10:48.013016 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 7 01:10:48.013112 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Mar 7 01:10:48.013216 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Mar 7 01:10:48.013223 kernel: acpiphp: Slot [0] registered
Mar 7 01:10:48.013659 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 7 01:10:48.013767 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Mar 7 01:10:48.013868 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Mar 7 01:10:48.013972 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 7 01:10:48.014069 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 7 01:10:48.014164 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Mar 7 01:10:48.014972 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Mar 7 01:10:48.014994 kernel: acpiphp: Slot [0-2] registered
Mar 7 01:10:48.015331 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 7 01:10:48.015653 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Mar 7 01:10:48.015780 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Mar 7 01:10:48.015792 kernel: acpiphp: Slot [0-3] registered
Mar 7 01:10:48.015995 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 7 01:10:48.016126 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Mar 7 01:10:48.016239 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Mar 7 01:10:48.016246 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:10:48.016253 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:10:48.016258 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:10:48.016278 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:10:48.016289 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:10:48.016295 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:10:48.016300 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:10:48.016306 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:10:48.016311 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:10:48.016316 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:10:48.016322 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:10:48.016327 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:10:48.016332 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:10:48.016341 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:10:48.016346 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:10:48.016351 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:10:48.016356 kernel: iommu: Default domain type: Translated
Mar 7 01:10:48.016362 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:10:48.016368 kernel: efivars: Registered efivars operations
Mar 7 01:10:48.016373 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:10:48.016379 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:10:48.016385 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Mar 7 01:10:48.016392 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Mar 7 01:10:48.016398 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Mar 7 01:10:48.016403 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Mar 7 01:10:48.016508 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:10:48.016610 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:10:48.016710 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:10:48.016717 kernel: vgaarb: loaded
Mar 7 01:10:48.016723 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:10:48.016729 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:10:48.016737 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:10:48.016743 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:10:48.016749 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:10:48.016755 kernel: pnp: PnP ACPI init
Mar 7 01:10:48.016869 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 7 01:10:48.016878 kernel: pnp: PnP ACPI: found 5 devices
Mar 7 01:10:48.016883 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:10:48.016889 kernel: NET: Registered PF_INET protocol family
Mar 7 01:10:48.016911 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:10:48.016920 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:10:48.016926 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:10:48.016932 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:10:48.016937 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:10:48.016943 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:10:48.016948 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:10:48.016954 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:10:48.016960 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:10:48.016968 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:10:48.017080 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Mar 7 01:10:48.017200 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Mar 7 01:10:48.020938 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 7 01:10:48.021053 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 7 01:10:48.021155 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 7 01:10:48.021283 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Mar 7 01:10:48.021408 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Mar 7 01:10:48.021509 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Mar 7 01:10:48.021610 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Mar 7 01:10:48.021708 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 7 01:10:48.021808 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Mar 7 01:10:48.021903 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Mar 7 01:10:48.022001 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 7 01:10:48.022097 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Mar 7 01:10:48.022205 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 7 01:10:48.022316 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Mar 7 01:10:48.022412 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Mar 7 01:10:48.022509 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 7 01:10:48.022604 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Mar 7 01:10:48.022712 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 7 01:10:48.022812 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Mar 7 01:10:48.022907 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Mar 7 01:10:48.023005 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 7 01:10:48.023100 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Mar 7 01:10:48.023206 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Mar 7 01:10:48.023325 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Mar 7 01:10:48.023427 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 7 01:10:48.023526 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Mar 7 01:10:48.023621 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Mar 7 01:10:48.023718 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Mar 7 01:10:48.023814 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 7 01:10:48.023909 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Mar 7 01:10:48.024003 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Mar 7 01:10:48.024099 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Mar 7 01:10:48.024202 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 7 01:10:48.024331 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Mar 7 01:10:48.024430 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Mar 7 01:10:48.024524 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Mar 7 01:10:48.024618 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:10:48.024717 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:10:48.024835 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:10:48.024924 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window]
Mar 7 01:10:48.025012 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 7 01:10:48.025099 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window]
Mar 7 01:10:48.025207 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff]
Mar 7 01:10:48.025334 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref]
Mar 7 01:10:48.025434 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff]
Mar 7 01:10:48.025537 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff]
Mar 7 01:10:48.025629 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref]
Mar 7 01:10:48.025728 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref]
Mar 7 01:10:48.025828 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff]
Mar 7 01:10:48.025921 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref]
Mar 7 01:10:48.026019 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff]
Mar 7 01:10:48.026144 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref]
Mar 7 01:10:48.026361 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Mar 7 01:10:48.026475 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff]
Mar 7 01:10:48.026576 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref]
Mar 7 01:10:48.026676 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Mar 7 01:10:48.026767 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff]
Mar 7 01:10:48.026858 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref]
Mar 7 01:10:48.026973 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Mar 7 01:10:48.027067 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff]
Mar 7 01:10:48.027159 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref]
Mar 7 01:10:48.027167 kernel: ACPI: \_SB_.GSIG:
Enabled at IRQ 22 Mar 7 01:10:48.027183 kernel: PCI: CLS 0 bytes, default 64 Mar 7 01:10:48.027189 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 7 01:10:48.027195 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Mar 7 01:10:48.027201 kernel: Initialise system trusted keyrings Mar 7 01:10:48.027210 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 7 01:10:48.027215 kernel: Key type asymmetric registered Mar 7 01:10:48.027221 kernel: Asymmetric key parser 'x509' registered Mar 7 01:10:48.027227 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 7 01:10:48.027232 kernel: io scheduler mq-deadline registered Mar 7 01:10:48.027238 kernel: io scheduler kyber registered Mar 7 01:10:48.027243 kernel: io scheduler bfq registered Mar 7 01:10:48.029381 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 7 01:10:48.029489 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 7 01:10:48.029591 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 7 01:10:48.029688 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 7 01:10:48.029786 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 7 01:10:48.029882 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 7 01:10:48.029978 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 7 01:10:48.030074 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 7 01:10:48.030171 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 7 01:10:48.030301 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 7 01:10:48.030404 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 7 01:10:48.030500 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 7 01:10:48.030596 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 7 01:10:48.030691 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 7 01:10:48.030786 kernel: pcieport 0000:00:02.7: 
PME: Signaling with IRQ 31 Mar 7 01:10:48.030882 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 7 01:10:48.030889 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 7 01:10:48.030985 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Mar 7 01:10:48.031084 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Mar 7 01:10:48.031091 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 7 01:10:48.031097 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Mar 7 01:10:48.031105 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 01:10:48.031110 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 7 01:10:48.031116 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 7 01:10:48.031122 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 7 01:10:48.031128 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 7 01:10:48.031133 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 7 01:10:48.031247 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 7 01:10:48.031355 kernel: rtc_cmos 00:03: registered as rtc0 Mar 7 01:10:48.031447 kernel: rtc_cmos 00:03: setting system clock to 2026-03-07T01:10:47 UTC (1772845847) Mar 7 01:10:48.031538 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 7 01:10:48.031545 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 7 01:10:48.031551 kernel: efifb: probing for efifb Mar 7 01:10:48.031557 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Mar 7 01:10:48.031562 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Mar 7 01:10:48.031572 kernel: efifb: scrolling: redraw Mar 7 01:10:48.031577 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 7 01:10:48.031583 kernel: Console: switching to colour frame buffer device 160x50 Mar 7 01:10:48.031589 kernel: fb0: EFI VGA frame buffer device Mar 7 01:10:48.031594 kernel: 
pstore: Using crash dump compression: deflate Mar 7 01:10:48.031601 kernel: pstore: Registered efi_pstore as persistent store backend Mar 7 01:10:48.031607 kernel: NET: Registered PF_INET6 protocol family Mar 7 01:10:48.031612 kernel: Segment Routing with IPv6 Mar 7 01:10:48.031618 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 01:10:48.031627 kernel: NET: Registered PF_PACKET protocol family Mar 7 01:10:48.031632 kernel: Key type dns_resolver registered Mar 7 01:10:48.031638 kernel: IPI shorthand broadcast: enabled Mar 7 01:10:48.031643 kernel: sched_clock: Marking stable (1343010786, 232197359)->(1637968712, -62760567) Mar 7 01:10:48.031648 kernel: registered taskstats version 1 Mar 7 01:10:48.031654 kernel: Loading compiled-in X.509 certificates Mar 7 01:10:48.031660 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90' Mar 7 01:10:48.031665 kernel: Key type .fscrypt registered Mar 7 01:10:48.031671 kernel: Key type fscrypt-provisioning registered Mar 7 01:10:48.031679 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 7 01:10:48.031684 kernel: ima: Allocated hash algorithm: sha1 Mar 7 01:10:48.031690 kernel: ima: No architecture policies found Mar 7 01:10:48.031695 kernel: clk: Disabling unused clocks Mar 7 01:10:48.031701 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 7 01:10:48.031707 kernel: Write protecting the kernel read-only data: 36864k Mar 7 01:10:48.031712 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 7 01:10:48.031718 kernel: Run /init as init process Mar 7 01:10:48.031723 kernel: with arguments: Mar 7 01:10:48.031731 kernel: /init Mar 7 01:10:48.031737 kernel: with environment: Mar 7 01:10:48.031742 kernel: HOME=/ Mar 7 01:10:48.031748 kernel: TERM=linux Mar 7 01:10:48.031755 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:10:48.031763 systemd[1]: Detected virtualization kvm. Mar 7 01:10:48.031769 systemd[1]: Detected architecture x86-64. Mar 7 01:10:48.031777 systemd[1]: Running in initrd. Mar 7 01:10:48.031783 systemd[1]: No hostname configured, using default hostname. Mar 7 01:10:48.031789 systemd[1]: Hostname set to . Mar 7 01:10:48.031795 systemd[1]: Initializing machine ID from VM UUID. Mar 7 01:10:48.031801 systemd[1]: Queued start job for default target initrd.target. Mar 7 01:10:48.031806 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:10:48.031812 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:10:48.031818 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 7 01:10:48.031826 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:10:48.031832 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 01:10:48.031838 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 01:10:48.031848 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 01:10:48.031854 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 01:10:48.031859 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:10:48.031865 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:10:48.031873 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:10:48.031879 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:10:48.031885 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:10:48.031891 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:10:48.031896 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:10:48.031902 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:10:48.031908 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 01:10:48.031914 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 01:10:48.031922 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:10:48.031928 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:10:48.031934 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:10:48.031940 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:10:48.031946 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Mar 7 01:10:48.031952 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:10:48.031958 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 01:10:48.031963 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 01:10:48.031969 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:10:48.031977 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:10:48.031983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:10:48.031989 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 01:10:48.032012 systemd-journald[188]: Collecting audit messages is disabled. Mar 7 01:10:48.032030 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:10:48.032036 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 01:10:48.032043 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:10:48.032049 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 01:10:48.032057 kernel: Bridge firewalling registered Mar 7 01:10:48.032062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:10:48.032068 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:10:48.032075 systemd-journald[188]: Journal started Mar 7 01:10:48.032087 systemd-journald[188]: Runtime Journal (/run/log/journal/b1215fbc4ceb443abfeabf61823370e0) is 8.0M, max 76.3M, 68.3M free. Mar 7 01:10:47.987669 systemd-modules-load[189]: Inserted module 'overlay' Mar 7 01:10:48.035676 systemd[1]: Started systemd-journald.service - Journal Service. 
Mar 7 01:10:48.029018 systemd-modules-load[189]: Inserted module 'br_netfilter' Mar 7 01:10:48.036304 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:10:48.043385 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:10:48.045393 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:10:48.047390 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:10:48.056400 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:10:48.064473 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:10:48.067588 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:10:48.069436 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:10:48.070443 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:10:48.076443 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 01:10:48.078134 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:10:48.086822 dracut-cmdline[224]: dracut-dracut-053 Mar 7 01:10:48.089573 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:10:48.105313 systemd-resolved[228]: Positive Trust Anchors: Mar 7 01:10:48.105331 systemd-resolved[228]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:10:48.105353 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:10:48.109364 systemd-resolved[228]: Defaulting to hostname 'linux'. Mar 7 01:10:48.110301 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:10:48.111304 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:10:48.154303 kernel: SCSI subsystem initialized Mar 7 01:10:48.162288 kernel: Loading iSCSI transport class v2.0-870. Mar 7 01:10:48.170288 kernel: iscsi: registered transport (tcp) Mar 7 01:10:48.187744 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:10:48.187785 kernel: QLogic iSCSI HBA Driver Mar 7 01:10:48.244100 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 01:10:48.252452 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:10:48.279965 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 7 01:10:48.280058 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:10:48.280081 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 01:10:48.323334 kernel: raid6: avx512x4 gen() 42512 MB/s Mar 7 01:10:48.341324 kernel: raid6: avx512x2 gen() 45376 MB/s Mar 7 01:10:48.359314 kernel: raid6: avx512x1 gen() 40277 MB/s Mar 7 01:10:48.377309 kernel: raid6: avx2x4 gen() 46023 MB/s Mar 7 01:10:48.395312 kernel: raid6: avx2x2 gen() 43110 MB/s Mar 7 01:10:48.414391 kernel: raid6: avx2x1 gen() 38859 MB/s Mar 7 01:10:48.414436 kernel: raid6: using algorithm avx2x4 gen() 46023 MB/s Mar 7 01:10:48.434407 kernel: raid6: .... xor() 4686 MB/s, rmw enabled Mar 7 01:10:48.434438 kernel: raid6: using avx512x2 recovery algorithm Mar 7 01:10:48.473316 kernel: xor: automatically using best checksumming function avx Mar 7 01:10:48.591300 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:10:48.608676 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:10:48.616503 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:10:48.629130 systemd-udevd[411]: Using default interface naming scheme 'v255'. Mar 7 01:10:48.633119 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:10:48.640503 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 01:10:48.660469 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Mar 7 01:10:48.701929 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:10:48.712425 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:10:48.781570 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:10:48.788443 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 7 01:10:48.808740 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:10:48.810681 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:10:48.811152 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:10:48.812193 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:10:48.817457 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:10:48.828734 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:10:48.861446 kernel: scsi host0: Virtio SCSI HBA Mar 7 01:10:48.864296 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:10:48.876283 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 7 01:10:48.890416 kernel: AVX2 version of gcm_enc/dec engaged. Mar 7 01:10:48.901951 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:10:48.902448 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:10:48.903382 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:10:48.904204 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:10:48.909150 kernel: AES CTR mode by8 optimization enabled Mar 7 01:10:48.904341 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:10:48.905418 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:10:48.917002 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:10:48.933274 kernel: libata version 3.00 loaded. 
Mar 7 01:10:48.940321 kernel: ACPI: bus type USB registered Mar 7 01:10:48.944717 kernel: usbcore: registered new interface driver usbfs Mar 7 01:10:48.944744 kernel: usbcore: registered new interface driver hub Mar 7 01:10:48.946572 kernel: usbcore: registered new device driver usb Mar 7 01:10:48.949722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:10:48.959420 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:10:48.970288 kernel: ahci 0000:00:1f.2: version 3.0 Mar 7 01:10:48.975288 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 7 01:10:48.979639 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 7 01:10:48.979819 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 7 01:10:48.979160 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:10:48.988974 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 7 01:10:48.989132 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 7 01:10:48.989258 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 7 01:10:48.993097 kernel: scsi host1: ahci Mar 7 01:10:48.993144 kernel: sd 0:0:0:0: Power-on or device reset occurred Mar 7 01:10:48.993331 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 7 01:10:48.997780 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Mar 7 01:10:48.997955 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 7 01:10:48.998081 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 7 01:10:48.998219 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 7 01:10:48.998363 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Mar 7 01:10:48.998485 kernel: hub 1-0:1.0: USB hub found Mar 7 01:10:49.000294 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, 
doesn't support DPO or FUA Mar 7 01:10:49.000440 kernel: scsi host2: ahci Mar 7 01:10:49.002297 kernel: hub 1-0:1.0: 4 ports detected Mar 7 01:10:49.005113 kernel: scsi host3: ahci Mar 7 01:10:49.005154 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 7 01:10:49.008435 kernel: scsi host4: ahci Mar 7 01:10:49.008469 kernel: hub 2-0:1.0: USB hub found Mar 7 01:10:49.013292 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 01:10:49.013319 kernel: scsi host5: ahci Mar 7 01:10:49.013474 kernel: GPT:17805311 != 160006143 Mar 7 01:10:49.013482 kernel: hub 2-0:1.0: 4 ports detected Mar 7 01:10:49.013609 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 01:10:49.013617 kernel: GPT:17805311 != 160006143 Mar 7 01:10:49.015522 kernel: scsi host6: ahci Mar 7 01:10:49.015551 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 01:10:49.017316 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48 Mar 7 01:10:49.017339 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:10:49.017356 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48 Mar 7 01:10:49.019698 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 7 01:10:49.019858 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48 Mar 7 01:10:49.039534 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48 Mar 7 01:10:49.045629 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48 Mar 7 01:10:49.045651 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48 Mar 7 01:10:49.063285 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (485) Mar 7 01:10:49.066883 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Mar 7 01:10:49.067524 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (457) Mar 7 01:10:49.075777 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 7 01:10:49.079748 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 7 01:10:49.083102 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 7 01:10:49.083793 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 7 01:10:49.089444 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:10:49.093467 disk-uuid[584]: Primary Header is updated. Mar 7 01:10:49.093467 disk-uuid[584]: Secondary Entries is updated. Mar 7 01:10:49.093467 disk-uuid[584]: Secondary Header is updated. Mar 7 01:10:49.100288 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:10:49.106301 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:10:49.112289 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:10:49.267292 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 7 01:10:49.365204 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 7 01:10:49.365331 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 7 01:10:49.370333 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 7 01:10:49.370427 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 7 01:10:49.380326 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 7 01:10:49.386492 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 7 01:10:49.386566 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 7 01:10:49.390648 kernel: ata1.00: applying bridge limits Mar 7 01:10:49.397353 kernel: ata1.00: configured for UDMA/100 Mar 7 01:10:49.406928 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 7 01:10:49.428310 
kernel: hid: raw HID events driver (C) Jiri Kosina Mar 7 01:10:49.445828 kernel: usbcore: registered new interface driver usbhid Mar 7 01:10:49.445874 kernel: usbhid: USB HID core driver Mar 7 01:10:49.459402 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 7 01:10:49.459429 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 7 01:10:49.475190 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 7 01:10:49.475487 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 01:10:49.488834 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Mar 7 01:10:50.121763 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:10:50.122391 disk-uuid[585]: The operation has completed successfully. Mar 7 01:10:50.196596 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:10:50.196694 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:10:50.213391 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:10:50.217792 sh[607]: Success Mar 7 01:10:50.230381 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 7 01:10:50.284670 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:10:50.287347 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:10:50.287858 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 7 01:10:50.306074 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:10:50.306107 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:10:50.306123 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:10:50.310686 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:10:50.310707 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:10:50.320284 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 7 01:10:50.322768 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:10:50.323602 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:10:50.333421 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:10:50.336128 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 01:10:50.350067 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:10:50.350094 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:10:50.350104 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:10:50.358499 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 01:10:50.358531 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:10:50.370618 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:10:50.370417 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:10:50.376924 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:10:50.386395 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 7 01:10:50.428077 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:10:50.435417 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:10:50.454619 systemd-networkd[789]: lo: Link UP Mar 7 01:10:50.455126 systemd-networkd[789]: lo: Gained carrier Mar 7 01:10:50.457987 systemd-networkd[789]: Enumeration completed Mar 7 01:10:50.458652 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:10:50.459239 systemd[1]: Reached target network.target - Network. Mar 7 01:10:50.460356 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:10:50.460360 systemd-networkd[789]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:10:50.462322 systemd-networkd[789]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:10:50.462326 systemd-networkd[789]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:10:50.464460 ignition[719]: Ignition 2.19.0 Mar 7 01:10:50.463291 systemd-networkd[789]: eth0: Link UP Mar 7 01:10:50.464465 ignition[719]: Stage: fetch-offline Mar 7 01:10:50.463295 systemd-networkd[789]: eth0: Gained carrier Mar 7 01:10:50.464495 ignition[719]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:10:50.463302 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:10:50.464504 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 01:10:50.465804 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 7 01:10:50.464566 ignition[719]: parsed url from cmdline: ""
Mar 7 01:10:50.468545 systemd-networkd[789]: eth1: Link UP
Mar 7 01:10:50.464569 ignition[719]: no config URL provided
Mar 7 01:10:50.468549 systemd-networkd[789]: eth1: Gained carrier
Mar 7 01:10:50.464574 ignition[719]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:10:50.468556 systemd-networkd[789]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:10:50.464581 ignition[719]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:10:50.464586 ignition[719]: failed to fetch config: resource requires networking
Mar 7 01:10:50.464717 ignition[719]: Ignition finished successfully
Mar 7 01:10:50.477425 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:10:50.486593 ignition[796]: Ignition 2.19.0
Mar 7 01:10:50.486602 ignition[796]: Stage: fetch
Mar 7 01:10:50.486720 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:50.486729 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 01:10:50.486793 ignition[796]: parsed url from cmdline: ""
Mar 7 01:10:50.486797 ignition[796]: no config URL provided
Mar 7 01:10:50.486801 ignition[796]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:10:50.486809 ignition[796]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:10:50.486822 ignition[796]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 7 01:10:50.486952 ignition[796]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:10:50.509335 systemd-networkd[789]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 7 01:10:50.533331 systemd-networkd[789]: eth0: DHCPv4 address 135.181.156.177/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 7 01:10:50.687304 ignition[796]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 7 01:10:50.691973 ignition[796]: GET result: OK
Mar 7 01:10:50.692093 ignition[796]: parsing config with SHA512: 0840aaadd2b90b70fdfb688b1687b9f99e09c85a5495dfc6f9ce51e41cdcec86688f92bc7388e008d9422e6148d287314d13e8f84367a1343d95835108fcfb1d
Mar 7 01:10:50.700634 unknown[796]: fetched base config from "system"
Mar 7 01:10:50.700661 unknown[796]: fetched base config from "system"
Mar 7 01:10:50.700675 unknown[796]: fetched user config from "hetzner"
Mar 7 01:10:50.703761 ignition[796]: fetch: fetch complete
Mar 7 01:10:50.703780 ignition[796]: fetch: fetch passed
Mar 7 01:10:50.703877 ignition[796]: Ignition finished successfully
Mar 7 01:10:50.708149 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:10:50.716566 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:10:50.755220 ignition[804]: Ignition 2.19.0
Mar 7 01:10:50.755240 ignition[804]: Stage: kargs
Mar 7 01:10:50.755665 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:50.755690 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 01:10:50.757127 ignition[804]: kargs: kargs passed
Mar 7 01:10:50.757230 ignition[804]: Ignition finished successfully
Mar 7 01:10:50.759996 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:10:50.768512 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:10:50.790245 ignition[811]: Ignition 2.19.0
Mar 7 01:10:50.790296 ignition[811]: Stage: disks
Mar 7 01:10:50.790586 ignition[811]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:50.790607 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 01:10:50.791890 ignition[811]: disks: disks passed
Mar 7 01:10:50.791973 ignition[811]: Ignition finished successfully
Mar 7 01:10:50.794239 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:10:50.795501 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:10:50.796092 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:10:50.796398 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:10:50.796707 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:10:50.797756 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:10:50.804504 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:10:50.825118 systemd-fsck[820]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 7 01:10:50.829403 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:10:50.837385 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:10:50.915296 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:10:50.916153 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:10:50.917940 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:10:50.926429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:10:50.930065 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:10:50.938304 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (828)
Mar 7 01:10:50.943861 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:10:50.943912 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:10:50.943934 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:10:50.943966 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 7 01:10:50.948233 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:10:50.948258 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:10:50.952122 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:10:50.961971 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:10:50.961987 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:10:50.963650 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:10:50.972389 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:10:51.004177 coreos-metadata[830]: Mar 07 01:10:51.004 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 7 01:10:51.005165 coreos-metadata[830]: Mar 07 01:10:51.005 INFO Fetch successful
Mar 7 01:10:51.006868 coreos-metadata[830]: Mar 07 01:10:51.006 INFO wrote hostname ci-4081-3-6-n-5ad0d165ec to /sysroot/etc/hostname
Mar 7 01:10:51.008333 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 01:10:51.028297 initrd-setup-root[856]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:10:51.033483 initrd-setup-root[863]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:10:51.038711 initrd-setup-root[870]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:10:51.043205 initrd-setup-root[877]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:10:51.126237 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:10:51.139338 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:10:51.142141 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:10:51.148305 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:10:51.167046 ignition[944]: INFO : Ignition 2.19.0
Mar 7 01:10:51.168543 ignition[944]: INFO : Stage: mount
Mar 7 01:10:51.169072 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:51.169072 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 01:10:51.170871 ignition[944]: INFO : mount: mount passed
Mar 7 01:10:51.170871 ignition[944]: INFO : Ignition finished successfully
Mar 7 01:10:51.172022 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:10:51.172746 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:10:51.178348 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:10:51.303421 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:10:51.310028 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:10:51.320285 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (957)
Mar 7 01:10:51.320314 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:10:51.327431 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:10:51.336606 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:10:51.343365 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:10:51.343388 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:10:51.347968 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:10:51.367741 ignition[974]: INFO : Ignition 2.19.0
Mar 7 01:10:51.367741 ignition[974]: INFO : Stage: files
Mar 7 01:10:51.367741 ignition[974]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:51.367741 ignition[974]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 01:10:51.371209 ignition[974]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:10:51.372797 ignition[974]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:10:51.372797 ignition[974]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:10:51.377376 ignition[974]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:10:51.377747 ignition[974]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:10:51.378428 unknown[974]: wrote ssh authorized keys file for user: core
Mar 7 01:10:51.378903 ignition[974]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:10:51.380964 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:10:51.381592 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:10:51.580488 systemd-networkd[789]: eth1: Gained IPv6LL
Mar 7 01:10:51.581875 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:10:51.894424 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:10:51.894424 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 01:10:51.897489 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 7 01:10:52.028639 systemd-networkd[789]: eth0: Gained IPv6LL
Mar 7 01:10:52.176467 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 01:10:52.291240 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 01:10:52.291240 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:10:52.293319 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:10:52.293319 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:10:52.293319 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:10:52.293319 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:10:52.293319 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:10:52.293319 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:10:52.293319 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:10:52.293319 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:10:52.293319 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:10:52.293319 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:10:52.293319 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:10:52.293319 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:10:52.293319 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 7 01:10:52.588759 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 01:10:52.884962 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:10:52.884962 ignition[974]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 7 01:10:52.889644 ignition[974]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:10:52.889644 ignition[974]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:10:52.889644 ignition[974]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 7 01:10:52.889644 ignition[974]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 7 01:10:52.889644 ignition[974]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:10:52.889644 ignition[974]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:10:52.889644 ignition[974]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 7 01:10:52.889644 ignition[974]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:10:52.889644 ignition[974]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:10:52.889644 ignition[974]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:10:52.889644 ignition[974]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:10:52.889644 ignition[974]: INFO : files: files passed
Mar 7 01:10:52.889644 ignition[974]: INFO : Ignition finished successfully
Mar 7 01:10:52.891135 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:10:52.900433 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:10:52.912258 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:10:52.915967 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:10:52.916501 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:10:52.925688 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:10:52.926424 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:10:52.929434 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:10:52.931868 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:10:52.932873 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:10:52.937460 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:10:52.981136 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:10:52.981245 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:10:52.982113 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:10:52.982962 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:10:52.984280 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:10:52.989487 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:10:53.001856 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:10:53.006563 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:10:53.014805 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:10:53.015252 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:10:53.015678 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:10:53.016115 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:10:53.016191 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:10:53.017291 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:10:53.017986 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:10:53.018705 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:10:53.019371 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:10:53.020031 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:10:53.020715 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:10:53.021395 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:10:53.022104 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:10:53.022799 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:10:53.023485 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:10:53.024149 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:10:53.024231 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:10:53.025223 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:10:53.025932 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:10:53.026588 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:10:53.026671 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:10:53.027309 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:10:53.027383 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:10:53.028372 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:10:53.028453 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:10:53.029117 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:10:53.029186 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:10:53.029811 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 7 01:10:53.029881 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 01:10:53.036377 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:10:53.038402 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:10:53.039205 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:10:53.041451 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:10:53.042283 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:10:53.042707 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:10:53.045165 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:10:53.046326 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:10:53.048486 ignition[1027]: INFO : Ignition 2.19.0
Mar 7 01:10:53.048486 ignition[1027]: INFO : Stage: umount
Mar 7 01:10:53.049541 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:53.049541 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 01:10:53.050193 ignition[1027]: INFO : umount: umount passed
Mar 7 01:10:53.050193 ignition[1027]: INFO : Ignition finished successfully
Mar 7 01:10:53.056458 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:10:53.056554 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:10:53.057494 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:10:53.057564 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:10:53.057961 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:10:53.058002 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:10:53.058719 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:10:53.058756 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:10:53.059374 systemd[1]: Stopped target network.target - Network.
Mar 7 01:10:53.061665 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:10:53.061712 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:10:53.062078 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:10:53.062431 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:10:53.062856 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:10:53.063561 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:10:53.064238 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:10:53.064949 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:10:53.064987 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:10:53.065915 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:10:53.065960 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:10:53.067323 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:10:53.067364 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:10:53.067721 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:10:53.067756 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:10:53.068201 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:10:53.070416 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:10:53.072715 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:10:53.074283 systemd-networkd[789]: eth0: DHCPv6 lease lost
Mar 7 01:10:53.077498 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:10:53.077611 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:10:53.078347 systemd-networkd[789]: eth1: DHCPv6 lease lost
Mar 7 01:10:53.080770 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:10:53.080879 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:10:53.082296 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:10:53.082359 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:10:53.087332 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:10:53.088057 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:10:53.088447 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:10:53.089247 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:10:53.090285 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:10:53.091075 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:10:53.091115 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:10:53.092101 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:10:53.092145 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:10:53.092899 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:10:53.104542 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:10:53.104638 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:10:53.106125 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:10:53.106291 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:10:53.107390 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:10:53.107452 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:10:53.109407 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:10:53.109439 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:10:53.110001 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:10:53.110039 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:10:53.111182 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:10:53.111229 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:10:53.112244 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:10:53.112301 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:10:53.123051 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:10:53.123430 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:10:53.123474 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:10:53.127350 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:10:53.127396 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:10:53.127981 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:10:53.128019 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:10:53.129645 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:10:53.129686 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:10:53.130365 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:10:53.130453 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:10:53.131042 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:10:53.131119 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:10:53.132130 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:10:53.132858 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:10:53.132912 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:10:53.146414 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:10:53.152134 systemd[1]: Switching root.
Mar 7 01:10:53.176637 systemd-journald[188]: Journal stopped
Mar 7 01:10:54.185730 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:10:54.185806 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:10:54.185818 kernel: SELinux: policy capability open_perms=1
Mar 7 01:10:54.185829 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:10:54.185838 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:10:54.185846 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:10:54.185855 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:10:54.185863 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:10:54.185872 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:10:54.185885 kernel: audit: type=1403 audit(1772845853.330:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:10:54.185904 systemd[1]: Successfully loaded SELinux policy in 49.906ms.
Mar 7 01:10:54.185927 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.783ms.
Mar 7 01:10:54.185937 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:10:54.185946 systemd[1]: Detected virtualization kvm.
Mar 7 01:10:54.185955 systemd[1]: Detected architecture x86-64.
Mar 7 01:10:54.185964 systemd[1]: Detected first boot.
Mar 7 01:10:54.185972 systemd[1]: Hostname set to .
Mar 7 01:10:54.185983 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:10:54.185992 zram_generator::config[1071]: No configuration found.
Mar 7 01:10:54.186002 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:10:54.186011 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:10:54.186021 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:10:54.186030 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:10:54.186040 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:10:54.186049 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:10:54.186060 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:10:54.186069 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:10:54.186078 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:10:54.186086 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:10:54.186095 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:10:54.186104 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:10:54.186112 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:10:54.186121 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:10:54.186130 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:10:54.186141 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:10:54.186150 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:10:54.186160 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:10:54.186169 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:10:54.186178 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:10:54.186189 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:10:54.186202 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:10:54.186227 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:10:54.186242 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:10:54.186255 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:10:54.186292 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:10:54.186302 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:10:54.186311 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:10:54.186320 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:10:54.186329 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:10:54.186340 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:10:54.186349 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:10:54.186358 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:10:54.186367 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:10:54.186376 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:10:54.186385 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:10:54.186394 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:10:54.186403 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:10:54.186412 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:10:54.186423 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:10:54.186434 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:10:54.186443 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:10:54.186452 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:10:54.186461 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:10:54.186470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:10:54.186478 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:10:54.186488 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:10:54.186504 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:10:54.186516 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:10:54.186529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:10:54.186537 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:10:54.186546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:10:54.186555 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:10:54.186566 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:10:54.186577 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:10:54.186586 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:10:54.186595 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:10:54.186604 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:10:54.186614 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:10:54.186623 kernel: loop: module loaded
Mar 7 01:10:54.186632 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:10:54.186641 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:10:54.186650 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:10:54.186680 systemd-journald[1160]: Collecting audit messages is disabled.
Mar 7 01:10:54.186703 kernel: fuse: init (API version 7.39)
Mar 7 01:10:54.186712 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:10:54.186721 systemd-journald[1160]: Journal started
Mar 7 01:10:54.186737 systemd-journald[1160]: Runtime Journal (/run/log/journal/b1215fbc4ceb443abfeabf61823370e0) is 8.0M, max 76.3M, 68.3M free.
Mar 7 01:10:53.899180 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:10:53.918580 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 7 01:10:53.919007 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:10:54.189302 systemd[1]: Stopped verity-setup.service.
Mar 7 01:10:54.195285 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:10:54.199556 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:10:54.199605 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:10:54.202418 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:10:54.202893 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:10:54.203378 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:10:54.203823 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:10:54.204418 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:10:54.204987 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:10:54.205589 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:10:54.206170 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:10:54.206363 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:10:54.207260 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:10:54.208426 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:10:54.209031 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:10:54.209152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:10:54.209772 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:10:54.209889 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:10:54.210762 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:10:54.210888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:10:54.212830 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:10:54.224593 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:10:54.226485 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:10:54.230683 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:10:54.232297 kernel: ACPI: bus type drm_connector registered
Mar 7 01:10:54.240353 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:10:54.245859 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:10:54.246312 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:10:54.246340 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:10:54.247515 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:10:54.256690 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:10:54.260349 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:10:54.260806 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:10:54.267363 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:10:54.271350 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:10:54.271723 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:10:54.272898 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:10:54.273320 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:10:54.276381 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:10:54.279075 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:10:54.282395 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:10:54.285362 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:10:54.285539 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:10:54.286028 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:10:54.287862 systemd-journald[1160]: Time spent on flushing to /var/log/journal/b1215fbc4ceb443abfeabf61823370e0 is 29.862ms for 1180 entries.
Mar 7 01:10:54.287862 systemd-journald[1160]: System Journal (/var/log/journal/b1215fbc4ceb443abfeabf61823370e0) is 8.0M, max 584.8M, 576.8M free.
Mar 7 01:10:54.329440 systemd-journald[1160]: Received client request to flush runtime journal.
Mar 7 01:10:54.329468 kernel: loop0: detected capacity change from 0 to 8
Mar 7 01:10:54.287422 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:10:54.291574 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:10:54.333067 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:10:54.357310 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:10:54.350128 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:10:54.351961 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:10:54.363366 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:10:54.386866 kernel: loop1: detected capacity change from 0 to 219192
Mar 7 01:10:54.396654 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:10:54.402948 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:10:54.407025 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:10:54.419150 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:10:54.427373 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:10:54.428048 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Mar 7 01:10:54.428059 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Mar 7 01:10:54.437600 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:10:54.444686 kernel: loop2: detected capacity change from 0 to 142488
Mar 7 01:10:54.446478 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:10:54.448357 udevadm[1208]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 01:10:54.494661 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:10:54.505452 kernel: loop3: detected capacity change from 0 to 140768
Mar 7 01:10:54.503238 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:10:54.519893 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Mar 7 01:10:54.520185 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Mar 7 01:10:54.525651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:10:54.551294 kernel: loop4: detected capacity change from 0 to 8
Mar 7 01:10:54.555283 kernel: loop5: detected capacity change from 0 to 219192
Mar 7 01:10:54.574289 kernel: loop6: detected capacity change from 0 to 142488
Mar 7 01:10:54.598292 kernel: loop7: detected capacity change from 0 to 140768
Mar 7 01:10:54.614871 (sd-merge)[1218]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 7 01:10:54.615917 (sd-merge)[1218]: Merged extensions into '/usr'.
Mar 7 01:10:54.621351 systemd[1]: Reloading requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:10:54.621451 systemd[1]: Reloading...
Mar 7 01:10:54.707550 zram_generator::config[1244]: No configuration found.
Mar 7 01:10:54.771371 ldconfig[1184]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:10:54.834475 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:10:54.870248 systemd[1]: Reloading finished in 248 ms.
Mar 7 01:10:54.902180 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:10:54.903017 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:10:54.914397 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:10:54.915072 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:10:54.926403 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:10:54.931073 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:10:54.941407 systemd[1]: Reloading requested from client PID 1289 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:10:54.941492 systemd[1]: Reloading...
Mar 7 01:10:54.968157 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:10:54.968474 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:10:54.972196 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:10:54.972508 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Mar 7 01:10:54.972615 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Mar 7 01:10:54.972998 systemd-udevd[1287]: Using default interface naming scheme 'v255'.
Mar 7 01:10:54.975760 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:10:54.975824 systemd-tmpfiles[1290]: Skipping /boot
Mar 7 01:10:54.988831 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:10:54.988932 systemd-tmpfiles[1290]: Skipping /boot
Mar 7 01:10:55.015025 zram_generator::config[1315]: No configuration found.
Mar 7 01:10:55.131364 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1325)
Mar 7 01:10:55.186097 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:10:55.197475 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 7 01:10:55.214314 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:10:55.237325 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 7 01:10:55.244303 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 7 01:10:55.251029 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 01:10:55.251202 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 7 01:10:55.258344 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 01:10:55.265434 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:10:55.296278 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 01:10:55.296666 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:10:55.297043 systemd[1]: Reloading finished in 355 ms.
Mar 7 01:10:55.305309 kernel: EDAC MC: Ver: 3.0.0
Mar 7 01:10:55.311326 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:10:55.312686 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:10:55.322293 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Mar 7 01:10:55.323571 kernel: Console: switching to colour dummy device 80x25
Mar 7 01:10:55.328326 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Mar 7 01:10:55.328519 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 7 01:10:55.328540 kernel: [drm] features: -context_init
Mar 7 01:10:55.328550 kernel: [drm] number of scanouts: 1
Mar 7 01:10:55.331937 kernel: [drm] number of cap sets: 0
Mar 7 01:10:55.335548 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 7 01:10:55.343650 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 7 01:10:55.343704 kernel: Console: switching to colour frame buffer device 160x50
Mar 7 01:10:55.349295 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 7 01:10:55.352328 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:10:55.361363 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 7 01:10:55.364717 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:10:55.368409 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:10:55.373467 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:10:55.375177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:10:55.376483 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:10:55.379472 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:10:55.381821 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:10:55.385115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:10:55.385447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:10:55.388420 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:10:55.390471 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:10:55.401503 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:10:55.403327 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:10:55.411186 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 01:10:55.413390 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:10:55.419392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:10:55.419454 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:10:55.420176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:10:55.420454 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:10:55.420791 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:10:55.420920 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:10:55.421347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:10:55.421507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:10:55.423417 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:10:55.423545 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:10:55.429298 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:10:55.433903 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:10:55.433967 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:10:55.440428 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:10:55.448313 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:10:55.458381 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:10:55.466416 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:10:55.471784 augenrules[1442]: No rules
Mar 7 01:10:55.474155 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:10:55.476488 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:10:55.479398 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:10:55.487064 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:10:55.501115 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:10:55.503261 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:10:55.508945 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:10:55.513278 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:10:55.540931 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:10:55.541610 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:10:55.551428 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:10:55.554011 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:10:55.558050 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:10:55.582632 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:10:55.599708 systemd-networkd[1419]: lo: Link UP
Mar 7 01:10:55.599718 systemd-networkd[1419]: lo: Gained carrier
Mar 7 01:10:55.602036 systemd-networkd[1419]: Enumeration completed
Mar 7 01:10:55.602127 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:10:55.602425 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:10:55.602429 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:10:55.603780 systemd-networkd[1419]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:10:55.603784 systemd-networkd[1419]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:10:55.604397 systemd-networkd[1419]: eth0: Link UP
Mar 7 01:10:55.604401 systemd-networkd[1419]: eth0: Gained carrier
Mar 7 01:10:55.604412 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:10:55.608562 systemd-networkd[1419]: eth1: Link UP
Mar 7 01:10:55.608573 systemd-networkd[1419]: eth1: Gained carrier
Mar 7 01:10:55.608586 systemd-networkd[1419]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:10:55.612349 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:10:55.616744 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 01:10:55.617197 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:10:55.621952 systemd-resolved[1420]: Positive Trust Anchors:
Mar 7 01:10:55.621968 systemd-resolved[1420]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:10:55.621991 systemd-resolved[1420]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:10:55.625709 systemd-resolved[1420]: Using system hostname 'ci-4081-3-6-n-5ad0d165ec'.
Mar 7 01:10:55.627334 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:10:55.627830 systemd[1]: Reached target network.target - Network.
Mar 7 01:10:55.628189 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:10:55.628630 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:10:55.629048 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:10:55.631251 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:10:55.631783 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:10:55.632198 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:10:55.632552 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:10:55.632874 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:10:55.632895 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:10:55.633206 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:10:55.635032 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:10:55.636987 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:10:55.642870 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:10:55.643326 systemd-networkd[1419]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 7 01:10:55.643799 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:10:55.644050 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection.
Mar 7 01:10:55.644201 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:10:55.646507 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:10:55.646877 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:10:55.646908 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:10:55.654706 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:10:55.657258 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:10:55.660796 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:10:55.669314 systemd-networkd[1419]: eth0: DHCPv4 address 135.181.156.177/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 7 01:10:55.670012 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection.
Mar 7 01:10:55.671365 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:10:55.675394 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:10:55.678910 jq[1476]: false
Mar 7 01:10:55.677671 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:10:55.679340 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:10:55.684027 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:10:55.689468 coreos-metadata[1472]: Mar 07 01:10:55.689 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 7 01:10:55.693580 coreos-metadata[1472]: Mar 07 01:10:55.689 INFO Fetch successful
Mar 7 01:10:55.693580 coreos-metadata[1472]: Mar 07 01:10:55.689 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 7 01:10:55.693580 coreos-metadata[1472]: Mar 07 01:10:55.690 INFO Fetch successful
Mar 7 01:10:55.693512 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 7 01:10:55.697399 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:10:55.699208 extend-filesystems[1477]: Found loop4
Mar 7 01:10:55.700714 extend-filesystems[1477]: Found loop5
Mar 7 01:10:55.700714 extend-filesystems[1477]: Found loop6
Mar 7 01:10:55.700714 extend-filesystems[1477]: Found loop7
Mar 7 01:10:55.700714 extend-filesystems[1477]: Found sda
Mar 7 01:10:55.700714 extend-filesystems[1477]: Found sda1
Mar 7 01:10:55.700714 extend-filesystems[1477]: Found sda2
Mar 7 01:10:55.700714 extend-filesystems[1477]: Found sda3
Mar 7 01:10:55.700714 extend-filesystems[1477]: Found usr
Mar 7 01:10:55.700714 extend-filesystems[1477]: Found sda4
Mar 7 01:10:55.700714 extend-filesystems[1477]: Found sda6
Mar 7 01:10:55.700714 extend-filesystems[1477]: Found sda7
Mar 7 01:10:55.700714 extend-filesystems[1477]: Found sda9
Mar 7 01:10:55.700714 extend-filesystems[1477]: Checking size of /dev/sda9
Mar 7 01:10:55.705551 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:10:55.725363 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:10:55.727833 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:10:55.728309 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:10:55.729973 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:10:55.733335 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:10:55.746657 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:10:55.746818 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:10:55.747652 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:10:55.747803 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:10:55.753998 jq[1493]: true Mar 7 01:10:55.757428 extend-filesystems[1477]: Resized partition /dev/sda9 Mar 7 01:10:55.769721 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024) Mar 7 01:10:55.785291 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Mar 7 01:10:55.786714 dbus-daemon[1473]: [system] SELinux support is enabled Mar 7 01:10:55.790181 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 01:10:55.793982 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 01:10:55.794015 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 01:10:55.797688 update_engine[1490]: I20260307 01:10:55.797621 1490 main.cc:92] Flatcar Update Engine starting Mar 7 01:10:55.798121 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 01:10:55.798143 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 01:10:55.802682 jq[1502]: true Mar 7 01:10:55.804298 (ntainerd)[1515]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 01:10:55.816648 systemd[1]: Started update-engine.service - Update Engine. Mar 7 01:10:55.818847 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 01:10:55.820091 systemd-logind[1487]: New seat seat0. 
Mar 7 01:10:55.822198 update_engine[1490]: I20260307 01:10:55.822012 1490 update_check_scheduler.cc:74] Next update check in 4m16s Mar 7 01:10:55.822887 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button) Mar 7 01:10:55.822911 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 01:10:55.823070 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 01:10:55.836514 tar[1496]: linux-amd64/LICENSE Mar 7 01:10:55.837800 tar[1496]: linux-amd64/helm Mar 7 01:10:55.838410 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 01:10:55.838584 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 01:10:55.857588 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 01:10:55.859025 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:10:55.878284 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1372) Mar 7 01:10:55.954741 bash[1541]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:10:55.955417 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 01:10:55.967998 systemd[1]: Starting sshkeys.service... Mar 7 01:10:55.998191 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 7 01:10:56.008074 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 7 01:10:56.028430 containerd[1515]: time="2026-03-07T01:10:56.028344362Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 01:10:56.056644 containerd[1515]: time="2026-03-07T01:10:56.056618484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 7 01:10:56.056778 coreos-metadata[1549]: Mar 07 01:10:56.056 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 7 01:10:56.059030 coreos-metadata[1549]: Mar 07 01:10:56.058 INFO Fetch successful Mar 7 01:10:56.061871 containerd[1515]: time="2026-03-07T01:10:56.061832886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:10:56.065809 containerd[1515]: time="2026-03-07T01:10:56.065178277Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 01:10:56.065809 containerd[1515]: time="2026-03-07T01:10:56.065201367Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 01:10:56.068349 containerd[1515]: time="2026-03-07T01:10:56.067852799Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 01:10:56.068349 containerd[1515]: time="2026-03-07T01:10:56.067871079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 01:10:56.068349 containerd[1515]: time="2026-03-07T01:10:56.067923309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:10:56.068349 containerd[1515]: time="2026-03-07T01:10:56.067932949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:10:56.068349 containerd[1515]: time="2026-03-07T01:10:56.068085059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:10:56.068349 containerd[1515]: time="2026-03-07T01:10:56.068095099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 01:10:56.068349 containerd[1515]: time="2026-03-07T01:10:56.068104539Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:10:56.068349 containerd[1515]: time="2026-03-07T01:10:56.068111439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 01:10:56.068349 containerd[1515]: time="2026-03-07T01:10:56.068173369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:10:56.068817 unknown[1549]: wrote ssh authorized keys file for user: core Mar 7 01:10:56.072671 containerd[1515]: time="2026-03-07T01:10:56.072576620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:10:56.072892 containerd[1515]: time="2026-03-07T01:10:56.072878231Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:10:56.072973 containerd[1515]: time="2026-03-07T01:10:56.072963591Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 01:10:56.073108 containerd[1515]: time="2026-03-07T01:10:56.073097131Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 7 01:10:56.073205 containerd[1515]: time="2026-03-07T01:10:56.073195631Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:10:56.079382 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 01:10:56.098141 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Mar 7 01:10:56.100326 containerd[1515]: time="2026-03-07T01:10:56.100301492Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 01:10:56.100418 containerd[1515]: time="2026-03-07T01:10:56.100407502Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 01:10:56.100488 containerd[1515]: time="2026-03-07T01:10:56.100478702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 01:10:56.100559 containerd[1515]: time="2026-03-07T01:10:56.100534472Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.100586092Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.100708352Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.100862932Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.100946942Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.100970442Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.100982502Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.100992462Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.101001962Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.101011422Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.101021332Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.101031092Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.101041202Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.101049872Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 01:10:56.102436 containerd[1515]: time="2026-03-07T01:10:56.101058552Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101073172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101082872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101092072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101101312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101114572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101124152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101133042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101147912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101156452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101166562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101174922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101183022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101191222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101201902Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:10:56.102630 containerd[1515]: time="2026-03-07T01:10:56.101215562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102816 containerd[1515]: time="2026-03-07T01:10:56.101223692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102816 containerd[1515]: time="2026-03-07T01:10:56.101242932Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:10:56.102816 containerd[1515]: time="2026-03-07T01:10:56.101299502Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:10:56.102816 containerd[1515]: time="2026-03-07T01:10:56.101315292Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:10:56.102816 containerd[1515]: time="2026-03-07T01:10:56.101322672Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:10:56.102816 containerd[1515]: time="2026-03-07T01:10:56.101330842Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:10:56.102816 containerd[1515]: time="2026-03-07T01:10:56.101338122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 7 01:10:56.102816 containerd[1515]: time="2026-03-07T01:10:56.101350072Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:10:56.102816 containerd[1515]: time="2026-03-07T01:10:56.101363872Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:10:56.102816 containerd[1515]: time="2026-03-07T01:10:56.101371192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 7 01:10:56.102947 containerd[1515]: time="2026-03-07T01:10:56.101569183Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:10:56.102947 containerd[1515]: time="2026-03-07T01:10:56.101620813Z" level=info msg="Connect containerd service" Mar 7 01:10:56.102947 containerd[1515]: time="2026-03-07T01:10:56.101658133Z" level=info msg="using legacy CRI server" Mar 7 01:10:56.102947 containerd[1515]: time="2026-03-07T01:10:56.101663413Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:10:56.102947 containerd[1515]: time="2026-03-07T01:10:56.101747243Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:10:56.102947 containerd[1515]: time="2026-03-07T01:10:56.102168393Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to 
load cni config" Mar 7 01:10:56.106388 extend-filesystems[1507]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 7 01:10:56.106388 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 10 Mar 7 01:10:56.106388 extend-filesystems[1507]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. Mar 7 01:10:56.105683 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 01:10:56.111192 extend-filesystems[1477]: Resized filesystem in /dev/sda9 Mar 7 01:10:56.111192 extend-filesystems[1477]: Found sr0 Mar 7 01:10:56.105865 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 01:10:56.116690 containerd[1515]: time="2026-03-07T01:10:56.111982347Z" level=info msg="Start subscribing containerd event" Mar 7 01:10:56.116690 containerd[1515]: time="2026-03-07T01:10:56.114098388Z" level=info msg="Start recovering state" Mar 7 01:10:56.116690 containerd[1515]: time="2026-03-07T01:10:56.114184998Z" level=info msg="Start event monitor" Mar 7 01:10:56.116690 containerd[1515]: time="2026-03-07T01:10:56.114291138Z" level=info msg="Start snapshots syncer" Mar 7 01:10:56.116690 containerd[1515]: time="2026-03-07T01:10:56.114301598Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:10:56.116690 containerd[1515]: time="2026-03-07T01:10:56.114310208Z" level=info msg="Start streaming server" Mar 7 01:10:56.116690 containerd[1515]: time="2026-03-07T01:10:56.112686927Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:10:56.116821 update-ssh-keys[1560]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:10:56.113037 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 01:10:56.121427 containerd[1515]: time="2026-03-07T01:10:56.120521630Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 7 01:10:56.121427 containerd[1515]: time="2026-03-07T01:10:56.120892131Z" level=info msg="containerd successfully booted in 0.093355s" Mar 7 01:10:56.121965 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:10:56.126148 systemd[1]: Finished sshkeys.service. Mar 7 01:10:56.171770 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:10:56.191743 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:10:56.201411 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:10:56.208122 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:10:56.208357 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:10:56.217167 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:10:56.228916 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:10:56.236850 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:10:56.245576 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:10:56.246759 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:10:56.418477 tar[1496]: linux-amd64/README.md Mar 7 01:10:56.428012 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:10:56.764731 systemd-networkd[1419]: eth1: Gained IPv6LL Mar 7 01:10:56.765740 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Mar 7 01:10:56.771308 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:10:56.773874 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:10:56.783613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:10:56.795404 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Mar 7 01:10:56.837647 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:10:57.532490 systemd-networkd[1419]: eth0: Gained IPv6LL Mar 7 01:10:57.533005 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Mar 7 01:10:57.687559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:10:57.687673 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:10:57.690480 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:10:57.694137 systemd[1]: Startup finished in 1.500s (kernel) + 5.576s (initrd) + 4.412s (userspace) = 11.489s. Mar 7 01:10:58.218909 kubelet[1602]: E0307 01:10:58.218784 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:10:58.223659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:10:58.224026 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:10:59.825250 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:10:59.831733 systemd[1]: Started sshd@0-135.181.156.177:22-4.153.228.146:50812.service - OpenSSH per-connection server daemon (4.153.228.146:50812). Mar 7 01:11:00.591342 sshd[1613]: Accepted publickey for core from 4.153.228.146 port 50812 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI Mar 7 01:11:00.594646 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:00.611941 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 7 01:11:00.619032 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:11:00.622897 systemd-logind[1487]: New session 1 of user core. Mar 7 01:11:00.651866 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:11:00.661182 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:11:00.683444 (systemd)[1617]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:11:00.803743 systemd[1617]: Queued start job for default target default.target. Mar 7 01:11:00.814371 systemd[1617]: Created slice app.slice - User Application Slice. Mar 7 01:11:00.814394 systemd[1617]: Reached target paths.target - Paths. Mar 7 01:11:00.814406 systemd[1617]: Reached target timers.target - Timers. Mar 7 01:11:00.815780 systemd[1617]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:11:00.838432 systemd[1617]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:11:00.838534 systemd[1617]: Reached target sockets.target - Sockets. Mar 7 01:11:00.838546 systemd[1617]: Reached target basic.target - Basic System. Mar 7 01:11:00.838579 systemd[1617]: Reached target default.target - Main User Target. Mar 7 01:11:00.838610 systemd[1617]: Startup finished in 142ms. Mar 7 01:11:00.838891 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:11:00.849409 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:11:01.387510 systemd[1]: Started sshd@1-135.181.156.177:22-4.153.228.146:50826.service - OpenSSH per-connection server daemon (4.153.228.146:50826). Mar 7 01:11:02.126170 sshd[1628]: Accepted publickey for core from 4.153.228.146 port 50826 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI Mar 7 01:11:02.130086 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:02.138829 systemd-logind[1487]: New session 2 of user core. 
Mar 7 01:11:02.145531 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:11:02.649675 sshd[1628]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:02.655773 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:11:02.656623 systemd[1]: sshd@1-135.181.156.177:22-4.153.228.146:50826.service: Deactivated successfully. Mar 7 01:11:02.660198 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:11:02.661877 systemd-logind[1487]: Removed session 2. Mar 7 01:11:02.788746 systemd[1]: Started sshd@2-135.181.156.177:22-4.153.228.146:50840.service - OpenSSH per-connection server daemon (4.153.228.146:50840). Mar 7 01:11:03.550090 sshd[1635]: Accepted publickey for core from 4.153.228.146 port 50840 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI Mar 7 01:11:03.551413 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:03.556518 systemd-logind[1487]: New session 3 of user core. Mar 7 01:11:03.564429 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:11:04.070804 sshd[1635]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:04.077111 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:11:04.078739 systemd[1]: sshd@2-135.181.156.177:22-4.153.228.146:50840.service: Deactivated successfully. Mar 7 01:11:04.082064 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:11:04.083518 systemd-logind[1487]: Removed session 3. Mar 7 01:11:04.210675 systemd[1]: Started sshd@3-135.181.156.177:22-4.153.228.146:50852.service - OpenSSH per-connection server daemon (4.153.228.146:50852). 
Mar 7 01:11:04.963526 sshd[1642]: Accepted publickey for core from 4.153.228.146 port 50852 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI Mar 7 01:11:04.966512 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:04.971752 systemd-logind[1487]: New session 4 of user core. Mar 7 01:11:04.978618 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:11:05.492384 sshd[1642]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:05.498051 systemd[1]: sshd@3-135.181.156.177:22-4.153.228.146:50852.service: Deactivated successfully. Mar 7 01:11:05.501788 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:11:05.504866 systemd-logind[1487]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:11:05.506963 systemd-logind[1487]: Removed session 4. Mar 7 01:11:05.629676 systemd[1]: Started sshd@4-135.181.156.177:22-4.153.228.146:50862.service - OpenSSH per-connection server daemon (4.153.228.146:50862). Mar 7 01:11:06.380884 sshd[1649]: Accepted publickey for core from 4.153.228.146 port 50862 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI Mar 7 01:11:06.383691 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:06.392390 systemd-logind[1487]: New session 5 of user core. Mar 7 01:11:06.407532 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:11:06.799025 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:11:06.800119 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:11:06.825353 sudo[1652]: pam_unix(sudo:session): session closed for user root Mar 7 01:11:06.945430 sshd[1649]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:06.950668 systemd[1]: sshd@4-135.181.156.177:22-4.153.228.146:50862.service: Deactivated successfully. 
Mar 7 01:11:06.954898 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:11:06.957770 systemd-logind[1487]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:11:06.959660 systemd-logind[1487]: Removed session 5. Mar 7 01:11:07.085344 systemd[1]: Started sshd@5-135.181.156.177:22-4.153.228.146:50870.service - OpenSSH per-connection server daemon (4.153.228.146:50870). Mar 7 01:11:07.839111 sshd[1657]: Accepted publickey for core from 4.153.228.146 port 50870 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI Mar 7 01:11:07.840440 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:07.848406 systemd-logind[1487]: New session 6 of user core. Mar 7 01:11:07.866510 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:11:08.246381 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:11:08.247185 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:11:08.249149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:11:08.255228 sudo[1661]: pam_unix(sudo:session): session closed for user root Mar 7 01:11:08.257518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:11:08.267994 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:11:08.268827 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:11:08.292755 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:11:08.307144 auditctl[1667]: No rules Mar 7 01:11:08.308838 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:11:08.309165 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. 
Mar 7 01:11:08.321508 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:11:08.347432 augenrules[1685]: No rules Mar 7 01:11:08.348045 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:11:08.349383 sudo[1660]: pam_unix(sudo:session): session closed for user root Mar 7 01:11:08.412569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:11:08.416456 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:11:08.443916 kubelet[1695]: E0307 01:11:08.443848 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:11:08.447859 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:11:08.448031 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:11:08.468557 sshd[1657]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:08.473770 systemd[1]: sshd@5-135.181.156.177:22-4.153.228.146:50870.service: Deactivated successfully. Mar 7 01:11:08.475703 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:11:08.476816 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:11:08.477949 systemd-logind[1487]: Removed session 6. Mar 7 01:11:08.601642 systemd[1]: Started sshd@6-135.181.156.177:22-4.153.228.146:50872.service - OpenSSH per-connection server daemon (4.153.228.146:50872). 
Mar 7 01:11:09.369339 sshd[1706]: Accepted publickey for core from 4.153.228.146 port 50872 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI Mar 7 01:11:09.371732 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:09.379395 systemd-logind[1487]: New session 7 of user core. Mar 7 01:11:09.387570 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:11:09.782839 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:11:09.783701 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:11:10.133458 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:11:10.145987 (dockerd)[1725]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:11:10.456427 dockerd[1725]: time="2026-03-07T01:11:10.455413541Z" level=info msg="Starting up" Mar 7 01:11:10.567042 dockerd[1725]: time="2026-03-07T01:11:10.566482447Z" level=info msg="Loading containers: start." Mar 7 01:11:10.701314 kernel: Initializing XFRM netlink socket Mar 7 01:11:10.724051 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Mar 7 01:11:10.763430 systemd-timesyncd[1422]: Contacted time server 85.215.166.214:123 (2.flatcar.pool.ntp.org). Mar 7 01:11:10.763615 systemd-timesyncd[1422]: Initial clock synchronization to Sat 2026-03-07 01:11:10.669485 UTC. Mar 7 01:11:10.793115 systemd-networkd[1419]: docker0: Link UP Mar 7 01:11:10.810539 dockerd[1725]: time="2026-03-07T01:11:10.810500699Z" level=info msg="Loading containers: done." 
Mar 7 01:11:10.829223 dockerd[1725]: time="2026-03-07T01:11:10.829184256Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:11:10.829462 dockerd[1725]: time="2026-03-07T01:11:10.829250846Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:11:10.829462 dockerd[1725]: time="2026-03-07T01:11:10.829359727Z" level=info msg="Daemon has completed initialization" Mar 7 01:11:10.863775 dockerd[1725]: time="2026-03-07T01:11:10.863706331Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:11:10.864211 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:11:11.321992 containerd[1515]: time="2026-03-07T01:11:11.321699819Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 7 01:11:11.921412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117720924.mount: Deactivated successfully. 
Mar 7 01:11:12.974105 containerd[1515]: time="2026-03-07T01:11:12.974050231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:12.975154 containerd[1515]: time="2026-03-07T01:11:12.974995686Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074597" Mar 7 01:11:12.976068 containerd[1515]: time="2026-03-07T01:11:12.976025519Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:12.980645 containerd[1515]: time="2026-03-07T01:11:12.980375561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:12.981764 containerd[1515]: time="2026-03-07T01:11:12.981734685Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.659993112s" Mar 7 01:11:12.981795 containerd[1515]: time="2026-03-07T01:11:12.981762537Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 7 01:11:12.982309 containerd[1515]: time="2026-03-07T01:11:12.982253714Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 7 01:11:14.084956 containerd[1515]: time="2026-03-07T01:11:14.084911595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:14.086145 containerd[1515]: time="2026-03-07T01:11:14.086031682Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165845" Mar 7 01:11:14.087284 containerd[1515]: time="2026-03-07T01:11:14.087091942Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:14.089208 containerd[1515]: time="2026-03-07T01:11:14.089172952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:14.089961 containerd[1515]: time="2026-03-07T01:11:14.089874230Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.107503771s" Mar 7 01:11:14.089961 containerd[1515]: time="2026-03-07T01:11:14.089896812Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 7 01:11:14.090425 containerd[1515]: time="2026-03-07T01:11:14.090395450Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 7 01:11:15.003880 containerd[1515]: time="2026-03-07T01:11:15.003837185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:15.004733 containerd[1515]: time="2026-03-07T01:11:15.004650168Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729846" Mar 7 01:11:15.005848 containerd[1515]: time="2026-03-07T01:11:15.005564352Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:15.007996 containerd[1515]: time="2026-03-07T01:11:15.007747458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:15.008743 containerd[1515]: time="2026-03-07T01:11:15.008483671Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 918.052395ms" Mar 7 01:11:15.008743 containerd[1515]: time="2026-03-07T01:11:15.008510357Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 7 01:11:15.008893 containerd[1515]: time="2026-03-07T01:11:15.008857003Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 7 01:11:15.959077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216343017.mount: Deactivated successfully. 
Mar 7 01:11:16.174296 containerd[1515]: time="2026-03-07T01:11:16.174224379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:16.175321 containerd[1515]: time="2026-03-07T01:11:16.175284844Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861798" Mar 7 01:11:16.176186 containerd[1515]: time="2026-03-07T01:11:16.176148248Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:16.178247 containerd[1515]: time="2026-03-07T01:11:16.178208116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:16.179092 containerd[1515]: time="2026-03-07T01:11:16.178799017Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.169913532s" Mar 7 01:11:16.179092 containerd[1515]: time="2026-03-07T01:11:16.178825964Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 7 01:11:16.179317 containerd[1515]: time="2026-03-07T01:11:16.179297386Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 7 01:11:16.730991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921280973.mount: Deactivated successfully. 
Mar 7 01:11:17.527033 containerd[1515]: time="2026-03-07T01:11:17.526988624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:17.527983 containerd[1515]: time="2026-03-07T01:11:17.527773021Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388101" Mar 7 01:11:17.529745 containerd[1515]: time="2026-03-07T01:11:17.528665836Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:17.532300 containerd[1515]: time="2026-03-07T01:11:17.530940433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:17.532300 containerd[1515]: time="2026-03-07T01:11:17.532036535Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.3527183s" Mar 7 01:11:17.532300 containerd[1515]: time="2026-03-07T01:11:17.532058956Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 7 01:11:17.532883 containerd[1515]: time="2026-03-07T01:11:17.532747410Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 01:11:18.012545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183364869.mount: Deactivated successfully. 
Mar 7 01:11:18.023885 containerd[1515]: time="2026-03-07T01:11:18.023819843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:18.025330 containerd[1515]: time="2026-03-07T01:11:18.025112841Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321240" Mar 7 01:11:18.026666 containerd[1515]: time="2026-03-07T01:11:18.026596431Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:18.031309 containerd[1515]: time="2026-03-07T01:11:18.030800347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:18.033513 containerd[1515]: time="2026-03-07T01:11:18.032121506Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 499.342912ms" Mar 7 01:11:18.033513 containerd[1515]: time="2026-03-07T01:11:18.032181489Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 01:11:18.033801 containerd[1515]: time="2026-03-07T01:11:18.033749737Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 7 01:11:18.532400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 01:11:18.538625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:11:18.583016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495974701.mount: Deactivated successfully. Mar 7 01:11:18.721451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:11:18.729604 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:11:18.759726 kubelet[2012]: E0307 01:11:18.758208 2012 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:11:18.761782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:11:18.761959 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:11:19.271960 containerd[1515]: time="2026-03-07T01:11:19.271918593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:19.272982 containerd[1515]: time="2026-03-07T01:11:19.272800253Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860762" Mar 7 01:11:19.273996 containerd[1515]: time="2026-03-07T01:11:19.273711364Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:19.279525 containerd[1515]: time="2026-03-07T01:11:19.279154074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:19.280298 containerd[1515]: time="2026-03-07T01:11:19.279740237Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.245950222s" Mar 7 01:11:19.280298 containerd[1515]: time="2026-03-07T01:11:19.279763463Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 7 01:11:21.881608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:11:21.890515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:11:21.933760 systemd[1]: Reloading requested from client PID 2096 ('systemctl') (unit session-7.scope)... Mar 7 01:11:21.933918 systemd[1]: Reloading... Mar 7 01:11:22.030334 zram_generator::config[2136]: No configuration found. Mar 7 01:11:22.109167 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:11:22.169391 systemd[1]: Reloading finished in 234 ms. Mar 7 01:11:22.210690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:11:22.214374 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:11:22.222711 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:11:22.224526 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:11:22.224910 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:11:22.230948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:11:22.341880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:11:22.347071 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:11:22.375254 kubelet[2197]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:11:22.375254 kubelet[2197]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:11:22.375254 kubelet[2197]: I0307 01:11:22.374525 2197 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:11:22.665291 kubelet[2197]: I0307 01:11:22.665237 2197 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 01:11:22.665291 kubelet[2197]: I0307 01:11:22.665259 2197 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:11:22.667301 kubelet[2197]: I0307 01:11:22.667274 2197 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:11:22.667301 kubelet[2197]: I0307 01:11:22.667290 2197 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 01:11:22.667463 kubelet[2197]: I0307 01:11:22.667441 2197 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:11:22.674288 kubelet[2197]: I0307 01:11:22.673113 2197 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:11:22.674288 kubelet[2197]: E0307 01:11:22.673429 2197 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://135.181.156.177:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 135.181.156.177:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:11:22.679627 kubelet[2197]: E0307 01:11:22.679582 2197 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:11:22.679627 kubelet[2197]: I0307 01:11:22.679617 2197 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 01:11:22.682393 kubelet[2197]: I0307 01:11:22.682350 2197 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 01:11:22.683797 kubelet[2197]: I0307 01:11:22.683755 2197 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:11:22.683886 kubelet[2197]: I0307 01:11:22.683777 2197 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-5ad0d165ec","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:11:22.683886 kubelet[2197]: I0307 01:11:22.683875 2197 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 
01:11:22.683886 kubelet[2197]: I0307 01:11:22.683882 2197 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 01:11:22.684110 kubelet[2197]: I0307 01:11:22.683961 2197 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:11:22.686213 kubelet[2197]: I0307 01:11:22.686182 2197 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:11:22.686377 kubelet[2197]: I0307 01:11:22.686352 2197 kubelet.go:475] "Attempting to sync node with API server" Mar 7 01:11:22.686377 kubelet[2197]: I0307 01:11:22.686364 2197 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:11:22.686465 kubelet[2197]: I0307 01:11:22.686383 2197 kubelet.go:387] "Adding apiserver pod source" Mar 7 01:11:22.686465 kubelet[2197]: I0307 01:11:22.686395 2197 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:11:22.688655 kubelet[2197]: E0307 01:11:22.687349 2197 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://135.181.156.177:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-5ad0d165ec&limit=500&resourceVersion=0\": dial tcp 135.181.156.177:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:11:22.689642 kubelet[2197]: I0307 01:11:22.689605 2197 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:11:22.690860 kubelet[2197]: I0307 01:11:22.690822 2197 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:11:22.691059 kubelet[2197]: I0307 01:11:22.691032 2197 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:11:22.691313 kubelet[2197]: W0307 
01:11:22.691250 2197 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 01:11:22.692140 kubelet[2197]: E0307 01:11:22.692105 2197 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://135.181.156.177:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 135.181.156.177:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:11:22.695925 kubelet[2197]: I0307 01:11:22.695902 2197 server.go:1262] "Started kubelet" Mar 7 01:11:22.698578 kubelet[2197]: I0307 01:11:22.698552 2197 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:11:22.700685 kubelet[2197]: E0307 01:11:22.698821 2197 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://135.181.156.177:6443/api/v1/namespaces/default/events\": dial tcp 135.181.156.177:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-5ad0d165ec.189a69eff7996716 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-5ad0d165ec,UID:ci-4081-3-6-n-5ad0d165ec,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-5ad0d165ec,},FirstTimestamp:2026-03-07 01:11:22.695837462 +0000 UTC m=+0.344647271,LastTimestamp:2026-03-07 01:11:22.695837462 +0000 UTC m=+0.344647271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-5ad0d165ec,}" Mar 7 01:11:22.700913 kubelet[2197]: I0307 01:11:22.700883 2197 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:11:22.702092 kubelet[2197]: I0307 01:11:22.702066 2197 server.go:310] "Adding debug handlers to kubelet server" Mar 7 
01:11:22.705174 kubelet[2197]: I0307 01:11:22.705137 2197 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:11:22.705174 kubelet[2197]: I0307 01:11:22.705177 2197 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 01:11:22.705327 kubelet[2197]: I0307 01:11:22.705314 2197 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:11:22.706163 kubelet[2197]: I0307 01:11:22.705440 2197 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:11:22.707999 kubelet[2197]: I0307 01:11:22.707973 2197 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 7 01:11:22.708080 kubelet[2197]: I0307 01:11:22.708025 2197 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 01:11:22.708080 kubelet[2197]: I0307 01:11:22.708052 2197 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 01:11:22.708539 kubelet[2197]: E0307 01:11:22.708258 2197 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://135.181.156.177:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 135.181.156.177:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:11:22.709700 kubelet[2197]: E0307 01:11:22.709676 2197 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:11:22.709786 kubelet[2197]: I0307 01:11:22.709733 2197 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:11:22.709786 kubelet[2197]: I0307 01:11:22.709739 2197 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:11:22.709786 kubelet[2197]: I0307 01:11:22.709777 2197 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:11:22.712753 kubelet[2197]: E0307 01:11:22.712704 2197 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-5ad0d165ec\" not found"
Mar 7 01:11:22.724309 kubelet[2197]: E0307 01:11:22.723161 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.156.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-5ad0d165ec?timeout=10s\": dial tcp 135.181.156.177:6443: connect: connection refused" interval="200ms"
Mar 7 01:11:22.730965 kubelet[2197]: I0307 01:11:22.730934 2197 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:11:22.730965 kubelet[2197]: I0307 01:11:22.730945 2197 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:11:22.730965 kubelet[2197]: I0307 01:11:22.730957 2197 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:11:22.733508 kubelet[2197]: I0307 01:11:22.733486 2197 policy_none.go:49] "None policy: Start"
Mar 7 01:11:22.733508 kubelet[2197]: I0307 01:11:22.733501 2197 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 7 01:11:22.733508 kubelet[2197]: I0307 01:11:22.733511 2197 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 7 01:11:22.736346 kubelet[2197]: I0307 01:11:22.736310 2197 policy_none.go:47] "Start"
Mar 7 01:11:22.744382 kubelet[2197]: I0307 01:11:22.743301 2197 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:11:22.746390 kubelet[2197]: I0307 01:11:22.745430 2197 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:11:22.746390 kubelet[2197]: I0307 01:11:22.745445 2197 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 7 01:11:22.746390 kubelet[2197]: I0307 01:11:22.745460 2197 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 7 01:11:22.746390 kubelet[2197]: E0307 01:11:22.745488 2197 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:11:22.745620 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 7 01:11:22.750562 kubelet[2197]: E0307 01:11:22.750545 2197 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://135.181.156.177:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 135.181.156.177:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:11:22.760439 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 7 01:11:22.763540 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 7 01:11:22.774029 kubelet[2197]: E0307 01:11:22.774005 2197 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:11:22.774161 kubelet[2197]: I0307 01:11:22.774141 2197 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:11:22.774192 kubelet[2197]: I0307 01:11:22.774165 2197 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:11:22.774688 kubelet[2197]: I0307 01:11:22.774634 2197 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:11:22.775550 kubelet[2197]: E0307 01:11:22.775506 2197 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:11:22.775601 kubelet[2197]: E0307 01:11:22.775557 2197 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-5ad0d165ec\" not found"
Mar 7 01:11:22.862231 systemd[1]: Created slice kubepods-burstable-pod33f3376ea28bccb77a235836b1216b3e.slice - libcontainer container kubepods-burstable-pod33f3376ea28bccb77a235836b1216b3e.slice.
Mar 7 01:11:22.878841 kubelet[2197]: I0307 01:11:22.877515 2197 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:22.878841 kubelet[2197]: E0307 01:11:22.878239 2197 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.156.177:6443/api/v1/nodes\": dial tcp 135.181.156.177:6443: connect: connection refused" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:22.879932 kubelet[2197]: E0307 01:11:22.879572 2197 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5ad0d165ec\" not found" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:22.884744 systemd[1]: Created slice kubepods-burstable-pod965b4845e0a5765f19d354d6dcae0cd1.slice - libcontainer container kubepods-burstable-pod965b4845e0a5765f19d354d6dcae0cd1.slice.
Mar 7 01:11:22.896170 kubelet[2197]: E0307 01:11:22.894596 2197 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5ad0d165ec\" not found" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:22.902762 systemd[1]: Created slice kubepods-burstable-pod40d26a8ac4a313ebd0183360fc8d3c4d.slice - libcontainer container kubepods-burstable-pod40d26a8ac4a313ebd0183360fc8d3c4d.slice.
Mar 7 01:11:22.906181 kubelet[2197]: E0307 01:11:22.905878 2197 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5ad0d165ec\" not found" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:22.924544 kubelet[2197]: E0307 01:11:22.924345 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.156.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-5ad0d165ec?timeout=10s\": dial tcp 135.181.156.177:6443: connect: connection refused" interval="400ms"
Mar 7 01:11:23.010230 kubelet[2197]: I0307 01:11:23.009931 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33f3376ea28bccb77a235836b1216b3e-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-5ad0d165ec\" (UID: \"33f3376ea28bccb77a235836b1216b3e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:23.010230 kubelet[2197]: I0307 01:11:23.010004 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33f3376ea28bccb77a235836b1216b3e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-5ad0d165ec\" (UID: \"33f3376ea28bccb77a235836b1216b3e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:23.010230 kubelet[2197]: I0307 01:11:23.010082 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/965b4845e0a5765f19d354d6dcae0cd1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-5ad0d165ec\" (UID: \"965b4845e0a5765f19d354d6dcae0cd1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:23.010230 kubelet[2197]: I0307 01:11:23.010110 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40d26a8ac4a313ebd0183360fc8d3c4d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-5ad0d165ec\" (UID: \"40d26a8ac4a313ebd0183360fc8d3c4d\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:23.010230 kubelet[2197]: I0307 01:11:23.010129 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33f3376ea28bccb77a235836b1216b3e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-5ad0d165ec\" (UID: \"33f3376ea28bccb77a235836b1216b3e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:23.010498 kubelet[2197]: I0307 01:11:23.010146 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/965b4845e0a5765f19d354d6dcae0cd1-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-5ad0d165ec\" (UID: \"965b4845e0a5765f19d354d6dcae0cd1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:23.010498 kubelet[2197]: I0307 01:11:23.010164 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/965b4845e0a5765f19d354d6dcae0cd1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-5ad0d165ec\" (UID: \"965b4845e0a5765f19d354d6dcae0cd1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:23.010498 kubelet[2197]: I0307 01:11:23.010179 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/965b4845e0a5765f19d354d6dcae0cd1-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-5ad0d165ec\" (UID: \"965b4845e0a5765f19d354d6dcae0cd1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:23.010498 kubelet[2197]: I0307 01:11:23.010214 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/965b4845e0a5765f19d354d6dcae0cd1-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-5ad0d165ec\" (UID: \"965b4845e0a5765f19d354d6dcae0cd1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:23.082217 kubelet[2197]: I0307 01:11:23.082001 2197 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:23.082428 kubelet[2197]: E0307 01:11:23.082392 2197 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.156.177:6443/api/v1/nodes\": dial tcp 135.181.156.177:6443: connect: connection refused" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:23.185013 containerd[1515]: time="2026-03-07T01:11:23.184849889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-5ad0d165ec,Uid:33f3376ea28bccb77a235836b1216b3e,Namespace:kube-system,Attempt:0,}"
Mar 7 01:11:23.201033 containerd[1515]: time="2026-03-07T01:11:23.200968000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-5ad0d165ec,Uid:965b4845e0a5765f19d354d6dcae0cd1,Namespace:kube-system,Attempt:0,}"
Mar 7 01:11:23.209772 containerd[1515]: time="2026-03-07T01:11:23.209722841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-5ad0d165ec,Uid:40d26a8ac4a313ebd0183360fc8d3c4d,Namespace:kube-system,Attempt:0,}"
Mar 7 01:11:23.326145 kubelet[2197]: E0307 01:11:23.326003 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.156.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-5ad0d165ec?timeout=10s\": dial tcp 135.181.156.177:6443: connect: connection refused" interval="800ms"
Mar 7 01:11:23.486133 kubelet[2197]: I0307 01:11:23.485949 2197 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:23.486998 kubelet[2197]: E0307 01:11:23.486483 2197 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.156.177:6443/api/v1/nodes\": dial tcp 135.181.156.177:6443: connect: connection refused" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:23.688609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427432036.mount: Deactivated successfully.
Mar 7 01:11:23.697194 containerd[1515]: time="2026-03-07T01:11:23.697103562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:11:23.698555 containerd[1515]: time="2026-03-07T01:11:23.698492619Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:11:23.699879 containerd[1515]: time="2026-03-07T01:11:23.699810994Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:11:23.700391 containerd[1515]: time="2026-03-07T01:11:23.700331644Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078"
Mar 7 01:11:23.702055 containerd[1515]: time="2026-03-07T01:11:23.702004960Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:11:23.703002 containerd[1515]: time="2026-03-07T01:11:23.702943851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:11:23.704330 containerd[1515]: time="2026-03-07T01:11:23.703741846Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:11:23.706017 containerd[1515]: time="2026-03-07T01:11:23.705886548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:11:23.707286 containerd[1515]: time="2026-03-07T01:11:23.706764862Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 521.814222ms"
Mar 7 01:11:23.708934 containerd[1515]: time="2026-03-07T01:11:23.708888543Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.079268ms"
Mar 7 01:11:23.710870 containerd[1515]: time="2026-03-07T01:11:23.710832869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 509.768237ms"
Mar 7 01:11:23.765960 kubelet[2197]: E0307 01:11:23.765877 2197 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://135.181.156.177:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 135.181.156.177:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:11:23.811703 containerd[1515]: time="2026-03-07T01:11:23.811501162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:11:23.811703 containerd[1515]: time="2026-03-07T01:11:23.811544659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:11:23.811703 containerd[1515]: time="2026-03-07T01:11:23.811554885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:23.811703 containerd[1515]: time="2026-03-07T01:11:23.811617806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:23.813047 containerd[1515]: time="2026-03-07T01:11:23.812222188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:11:23.813047 containerd[1515]: time="2026-03-07T01:11:23.812256148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:11:23.813047 containerd[1515]: time="2026-03-07T01:11:23.812275891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:23.813047 containerd[1515]: time="2026-03-07T01:11:23.812332148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:23.818113 containerd[1515]: time="2026-03-07T01:11:23.817115684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:11:23.818113 containerd[1515]: time="2026-03-07T01:11:23.817164608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:11:23.818113 containerd[1515]: time="2026-03-07T01:11:23.817182746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:23.818113 containerd[1515]: time="2026-03-07T01:11:23.817247443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:23.838165 systemd[1]: Started cri-containerd-a0d3a5b8b7254ade64ea0393815063a63cae7e741dfd2588543889953c649158.scope - libcontainer container a0d3a5b8b7254ade64ea0393815063a63cae7e741dfd2588543889953c649158.
Mar 7 01:11:23.842347 systemd[1]: Started cri-containerd-bf1324bdaba69c5f8324f3b735fdbe8bc08353a8a28d3824ee7d63804ac4433c.scope - libcontainer container bf1324bdaba69c5f8324f3b735fdbe8bc08353a8a28d3824ee7d63804ac4433c.
Mar 7 01:11:23.847318 systemd[1]: Started cri-containerd-cdba41843873901aa30bb8914876cbaec6b9ef8b95ea25dfead3dfc52f6d9ec7.scope - libcontainer container cdba41843873901aa30bb8914876cbaec6b9ef8b95ea25dfead3dfc52f6d9ec7.
Mar 7 01:11:23.888190 containerd[1515]: time="2026-03-07T01:11:23.888137774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-5ad0d165ec,Uid:40d26a8ac4a313ebd0183360fc8d3c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0d3a5b8b7254ade64ea0393815063a63cae7e741dfd2588543889953c649158\""
Mar 7 01:11:23.897225 containerd[1515]: time="2026-03-07T01:11:23.897150336Z" level=info msg="CreateContainer within sandbox \"a0d3a5b8b7254ade64ea0393815063a63cae7e741dfd2588543889953c649158\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 7 01:11:23.897827 containerd[1515]: time="2026-03-07T01:11:23.897808950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-5ad0d165ec,Uid:33f3376ea28bccb77a235836b1216b3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf1324bdaba69c5f8324f3b735fdbe8bc08353a8a28d3824ee7d63804ac4433c\""
Mar 7 01:11:23.903080 containerd[1515]: time="2026-03-07T01:11:23.903056021Z" level=info msg="CreateContainer within sandbox \"bf1324bdaba69c5f8324f3b735fdbe8bc08353a8a28d3824ee7d63804ac4433c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 7 01:11:23.910984 containerd[1515]: time="2026-03-07T01:11:23.910955493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-5ad0d165ec,Uid:965b4845e0a5765f19d354d6dcae0cd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdba41843873901aa30bb8914876cbaec6b9ef8b95ea25dfead3dfc52f6d9ec7\""
Mar 7 01:11:23.914596 containerd[1515]: time="2026-03-07T01:11:23.914578472Z" level=info msg="CreateContainer within sandbox \"cdba41843873901aa30bb8914876cbaec6b9ef8b95ea25dfead3dfc52f6d9ec7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 7 01:11:23.917468 containerd[1515]: time="2026-03-07T01:11:23.917388941Z" level=info msg="CreateContainer within sandbox \"a0d3a5b8b7254ade64ea0393815063a63cae7e741dfd2588543889953c649158\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d992fe436a350e974660b788031099b6d8363eb44a5d2a14dea6b5c1d816b0cb\""
Mar 7 01:11:23.917923 containerd[1515]: time="2026-03-07T01:11:23.917904512Z" level=info msg="StartContainer for \"d992fe436a350e974660b788031099b6d8363eb44a5d2a14dea6b5c1d816b0cb\""
Mar 7 01:11:23.925778 containerd[1515]: time="2026-03-07T01:11:23.925730748Z" level=info msg="CreateContainer within sandbox \"bf1324bdaba69c5f8324f3b735fdbe8bc08353a8a28d3824ee7d63804ac4433c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a779f63e51760aeb7e0d2c4cd5f647cdb6012a3fcc9c88365752a2953eb026c8\""
Mar 7 01:11:23.927182 containerd[1515]: time="2026-03-07T01:11:23.927157716Z" level=info msg="StartContainer for \"a779f63e51760aeb7e0d2c4cd5f647cdb6012a3fcc9c88365752a2953eb026c8\""
Mar 7 01:11:23.938059 containerd[1515]: time="2026-03-07T01:11:23.938002589Z" level=info msg="CreateContainer within sandbox \"cdba41843873901aa30bb8914876cbaec6b9ef8b95ea25dfead3dfc52f6d9ec7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"19cd0ef86b8d3c5aede58794f21af74082a4c76e9ea7602f0b300cfcda9cdb2f\""
Mar 7 01:11:23.938399 containerd[1515]: time="2026-03-07T01:11:23.938386133Z" level=info msg="StartContainer for \"19cd0ef86b8d3c5aede58794f21af74082a4c76e9ea7602f0b300cfcda9cdb2f\""
Mar 7 01:11:23.948430 systemd[1]: Started cri-containerd-d992fe436a350e974660b788031099b6d8363eb44a5d2a14dea6b5c1d816b0cb.scope - libcontainer container d992fe436a350e974660b788031099b6d8363eb44a5d2a14dea6b5c1d816b0cb.
Mar 7 01:11:23.960866 systemd[1]: Started cri-containerd-a779f63e51760aeb7e0d2c4cd5f647cdb6012a3fcc9c88365752a2953eb026c8.scope - libcontainer container a779f63e51760aeb7e0d2c4cd5f647cdb6012a3fcc9c88365752a2953eb026c8.
Mar 7 01:11:23.974722 systemd[1]: Started cri-containerd-19cd0ef86b8d3c5aede58794f21af74082a4c76e9ea7602f0b300cfcda9cdb2f.scope - libcontainer container 19cd0ef86b8d3c5aede58794f21af74082a4c76e9ea7602f0b300cfcda9cdb2f.
Mar 7 01:11:24.021334 containerd[1515]: time="2026-03-07T01:11:24.020257647Z" level=info msg="StartContainer for \"d992fe436a350e974660b788031099b6d8363eb44a5d2a14dea6b5c1d816b0cb\" returns successfully"
Mar 7 01:11:24.025785 containerd[1515]: time="2026-03-07T01:11:24.025727680Z" level=info msg="StartContainer for \"a779f63e51760aeb7e0d2c4cd5f647cdb6012a3fcc9c88365752a2953eb026c8\" returns successfully"
Mar 7 01:11:24.027541 kubelet[2197]: E0307 01:11:24.027498 2197 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://135.181.156.177:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 135.181.156.177:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:11:24.033527 containerd[1515]: time="2026-03-07T01:11:24.033501132Z" level=info msg="StartContainer for \"19cd0ef86b8d3c5aede58794f21af74082a4c76e9ea7602f0b300cfcda9cdb2f\" returns successfully"
Mar 7 01:11:24.289398 kubelet[2197]: I0307 01:11:24.288956 2197 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:24.770159 kubelet[2197]: E0307 01:11:24.769973 2197 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5ad0d165ec\" not found" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:24.770481 kubelet[2197]: E0307 01:11:24.770226 2197 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5ad0d165ec\" not found" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:24.771662 kubelet[2197]: E0307 01:11:24.771647 2197 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5ad0d165ec\" not found" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:25.257219 kubelet[2197]: E0307 01:11:25.257186 2197 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-5ad0d165ec\" not found" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:25.313128 kubelet[2197]: I0307 01:11:25.313087 2197 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:25.313528 kubelet[2197]: I0307 01:11:25.313302 2197 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:25.360966 kubelet[2197]: E0307 01:11:25.360932 2197 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-5ad0d165ec\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:25.360966 kubelet[2197]: I0307 01:11:25.360959 2197 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:25.362297 kubelet[2197]: E0307 01:11:25.362250 2197 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-5ad0d165ec\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:25.362297 kubelet[2197]: I0307 01:11:25.362275 2197 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:25.368338 kubelet[2197]: E0307 01:11:25.368319 2197 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-5ad0d165ec\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:25.693833 kubelet[2197]: I0307 01:11:25.693751 2197 apiserver.go:52] "Watching apiserver"
Mar 7 01:11:25.709202 kubelet[2197]: I0307 01:11:25.709127 2197 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 7 01:11:25.773029 kubelet[2197]: I0307 01:11:25.772817 2197 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:25.773029 kubelet[2197]: I0307 01:11:25.772887 2197 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:25.775838 kubelet[2197]: E0307 01:11:25.775587 2197 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-5ad0d165ec\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:25.776719 kubelet[2197]: E0307 01:11:25.776677 2197 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-5ad0d165ec\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:26.774024 kubelet[2197]: I0307 01:11:26.773980 2197 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec"
Mar 7 01:11:27.346713 systemd[1]: Reloading requested from client PID 2479 ('systemctl') (unit session-7.scope)...
Mar 7 01:11:27.346742 systemd[1]: Reloading...
Mar 7 01:11:27.454329 zram_generator::config[2517]: No configuration found.
Mar 7 01:11:27.552531 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:11:27.626240 systemd[1]: Reloading finished in 278 ms.
Mar 7 01:11:27.672558 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:11:27.694650 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:11:27.694839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:11:27.701678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:11:27.833713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:11:27.843188 (kubelet)[2570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:11:27.880839 kubelet[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:11:27.880839 kubelet[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:11:27.880839 kubelet[2570]: I0307 01:11:27.880213 2570 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:11:27.886912 kubelet[2570]: I0307 01:11:27.886312 2570 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 7 01:11:27.886912 kubelet[2570]: I0307 01:11:27.886331 2570 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:11:27.886912 kubelet[2570]: I0307 01:11:27.886354 2570 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 01:11:27.886912 kubelet[2570]: I0307 01:11:27.886363 2570 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:11:27.886912 kubelet[2570]: I0307 01:11:27.886529 2570 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:11:27.887515 kubelet[2570]: I0307 01:11:27.887498 2570 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 7 01:11:27.889246 kubelet[2570]: I0307 01:11:27.889223 2570 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:11:27.891103 kubelet[2570]: E0307 01:11:27.891077 2570 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:11:27.891163 kubelet[2570]: I0307 01:11:27.891111 2570 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:11:27.893794 kubelet[2570]: I0307 01:11:27.893779 2570 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 7 01:11:27.893967 kubelet[2570]: I0307 01:11:27.893944 2570 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:11:27.894068 kubelet[2570]: I0307 01:11:27.893964 2570 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-5ad0d165ec","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:11:27.894068 kubelet[2570]: I0307 01:11:27.894066 2570 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:11:27.894144 kubelet[2570]: I0307 01:11:27.894073 2570 container_manager_linux.go:306] "Creating device plugin manager"
Mar 7 01:11:27.894144 kubelet[2570]: I0307 01:11:27.894090 2570 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 7 01:11:27.894215 kubelet[2570]: I0307 01:11:27.894202 2570 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:11:27.894341 kubelet[2570]: I0307 01:11:27.894332 2570 kubelet.go:475] "Attempting to sync node with API server"
Mar 7 01:11:27.894370 kubelet[2570]: I0307 01:11:27.894347 2570 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:11:27.894370 kubelet[2570]: I0307 01:11:27.894362 2570 kubelet.go:387] "Adding apiserver pod source"
Mar 7 01:11:27.894370 kubelet[2570]: I0307 01:11:27.894370 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:11:27.900514 kubelet[2570]: I0307 01:11:27.897465 2570 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:11:27.900514 kubelet[2570]: I0307 01:11:27.897784 2570 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:11:27.900514 kubelet[2570]: I0307 01:11:27.897799 2570 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 7 01:11:27.900624 kubelet[2570]: I0307 01:11:27.900526 2570 server.go:1262] "Started kubelet"
Mar 7 01:11:27.905597 kubelet[2570]: I0307 01:11:27.905578 2570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:11:27.905865 kubelet[2570]: I0307 01:11:27.905843 2570 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:11:27.906836 kubelet[2570]: I0307 01:11:27.906802 2570 server.go:310] "Adding debug handlers to kubelet server"
Mar 7 01:11:27.909969 kubelet[2570]: I0307 01:11:27.909944 2570 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:11:27.910070 kubelet[2570]: I0307 01:11:27.910060 2570 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 01:11:27.910227 kubelet[2570]: I0307 01:11:27.910218 2570 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:11:27.913900 kubelet[2570]: I0307 01:11:27.913889 2570 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 7 01:11:27.914180 kubelet[2570]: I0307 01:11:27.914170 2570 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:11:27.917017 kubelet[2570]: I0307 01:11:27.917004 2570 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 01:11:27.917179 kubelet[2570]: I0307 01:11:27.917172 2570 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 01:11:27.918419 kubelet[2570]: E0307 01:11:27.918402 2570 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:11:27.919463 kubelet[2570]: I0307 01:11:27.919446 2570 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:11:27.920345 kubelet[2570]: I0307 01:11:27.920329 2570 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:11:27.922386 kubelet[2570]: I0307 01:11:27.922366 2570 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:11:27.924661 kubelet[2570]: I0307 01:11:27.924635 2570 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:11:27.925681 kubelet[2570]: I0307 01:11:27.925671 2570 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:11:27.925733 kubelet[2570]: I0307 01:11:27.925727 2570 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 7 01:11:27.925772 kubelet[2570]: I0307 01:11:27.925766 2570 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 7 01:11:27.925833 kubelet[2570]: E0307 01:11:27.925823 2570 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:11:27.965401 kubelet[2570]: I0307 01:11:27.965372 2570 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:11:27.965541 kubelet[2570]: I0307 01:11:27.965532 2570 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:11:27.965581 kubelet[2570]: I0307 01:11:27.965576 2570 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:11:27.965740 kubelet[2570]: I0307 01:11:27.965731 2570 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 7 01:11:27.965805 kubelet[2570]: I0307 01:11:27.965792 2570 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 7 01:11:27.965838 kubelet[2570]: I0307 01:11:27.965833 2570 policy_none.go:49] "None policy: Start"
Mar 7 01:11:27.965887 kubelet[2570]: I0307 01:11:27.965881 2570 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 7 01:11:27.965955 kubelet[2570]: I0307 01:11:27.965921 2570 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 7 01:11:27.966067 kubelet[2570]: I0307 01:11:27.966060 2570 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 7 01:11:27.966098 kubelet[2570]: I0307 01:11:27.966093 2570 policy_none.go:47] "Start"
Mar 7 01:11:27.969378 kubelet[2570]: E0307 01:11:27.969366 2570 manager.go:513] "Failed to read data from checkpoint"
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:11:27.969563 kubelet[2570]: I0307 01:11:27.969553 2570 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:11:27.970493 kubelet[2570]: I0307 01:11:27.970388 2570 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:11:27.970740 kubelet[2570]: I0307 01:11:27.970728 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:11:27.971623 kubelet[2570]: E0307 01:11:27.971612 2570 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:11:28.027438 kubelet[2570]: I0307 01:11:28.027412 2570 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.027792 kubelet[2570]: I0307 01:11:28.027571 2570 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.027868 kubelet[2570]: I0307 01:11:28.027675 2570 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.034742 kubelet[2570]: E0307 01:11:28.034666 2570 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-5ad0d165ec\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.077831 kubelet[2570]: I0307 01:11:28.077745 2570 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.088830 kubelet[2570]: I0307 01:11:28.088673 2570 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.088830 kubelet[2570]: I0307 01:11:28.088837 2570 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.218657 kubelet[2570]: 
I0307 01:11:28.218011 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33f3376ea28bccb77a235836b1216b3e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-5ad0d165ec\" (UID: \"33f3376ea28bccb77a235836b1216b3e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.218657 kubelet[2570]: I0307 01:11:28.218075 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/965b4845e0a5765f19d354d6dcae0cd1-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-5ad0d165ec\" (UID: \"965b4845e0a5765f19d354d6dcae0cd1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.218657 kubelet[2570]: I0307 01:11:28.218115 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/965b4845e0a5765f19d354d6dcae0cd1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-5ad0d165ec\" (UID: \"965b4845e0a5765f19d354d6dcae0cd1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.218657 kubelet[2570]: I0307 01:11:28.218151 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/965b4845e0a5765f19d354d6dcae0cd1-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-5ad0d165ec\" (UID: \"965b4845e0a5765f19d354d6dcae0cd1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.218657 kubelet[2570]: I0307 01:11:28.218186 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33f3376ea28bccb77a235836b1216b3e-ca-certs\") pod 
\"kube-apiserver-ci-4081-3-6-n-5ad0d165ec\" (UID: \"33f3376ea28bccb77a235836b1216b3e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.219232 kubelet[2570]: I0307 01:11:28.218219 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33f3376ea28bccb77a235836b1216b3e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-5ad0d165ec\" (UID: \"33f3376ea28bccb77a235836b1216b3e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.219232 kubelet[2570]: I0307 01:11:28.218250 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/965b4845e0a5765f19d354d6dcae0cd1-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-5ad0d165ec\" (UID: \"965b4845e0a5765f19d354d6dcae0cd1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.219232 kubelet[2570]: I0307 01:11:28.218314 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/965b4845e0a5765f19d354d6dcae0cd1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-5ad0d165ec\" (UID: \"965b4845e0a5765f19d354d6dcae0cd1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.219232 kubelet[2570]: I0307 01:11:28.218339 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40d26a8ac4a313ebd0183360fc8d3c4d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-5ad0d165ec\" (UID: \"40d26a8ac4a313ebd0183360fc8d3c4d\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.357050 sudo[2607]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 7 
01:11:28.357939 sudo[2607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 7 01:11:28.776614 sudo[2607]: pam_unix(sudo:session): session closed for user root Mar 7 01:11:28.897197 kubelet[2570]: I0307 01:11:28.895727 2570 apiserver.go:52] "Watching apiserver" Mar 7 01:11:28.918208 kubelet[2570]: I0307 01:11:28.918109 2570 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:11:28.954492 kubelet[2570]: I0307 01:11:28.954310 2570 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:28.972317 kubelet[2570]: E0307 01:11:28.970797 2570 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-5ad0d165ec\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec" Mar 7 01:11:29.008409 kubelet[2570]: I0307 01:11:29.008367 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5ad0d165ec" podStartSLOduration=1.008257541 podStartE2EDuration="1.008257541s" podCreationTimestamp="2026-03-07 01:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:11:29.008064565 +0000 UTC m=+1.160237899" watchObservedRunningTime="2026-03-07 01:11:29.008257541 +0000 UTC m=+1.160430885" Mar 7 01:11:29.008862 kubelet[2570]: I0307 01:11:29.008831 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-5ad0d165ec" podStartSLOduration=1.00882562 podStartE2EDuration="1.00882562s" podCreationTimestamp="2026-03-07 01:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:11:28.995384003 +0000 UTC m=+1.147557347" watchObservedRunningTime="2026-03-07 01:11:29.00882562 +0000 
UTC m=+1.160998964" Mar 7 01:11:29.024510 kubelet[2570]: I0307 01:11:29.024480 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5ad0d165ec" podStartSLOduration=3.024470083 podStartE2EDuration="3.024470083s" podCreationTimestamp="2026-03-07 01:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:11:29.015980393 +0000 UTC m=+1.168153727" watchObservedRunningTime="2026-03-07 01:11:29.024470083 +0000 UTC m=+1.176643427" Mar 7 01:11:30.184791 sudo[1709]: pam_unix(sudo:session): session closed for user root Mar 7 01:11:30.304994 sshd[1706]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:30.313511 systemd[1]: sshd@6-135.181.156.177:22-4.153.228.146:50872.service: Deactivated successfully. Mar 7 01:11:30.317899 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:11:30.318507 systemd[1]: session-7.scope: Consumed 4.746s CPU time, 157.0M memory peak, 0B memory swap peak. Mar 7 01:11:30.320395 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:11:30.322994 systemd-logind[1487]: Removed session 7. Mar 7 01:11:34.042501 kubelet[2570]: I0307 01:11:34.042367 2570 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:11:34.044394 containerd[1515]: time="2026-03-07T01:11:34.043508771Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 01:11:34.045035 kubelet[2570]: I0307 01:11:34.043932 2570 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:11:35.151635 systemd[1]: Created slice kubepods-besteffort-pod064d7b8f_5f26_481a_8f5d_59f32fc8dbf4.slice - libcontainer container kubepods-besteffort-pod064d7b8f_5f26_481a_8f5d_59f32fc8dbf4.slice. 
Mar 7 01:11:35.161051 kubelet[2570]: I0307 01:11:35.160766 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-xtables-lock\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.161051 kubelet[2570]: I0307 01:11:35.160797 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3711277-b527-45ae-a3f9-4ba9185cbf09-clustermesh-secrets\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.161051 kubelet[2570]: I0307 01:11:35.160808 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3711277-b527-45ae-a3f9-4ba9185cbf09-cilium-config-path\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.161051 kubelet[2570]: I0307 01:11:35.160819 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/064d7b8f-5f26-481a-8f5d-59f32fc8dbf4-xtables-lock\") pod \"kube-proxy-ztrb4\" (UID: \"064d7b8f-5f26-481a-8f5d-59f32fc8dbf4\") " pod="kube-system/kube-proxy-ztrb4" Mar 7 01:11:35.161051 kubelet[2570]: I0307 01:11:35.160829 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v44bb\" (UniqueName: \"kubernetes.io/projected/064d7b8f-5f26-481a-8f5d-59f32fc8dbf4-kube-api-access-v44bb\") pod \"kube-proxy-ztrb4\" (UID: \"064d7b8f-5f26-481a-8f5d-59f32fc8dbf4\") " pod="kube-system/kube-proxy-ztrb4" Mar 7 01:11:35.161473 kubelet[2570]: I0307 01:11:35.160839 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-cilium-cgroup\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.161473 kubelet[2570]: I0307 01:11:35.160848 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-etc-cni-netd\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.161473 kubelet[2570]: I0307 01:11:35.160858 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-host-proc-sys-kernel\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.161473 kubelet[2570]: I0307 01:11:35.160870 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-cni-path\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.161473 kubelet[2570]: I0307 01:11:35.160879 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6k8l\" (UniqueName: \"kubernetes.io/projected/e3711277-b527-45ae-a3f9-4ba9185cbf09-kube-api-access-x6k8l\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.161473 kubelet[2570]: I0307 01:11:35.160888 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-cilium-run\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.161575 kubelet[2570]: I0307 01:11:35.160898 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/064d7b8f-5f26-481a-8f5d-59f32fc8dbf4-kube-proxy\") pod \"kube-proxy-ztrb4\" (UID: \"064d7b8f-5f26-481a-8f5d-59f32fc8dbf4\") " pod="kube-system/kube-proxy-ztrb4" Mar 7 01:11:35.161575 kubelet[2570]: I0307 01:11:35.160906 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-bpf-maps\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.161575 kubelet[2570]: I0307 01:11:35.160916 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-host-proc-sys-net\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.161575 kubelet[2570]: I0307 01:11:35.160925 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3711277-b527-45ae-a3f9-4ba9185cbf09-hubble-tls\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.161575 kubelet[2570]: I0307 01:11:35.160935 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/064d7b8f-5f26-481a-8f5d-59f32fc8dbf4-lib-modules\") pod \"kube-proxy-ztrb4\" (UID: \"064d7b8f-5f26-481a-8f5d-59f32fc8dbf4\") 
" pod="kube-system/kube-proxy-ztrb4" Mar 7 01:11:35.161575 kubelet[2570]: I0307 01:11:35.160945 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-hostproc\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.161700 kubelet[2570]: I0307 01:11:35.160957 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-lib-modules\") pod \"cilium-wtbw4\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") " pod="kube-system/cilium-wtbw4" Mar 7 01:11:35.173425 systemd[1]: Created slice kubepods-burstable-pode3711277_b527_45ae_a3f9_4ba9185cbf09.slice - libcontainer container kubepods-burstable-pode3711277_b527_45ae_a3f9_4ba9185cbf09.slice. Mar 7 01:11:35.326776 systemd[1]: Created slice kubepods-besteffort-podb3a08d53_44b4_4b6b_b9ce_4d01e1ec7fd0.slice - libcontainer container kubepods-besteffort-podb3a08d53_44b4_4b6b_b9ce_4d01e1ec7fd0.slice. 
Mar 7 01:11:35.363757 kubelet[2570]: I0307 01:11:35.363684 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxbc7\" (UniqueName: \"kubernetes.io/projected/b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0-kube-api-access-vxbc7\") pod \"cilium-operator-6f9c7c5859-bhmld\" (UID: \"b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0\") " pod="kube-system/cilium-operator-6f9c7c5859-bhmld" Mar 7 01:11:35.363757 kubelet[2570]: I0307 01:11:35.363720 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-bhmld\" (UID: \"b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0\") " pod="kube-system/cilium-operator-6f9c7c5859-bhmld" Mar 7 01:11:35.477666 containerd[1515]: time="2026-03-07T01:11:35.474818533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ztrb4,Uid:064d7b8f-5f26-481a-8f5d-59f32fc8dbf4,Namespace:kube-system,Attempt:0,}" Mar 7 01:11:35.480898 containerd[1515]: time="2026-03-07T01:11:35.480258740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wtbw4,Uid:e3711277-b527-45ae-a3f9-4ba9185cbf09,Namespace:kube-system,Attempt:0,}" Mar 7 01:11:35.531115 containerd[1515]: time="2026-03-07T01:11:35.530932473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:11:35.531115 containerd[1515]: time="2026-03-07T01:11:35.531001221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:11:35.531115 containerd[1515]: time="2026-03-07T01:11:35.531008807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:11:35.531115 containerd[1515]: time="2026-03-07T01:11:35.531073376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:11:35.534014 containerd[1515]: time="2026-03-07T01:11:35.533857714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:11:35.534102 containerd[1515]: time="2026-03-07T01:11:35.533961365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:11:35.534102 containerd[1515]: time="2026-03-07T01:11:35.533975288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:11:35.534369 containerd[1515]: time="2026-03-07T01:11:35.534313659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:11:35.547558 systemd[1]: Started cri-containerd-a0a4b4a7b5be4ac0c1be7de99ea77134f5a96338adcf3600ad4295eb4e37cf1b.scope - libcontainer container a0a4b4a7b5be4ac0c1be7de99ea77134f5a96338adcf3600ad4295eb4e37cf1b. Mar 7 01:11:35.550619 systemd[1]: Started cri-containerd-7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433.scope - libcontainer container 7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433. 
Mar 7 01:11:35.573634 containerd[1515]: time="2026-03-07T01:11:35.573596569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ztrb4,Uid:064d7b8f-5f26-481a-8f5d-59f32fc8dbf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0a4b4a7b5be4ac0c1be7de99ea77134f5a96338adcf3600ad4295eb4e37cf1b\"" Mar 7 01:11:35.575658 containerd[1515]: time="2026-03-07T01:11:35.575593141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wtbw4,Uid:e3711277-b527-45ae-a3f9-4ba9185cbf09,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\"" Mar 7 01:11:35.579649 containerd[1515]: time="2026-03-07T01:11:35.579111241Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 7 01:11:35.581693 containerd[1515]: time="2026-03-07T01:11:35.581605027Z" level=info msg="CreateContainer within sandbox \"a0a4b4a7b5be4ac0c1be7de99ea77134f5a96338adcf3600ad4295eb4e37cf1b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:11:35.596871 containerd[1515]: time="2026-03-07T01:11:35.596846422Z" level=info msg="CreateContainer within sandbox \"a0a4b4a7b5be4ac0c1be7de99ea77134f5a96338adcf3600ad4295eb4e37cf1b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"06606fa7858a46f548f2567475a8b5ec42b3c764308dd5c35b079617e5c14b4c\"" Mar 7 01:11:35.598060 containerd[1515]: time="2026-03-07T01:11:35.597479361Z" level=info msg="StartContainer for \"06606fa7858a46f548f2567475a8b5ec42b3c764308dd5c35b079617e5c14b4c\"" Mar 7 01:11:35.618392 systemd[1]: Started cri-containerd-06606fa7858a46f548f2567475a8b5ec42b3c764308dd5c35b079617e5c14b4c.scope - libcontainer container 06606fa7858a46f548f2567475a8b5ec42b3c764308dd5c35b079617e5c14b4c. 
Mar 7 01:11:35.631507 containerd[1515]: time="2026-03-07T01:11:35.631435431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-bhmld,Uid:b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0,Namespace:kube-system,Attempt:0,}" Mar 7 01:11:35.641930 containerd[1515]: time="2026-03-07T01:11:35.641836702Z" level=info msg="StartContainer for \"06606fa7858a46f548f2567475a8b5ec42b3c764308dd5c35b079617e5c14b4c\" returns successfully" Mar 7 01:11:35.655296 containerd[1515]: time="2026-03-07T01:11:35.655154640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:11:35.655844 containerd[1515]: time="2026-03-07T01:11:35.655715014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:11:35.655844 containerd[1515]: time="2026-03-07T01:11:35.655763471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:11:35.655977 containerd[1515]: time="2026-03-07T01:11:35.655959228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:11:35.676428 systemd[1]: Started cri-containerd-241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f.scope - libcontainer container 241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f. 
Mar 7 01:11:35.710821 containerd[1515]: time="2026-03-07T01:11:35.710747578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-bhmld,Uid:b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\"" Mar 7 01:11:37.364477 kubelet[2570]: I0307 01:11:37.364406 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ztrb4" podStartSLOduration=2.363803049 podStartE2EDuration="2.363803049s" podCreationTimestamp="2026-03-07 01:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:11:35.987455294 +0000 UTC m=+8.139628668" watchObservedRunningTime="2026-03-07 01:11:37.363803049 +0000 UTC m=+9.515976413" Mar 7 01:11:39.442032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092406609.mount: Deactivated successfully. 
Mar 7 01:11:40.689447 containerd[1515]: time="2026-03-07T01:11:40.689372603Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:40.690682 containerd[1515]: time="2026-03-07T01:11:40.690516896Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 7 01:11:40.691778 containerd[1515]: time="2026-03-07T01:11:40.691743907Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:40.693413 containerd[1515]: time="2026-03-07T01:11:40.693385438Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.114249296s" Mar 7 01:11:40.693461 containerd[1515]: time="2026-03-07T01:11:40.693414011Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 7 01:11:40.695140 containerd[1515]: time="2026-03-07T01:11:40.695109938Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 7 01:11:40.697631 containerd[1515]: time="2026-03-07T01:11:40.697550035Z" level=info msg="CreateContainer within sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 01:11:40.717040 containerd[1515]: time="2026-03-07T01:11:40.717006442Z" level=info msg="CreateContainer within sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d\"" Mar 7 01:11:40.717597 containerd[1515]: time="2026-03-07T01:11:40.717545771Z" level=info msg="StartContainer for \"5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d\"" Mar 7 01:11:40.754415 systemd[1]: Started cri-containerd-5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d.scope - libcontainer container 5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d. Mar 7 01:11:40.778592 containerd[1515]: time="2026-03-07T01:11:40.778482988Z" level=info msg="StartContainer for \"5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d\" returns successfully" Mar 7 01:11:40.791066 systemd[1]: cri-containerd-5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d.scope: Deactivated successfully. 
Mar 7 01:11:40.891661 containerd[1515]: time="2026-03-07T01:11:40.891605164Z" level=info msg="shim disconnected" id=5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d namespace=k8s.io Mar 7 01:11:40.891661 containerd[1515]: time="2026-03-07T01:11:40.891648393Z" level=warning msg="cleaning up after shim disconnected" id=5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d namespace=k8s.io Mar 7 01:11:40.891661 containerd[1515]: time="2026-03-07T01:11:40.891655291Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:11:40.990130 containerd[1515]: time="2026-03-07T01:11:40.989604186Z" level=info msg="CreateContainer within sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 01:11:41.007002 containerd[1515]: time="2026-03-07T01:11:41.006946461Z" level=info msg="CreateContainer within sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5\"" Mar 7 01:11:41.007865 containerd[1515]: time="2026-03-07T01:11:41.007828493Z" level=info msg="StartContainer for \"adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5\"" Mar 7 01:11:41.038796 systemd[1]: Started cri-containerd-adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5.scope - libcontainer container adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5. Mar 7 01:11:41.070464 containerd[1515]: time="2026-03-07T01:11:41.070428194Z" level=info msg="StartContainer for \"adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5\" returns successfully" Mar 7 01:11:41.080636 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:11:41.080933 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Mar 7 01:11:41.081018 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:11:41.088559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:11:41.088794 systemd[1]: cri-containerd-adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5.scope: Deactivated successfully.
Mar 7 01:11:41.102551 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:11:41.113787 containerd[1515]: time="2026-03-07T01:11:41.113727991Z" level=info msg="shim disconnected" id=adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5 namespace=k8s.io
Mar 7 01:11:41.113787 containerd[1515]: time="2026-03-07T01:11:41.113782259Z" level=warning msg="cleaning up after shim disconnected" id=adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5 namespace=k8s.io
Mar 7 01:11:41.113787 containerd[1515]: time="2026-03-07T01:11:41.113789987Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:11:41.204918 update_engine[1490]: I20260307 01:11:41.204834  1490 update_attempter.cc:509] Updating boot flags...
Mar 7 01:11:41.246363 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3106)
Mar 7 01:11:41.313608 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3105)
Mar 7 01:11:41.707604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d-rootfs.mount: Deactivated successfully.
Mar 7 01:11:41.999435 containerd[1515]: time="2026-03-07T01:11:41.999025903Z" level=info msg="CreateContainer within sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 7 01:11:42.033107 containerd[1515]: time="2026-03-07T01:11:42.033040199Z" level=info msg="CreateContainer within sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3\""
Mar 7 01:11:42.035330 containerd[1515]: time="2026-03-07T01:11:42.035134669Z" level=info msg="StartContainer for \"8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3\""
Mar 7 01:11:42.076392 systemd[1]: Started cri-containerd-8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3.scope - libcontainer container 8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3.
Mar 7 01:11:42.101236 containerd[1515]: time="2026-03-07T01:11:42.101164039Z" level=info msg="StartContainer for \"8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3\" returns successfully"
Mar 7 01:11:42.104882 systemd[1]: cri-containerd-8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3.scope: Deactivated successfully.
Mar 7 01:11:42.127248 containerd[1515]: time="2026-03-07T01:11:42.127212441Z" level=info msg="shim disconnected" id=8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3 namespace=k8s.io
Mar 7 01:11:42.127439 containerd[1515]: time="2026-03-07T01:11:42.127400826Z" level=warning msg="cleaning up after shim disconnected" id=8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3 namespace=k8s.io
Mar 7 01:11:42.127439 containerd[1515]: time="2026-03-07T01:11:42.127413804Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:11:42.705822 systemd[1]: run-containerd-runc-k8s.io-8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3-runc.E7fEaB.mount: Deactivated successfully.
Mar 7 01:11:42.705919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3-rootfs.mount: Deactivated successfully.
Mar 7 01:11:42.749697 containerd[1515]: time="2026-03-07T01:11:42.749649144Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:11:42.750664 containerd[1515]: time="2026-03-07T01:11:42.750545087Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 7 01:11:42.752594 containerd[1515]: time="2026-03-07T01:11:42.751540472Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:11:42.752689 containerd[1515]: time="2026-03-07T01:11:42.752665393Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.057516163s"
Mar 7 01:11:42.752737 containerd[1515]: time="2026-03-07T01:11:42.752726851Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 7 01:11:42.756597 containerd[1515]: time="2026-03-07T01:11:42.756575464Z" level=info msg="CreateContainer within sandbox \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 7 01:11:42.767422 containerd[1515]: time="2026-03-07T01:11:42.767397871Z" level=info msg="CreateContainer within sandbox \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\""
Mar 7 01:11:42.769453 containerd[1515]: time="2026-03-07T01:11:42.769435771Z" level=info msg="StartContainer for \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\""
Mar 7 01:11:42.795830 systemd[1]: Started cri-containerd-b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa.scope - libcontainer container b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa.
Mar 7 01:11:42.819292 containerd[1515]: time="2026-03-07T01:11:42.818840190Z" level=info msg="StartContainer for \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\" returns successfully"
Mar 7 01:11:42.999480 containerd[1515]: time="2026-03-07T01:11:42.999378338Z" level=info msg="CreateContainer within sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 7 01:11:43.013721 containerd[1515]: time="2026-03-07T01:11:43.013666246Z" level=info msg="CreateContainer within sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16\""
Mar 7 01:11:43.019527 containerd[1515]: time="2026-03-07T01:11:43.018203685Z" level=info msg="StartContainer for \"c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16\""
Mar 7 01:11:43.024822 kubelet[2570]: I0307 01:11:43.024304    2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-bhmld" podStartSLOduration=0.982554743 podStartE2EDuration="8.024291524s" podCreationTimestamp="2026-03-07 01:11:35 +0000 UTC" firstStartedPulling="2026-03-07 01:11:35.711729992 +0000 UTC m=+7.863903326" lastFinishedPulling="2026-03-07 01:11:42.753466763 +0000 UTC m=+14.905640107" observedRunningTime="2026-03-07 01:11:43.006302756 +0000 UTC m=+15.158476090" watchObservedRunningTime="2026-03-07 01:11:43.024291524 +0000 UTC m=+15.176464868"
Mar 7 01:11:43.044483 systemd[1]: Started cri-containerd-c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16.scope - libcontainer container c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16.
Mar 7 01:11:43.067729 systemd[1]: cri-containerd-c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16.scope: Deactivated successfully.
Mar 7 01:11:43.069718 containerd[1515]: time="2026-03-07T01:11:43.069491046Z" level=info msg="StartContainer for \"c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16\" returns successfully"
Mar 7 01:11:43.124547 containerd[1515]: time="2026-03-07T01:11:43.124480593Z" level=info msg="shim disconnected" id=c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16 namespace=k8s.io
Mar 7 01:11:43.124547 containerd[1515]: time="2026-03-07T01:11:43.124547755Z" level=warning msg="cleaning up after shim disconnected" id=c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16 namespace=k8s.io
Mar 7 01:11:43.124547 containerd[1515]: time="2026-03-07T01:11:43.124555684Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:11:43.136978 containerd[1515]: time="2026-03-07T01:11:43.136930339Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:11:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 01:11:44.004719 containerd[1515]: time="2026-03-07T01:11:44.004619010Z" level=info msg="CreateContainer within sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 7 01:11:44.022925 containerd[1515]: time="2026-03-07T01:11:44.022901444Z" level=info msg="CreateContainer within sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\""
Mar 7 01:11:44.028598 containerd[1515]: time="2026-03-07T01:11:44.028418996Z" level=info msg="StartContainer for \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\""
Mar 7 01:11:44.069372 systemd[1]: Started cri-containerd-116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87.scope - libcontainer container 116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87.
Mar 7 01:11:44.093870 containerd[1515]: time="2026-03-07T01:11:44.093837712Z" level=info msg="StartContainer for \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\" returns successfully"
Mar 7 01:11:44.246389 kubelet[2570]: I0307 01:11:44.246353    2570 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 7 01:11:44.286215 systemd[1]: Created slice kubepods-burstable-podf0398289_b4b2_4909_88bb_a2e54d729405.slice - libcontainer container kubepods-burstable-podf0398289_b4b2_4909_88bb_a2e54d729405.slice.
Mar 7 01:11:44.295051 systemd[1]: Created slice kubepods-burstable-pod9edc8c1d_c270_445a_b059_ee04fe80b079.slice - libcontainer container kubepods-burstable-pod9edc8c1d_c270_445a_b059_ee04fe80b079.slice.
Mar 7 01:11:44.324765 kubelet[2570]: I0307 01:11:44.324662    2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqnl4\" (UniqueName: \"kubernetes.io/projected/f0398289-b4b2-4909-88bb-a2e54d729405-kube-api-access-kqnl4\") pod \"coredns-66bc5c9577-4jtph\" (UID: \"f0398289-b4b2-4909-88bb-a2e54d729405\") " pod="kube-system/coredns-66bc5c9577-4jtph"
Mar 7 01:11:44.324765 kubelet[2570]: I0307 01:11:44.324691    2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fvjq\" (UniqueName: \"kubernetes.io/projected/9edc8c1d-c270-445a-b059-ee04fe80b079-kube-api-access-7fvjq\") pod \"coredns-66bc5c9577-qpplv\" (UID: \"9edc8c1d-c270-445a-b059-ee04fe80b079\") " pod="kube-system/coredns-66bc5c9577-qpplv"
Mar 7 01:11:44.324765 kubelet[2570]: I0307 01:11:44.324704    2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9edc8c1d-c270-445a-b059-ee04fe80b079-config-volume\") pod \"coredns-66bc5c9577-qpplv\" (UID: \"9edc8c1d-c270-445a-b059-ee04fe80b079\") " pod="kube-system/coredns-66bc5c9577-qpplv"
Mar 7 01:11:44.324765 kubelet[2570]: I0307 01:11:44.324720    2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0398289-b4b2-4909-88bb-a2e54d729405-config-volume\") pod \"coredns-66bc5c9577-4jtph\" (UID: \"f0398289-b4b2-4909-88bb-a2e54d729405\") " pod="kube-system/coredns-66bc5c9577-4jtph"
Mar 7 01:11:44.595034 containerd[1515]: time="2026-03-07T01:11:44.594919572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4jtph,Uid:f0398289-b4b2-4909-88bb-a2e54d729405,Namespace:kube-system,Attempt:0,}"
Mar 7 01:11:44.600524 containerd[1515]: time="2026-03-07T01:11:44.600488618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qpplv,Uid:9edc8c1d-c270-445a-b059-ee04fe80b079,Namespace:kube-system,Attempt:0,}"
Mar 7 01:11:44.710102 systemd[1]: run-containerd-runc-k8s.io-116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87-runc.e4tXAq.mount: Deactivated successfully.
Mar 7 01:11:45.041284 kubelet[2570]: I0307 01:11:45.041221    2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wtbw4" podStartSLOduration=4.924899681 podStartE2EDuration="10.041209638s" podCreationTimestamp="2026-03-07 01:11:35 +0000 UTC" firstStartedPulling="2026-03-07 01:11:35.577886863 +0000 UTC m=+7.730060207" lastFinishedPulling="2026-03-07 01:11:40.69419683 +0000 UTC m=+12.846370164" observedRunningTime="2026-03-07 01:11:45.040543908 +0000 UTC m=+17.192717242" watchObservedRunningTime="2026-03-07 01:11:45.041209638 +0000 UTC m=+17.193382982"
Mar 7 01:11:46.210569 systemd-networkd[1419]: cilium_host: Link UP
Mar 7 01:11:46.213438 systemd-networkd[1419]: cilium_net: Link UP
Mar 7 01:11:46.214790 systemd-networkd[1419]: cilium_net: Gained carrier
Mar 7 01:11:46.217546 systemd-networkd[1419]: cilium_host: Gained carrier
Mar 7 01:11:46.309347 systemd-networkd[1419]: cilium_vxlan: Link UP
Mar 7 01:11:46.309355 systemd-networkd[1419]: cilium_vxlan: Gained carrier
Mar 7 01:11:46.420400 systemd-networkd[1419]: cilium_host: Gained IPv6LL
Mar 7 01:11:46.479295 kernel: NET: Registered PF_ALG protocol family
Mar 7 01:11:46.995185 systemd-networkd[1419]: lxc_health: Link UP
Mar 7 01:11:47.007558 systemd-networkd[1419]: lxc_health: Gained carrier
Mar 7 01:11:47.069437 systemd-networkd[1419]: cilium_net: Gained IPv6LL
Mar 7 01:11:47.137573 systemd-networkd[1419]: lxccdb39d6dcb01: Link UP
Mar 7 01:11:47.145348 kernel: eth0: renamed from tmpbdab5
Mar 7 01:11:47.148987 systemd-networkd[1419]: lxccdb39d6dcb01: Gained carrier
Mar 7 01:11:47.160061 systemd-networkd[1419]: lxceac7f4111b78: Link UP
Mar 7 01:11:47.165447 kernel: eth0: renamed from tmpa40ad
Mar 7 01:11:47.173757 systemd-networkd[1419]: lxceac7f4111b78: Gained carrier
Mar 7 01:11:48.220443 systemd-networkd[1419]: lxceac7f4111b78: Gained IPv6LL
Mar 7 01:11:48.286845 systemd-networkd[1419]: lxc_health: Gained IPv6LL
Mar 7 01:11:48.287158 systemd-networkd[1419]: cilium_vxlan: Gained IPv6LL
Mar 7 01:11:48.797133 systemd-networkd[1419]: lxccdb39d6dcb01: Gained IPv6LL
Mar 7 01:11:49.644709 containerd[1515]: time="2026-03-07T01:11:49.644637998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:11:49.645057 containerd[1515]: time="2026-03-07T01:11:49.644728841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:11:49.645057 containerd[1515]: time="2026-03-07T01:11:49.644755998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:49.645057 containerd[1515]: time="2026-03-07T01:11:49.644845741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:49.673515 systemd[1]: Started cri-containerd-a40ad6ae99f165067d52f92b61eb68a280fab950578b3ce7751cb55d0d0e16f4.scope - libcontainer container a40ad6ae99f165067d52f92b61eb68a280fab950578b3ce7751cb55d0d0e16f4.
Mar 7 01:11:49.717770 containerd[1515]: time="2026-03-07T01:11:49.717435441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:11:49.717770 containerd[1515]: time="2026-03-07T01:11:49.717473838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:11:49.717770 containerd[1515]: time="2026-03-07T01:11:49.717485307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:49.717770 containerd[1515]: time="2026-03-07T01:11:49.717550751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:49.743549 containerd[1515]: time="2026-03-07T01:11:49.743361221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qpplv,Uid:9edc8c1d-c270-445a-b059-ee04fe80b079,Namespace:kube-system,Attempt:0,} returns sandbox id \"a40ad6ae99f165067d52f92b61eb68a280fab950578b3ce7751cb55d0d0e16f4\""
Mar 7 01:11:49.746383 systemd[1]: Started cri-containerd-bdab562aa30c6f039e6826360285e99bc86da80b8497bc3b90cc2eeace9321cc.scope - libcontainer container bdab562aa30c6f039e6826360285e99bc86da80b8497bc3b90cc2eeace9321cc.
Mar 7 01:11:49.749956 containerd[1515]: time="2026-03-07T01:11:49.749938742Z" level=info msg="CreateContainer within sandbox \"a40ad6ae99f165067d52f92b61eb68a280fab950578b3ce7751cb55d0d0e16f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 01:11:49.765291 containerd[1515]: time="2026-03-07T01:11:49.764745748Z" level=info msg="CreateContainer within sandbox \"a40ad6ae99f165067d52f92b61eb68a280fab950578b3ce7751cb55d0d0e16f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2bfe33453428f8ef376e1765dfe694df998bd2ac8429d81802cdf30ebaac3107\""
Mar 7 01:11:49.767519 containerd[1515]: time="2026-03-07T01:11:49.766442586Z" level=info msg="StartContainer for \"2bfe33453428f8ef376e1765dfe694df998bd2ac8429d81802cdf30ebaac3107\""
Mar 7 01:11:49.786369 systemd[1]: Started cri-containerd-2bfe33453428f8ef376e1765dfe694df998bd2ac8429d81802cdf30ebaac3107.scope - libcontainer container 2bfe33453428f8ef376e1765dfe694df998bd2ac8429d81802cdf30ebaac3107.
Mar 7 01:11:49.808484 containerd[1515]: time="2026-03-07T01:11:49.808433708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4jtph,Uid:f0398289-b4b2-4909-88bb-a2e54d729405,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdab562aa30c6f039e6826360285e99bc86da80b8497bc3b90cc2eeace9321cc\""
Mar 7 01:11:49.815613 containerd[1515]: time="2026-03-07T01:11:49.815527656Z" level=info msg="CreateContainer within sandbox \"bdab562aa30c6f039e6826360285e99bc86da80b8497bc3b90cc2eeace9321cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 01:11:49.824250 containerd[1515]: time="2026-03-07T01:11:49.824109761Z" level=info msg="StartContainer for \"2bfe33453428f8ef376e1765dfe694df998bd2ac8429d81802cdf30ebaac3107\" returns successfully"
Mar 7 01:11:49.832571 containerd[1515]: time="2026-03-07T01:11:49.832536399Z" level=info msg="CreateContainer within sandbox \"bdab562aa30c6f039e6826360285e99bc86da80b8497bc3b90cc2eeace9321cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c72300d4b481b388e71a4bea00ab444ebe64eedbb2e0d36d48f479f231e2ec9\""
Mar 7 01:11:49.833294 containerd[1515]: time="2026-03-07T01:11:49.832931526Z" level=info msg="StartContainer for \"4c72300d4b481b388e71a4bea00ab444ebe64eedbb2e0d36d48f479f231e2ec9\""
Mar 7 01:11:49.862388 systemd[1]: Started cri-containerd-4c72300d4b481b388e71a4bea00ab444ebe64eedbb2e0d36d48f479f231e2ec9.scope - libcontainer container 4c72300d4b481b388e71a4bea00ab444ebe64eedbb2e0d36d48f479f231e2ec9.
Mar 7 01:11:49.882514 containerd[1515]: time="2026-03-07T01:11:49.882477947Z" level=info msg="StartContainer for \"4c72300d4b481b388e71a4bea00ab444ebe64eedbb2e0d36d48f479f231e2ec9\" returns successfully"
Mar 7 01:11:50.033381 kubelet[2570]: I0307 01:11:50.033345    2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qpplv" podStartSLOduration=15.033332686 podStartE2EDuration="15.033332686s" podCreationTimestamp="2026-03-07 01:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:11:50.031954195 +0000 UTC m=+22.184127539" watchObservedRunningTime="2026-03-07 01:11:50.033332686 +0000 UTC m=+22.185506020"
Mar 7 01:11:50.042935 kubelet[2570]: I0307 01:11:50.042881    2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4jtph" podStartSLOduration=15.042868513 podStartE2EDuration="15.042868513s" podCreationTimestamp="2026-03-07 01:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:11:50.042730784 +0000 UTC m=+22.194904118" watchObservedRunningTime="2026-03-07 01:11:50.042868513 +0000 UTC m=+22.195041847"
Mar 7 01:11:54.555233 kubelet[2570]: I0307 01:11:54.554979    2570 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:12:40.920656 systemd[1]: Started sshd@7-135.181.156.177:22-186.96.145.241:60868.service - OpenSSH per-connection server daemon (186.96.145.241:60868).
Mar 7 01:12:41.645401 sshd[3975]: Invalid user guest-2s6ogj from 186.96.145.241 port 60868
Mar 7 01:12:41.817905 sshd[3975]: Connection closed by invalid user guest-2s6ogj 186.96.145.241 port 60868 [preauth]
Mar 7 01:12:41.819980 systemd[1]: sshd@7-135.181.156.177:22-186.96.145.241:60868.service: Deactivated successfully.
Mar 7 01:12:51.150353 systemd[1]: Started sshd@8-135.181.156.177:22-4.153.228.146:44746.service - OpenSSH per-connection server daemon (4.153.228.146:44746).
Mar 7 01:12:51.901257 sshd[3980]: Accepted publickey for core from 4.153.228.146 port 44746 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:12:51.904032 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:51.909381 systemd-logind[1487]: New session 8 of user core.
Mar 7 01:12:51.917444 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 01:12:52.545674 sshd[3980]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:52.551229 systemd[1]: sshd@8-135.181.156.177:22-4.153.228.146:44746.service: Deactivated successfully.
Mar 7 01:12:52.555132 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 01:12:52.557927 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit.
Mar 7 01:12:52.560635 systemd-logind[1487]: Removed session 8.
Mar 7 01:12:57.682661 systemd[1]: Started sshd@9-135.181.156.177:22-4.153.228.146:44760.service - OpenSSH per-connection server daemon (4.153.228.146:44760).
Mar 7 01:12:58.445690 sshd[3994]: Accepted publickey for core from 4.153.228.146 port 44760 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:12:58.448777 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:58.458399 systemd-logind[1487]: New session 9 of user core.
Mar 7 01:12:58.470533 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 01:12:59.043353 sshd[3994]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:59.051257 systemd[1]: sshd@9-135.181.156.177:22-4.153.228.146:44760.service: Deactivated successfully.
Mar 7 01:12:59.055902 systemd[1]: session-9.scope: Deactivated successfully.
Mar 7 01:12:59.057965 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit.
Mar 7 01:12:59.060003 systemd-logind[1487]: Removed session 9.
Mar 7 01:13:04.187469 systemd[1]: Started sshd@10-135.181.156.177:22-4.153.228.146:42758.service - OpenSSH per-connection server daemon (4.153.228.146:42758).
Mar 7 01:13:04.947329 sshd[4008]: Accepted publickey for core from 4.153.228.146 port 42758 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:04.950590 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:04.960018 systemd-logind[1487]: New session 10 of user core.
Mar 7 01:13:04.967533 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:13:05.573707 sshd[4008]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:05.579159 systemd[1]: sshd@10-135.181.156.177:22-4.153.228.146:42758.service: Deactivated successfully.
Mar 7 01:13:05.582900 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:13:05.584407 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:13:05.585936 systemd-logind[1487]: Removed session 10.
Mar 7 01:13:05.710584 systemd[1]: Started sshd@11-135.181.156.177:22-4.153.228.146:42770.service - OpenSSH per-connection server daemon (4.153.228.146:42770).
Mar 7 01:13:06.476798 sshd[4022]: Accepted publickey for core from 4.153.228.146 port 42770 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:06.479672 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:06.487802 systemd-logind[1487]: New session 11 of user core.
Mar 7 01:13:06.492521 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:13:07.109563 sshd[4022]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:07.115973 systemd[1]: sshd@11-135.181.156.177:22-4.153.228.146:42770.service: Deactivated successfully.
Mar 7 01:13:07.120080 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:13:07.121594 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:13:07.123327 systemd-logind[1487]: Removed session 11.
Mar 7 01:13:07.245297 systemd[1]: Started sshd@12-135.181.156.177:22-4.153.228.146:42782.service - OpenSSH per-connection server daemon (4.153.228.146:42782).
Mar 7 01:13:08.007168 sshd[4035]: Accepted publickey for core from 4.153.228.146 port 42782 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:08.010550 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:08.019818 systemd-logind[1487]: New session 12 of user core.
Mar 7 01:13:08.024543 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:13:08.616959 sshd[4035]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:08.622193 systemd[1]: sshd@12-135.181.156.177:22-4.153.228.146:42782.service: Deactivated successfully.
Mar 7 01:13:08.626503 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:13:08.629195 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:13:08.631381 systemd-logind[1487]: Removed session 12.
Mar 7 01:13:13.757720 systemd[1]: Started sshd@13-135.181.156.177:22-4.153.228.146:49666.service - OpenSSH per-connection server daemon (4.153.228.146:49666).
Mar 7 01:13:14.513938 sshd[4048]: Accepted publickey for core from 4.153.228.146 port 49666 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:14.515702 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:14.524640 systemd-logind[1487]: New session 13 of user core.
Mar 7 01:13:14.527397 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:13:15.126043 sshd[4048]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:15.133930 systemd[1]: sshd@13-135.181.156.177:22-4.153.228.146:49666.service: Deactivated successfully.
Mar 7 01:13:15.139190 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:13:15.141331 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:13:15.143523 systemd-logind[1487]: Removed session 13.
Mar 7 01:13:15.267133 systemd[1]: Started sshd@14-135.181.156.177:22-4.153.228.146:49680.service - OpenSSH per-connection server daemon (4.153.228.146:49680).
Mar 7 01:13:16.028342 sshd[4061]: Accepted publickey for core from 4.153.228.146 port 49680 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:16.031463 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:16.040393 systemd-logind[1487]: New session 14 of user core.
Mar 7 01:13:16.045559 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:13:16.639348 sshd[4061]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:16.643872 systemd[1]: sshd@14-135.181.156.177:22-4.153.228.146:49680.service: Deactivated successfully.
Mar 7 01:13:16.648137 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:13:16.652078 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:13:16.653261 systemd-logind[1487]: Removed session 14.
Mar 7 01:13:16.776488 systemd[1]: Started sshd@15-135.181.156.177:22-4.153.228.146:49688.service - OpenSSH per-connection server daemon (4.153.228.146:49688).
Mar 7 01:13:17.509324 sshd[4072]: Accepted publickey for core from 4.153.228.146 port 49688 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:17.511716 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:17.521362 systemd-logind[1487]: New session 15 of user core.
Mar 7 01:13:17.528501 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:13:18.603832 sshd[4072]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:18.607786 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:13:18.607802 systemd[1]: sshd@15-135.181.156.177:22-4.153.228.146:49688.service: Deactivated successfully.
Mar 7 01:13:18.610122 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:13:18.611200 systemd-logind[1487]: Removed session 15.
Mar 7 01:13:18.745203 systemd[1]: Started sshd@16-135.181.156.177:22-4.153.228.146:49696.service - OpenSSH per-connection server daemon (4.153.228.146:49696).
Mar 7 01:13:19.499428 sshd[4089]: Accepted publickey for core from 4.153.228.146 port 49696 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:19.502753 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:19.513078 systemd-logind[1487]: New session 16 of user core.
Mar 7 01:13:19.523597 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:13:20.154838 sshd[4089]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:20.160633 systemd[1]: sshd@16-135.181.156.177:22-4.153.228.146:49696.service: Deactivated successfully.
Mar 7 01:13:20.165171 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:13:20.168015 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:13:20.170383 systemd-logind[1487]: Removed session 16.
Mar 7 01:13:20.291716 systemd[1]: Started sshd@17-135.181.156.177:22-4.153.228.146:52752.service - OpenSSH per-connection server daemon (4.153.228.146:52752).
Mar 7 01:13:21.051315 sshd[4102]: Accepted publickey for core from 4.153.228.146 port 52752 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:21.052958 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:21.060635 systemd-logind[1487]: New session 17 of user core.
Mar 7 01:13:21.068518 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:13:21.657694 sshd[4102]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:21.664641 systemd[1]: sshd@17-135.181.156.177:22-4.153.228.146:52752.service: Deactivated successfully.
Mar 7 01:13:21.668786 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:13:21.671611 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:13:21.674015 systemd-logind[1487]: Removed session 17.
Mar 7 01:13:26.798890 systemd[1]: Started sshd@18-135.181.156.177:22-4.153.228.146:52766.service - OpenSSH per-connection server daemon (4.153.228.146:52766).
Mar 7 01:13:27.556242 sshd[4118]: Accepted publickey for core from 4.153.228.146 port 52766 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:27.557678 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:27.563735 systemd-logind[1487]: New session 18 of user core.
Mar 7 01:13:27.569563 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:13:28.141382 sshd[4118]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:28.147463 systemd[1]: sshd@18-135.181.156.177:22-4.153.228.146:52766.service: Deactivated successfully.
Mar 7 01:13:28.150808 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:13:28.151771 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:13:28.152993 systemd-logind[1487]: Removed session 18.
Mar 7 01:13:33.278256 systemd[1]: Started sshd@19-135.181.156.177:22-4.153.228.146:39106.service - OpenSSH per-connection server daemon (4.153.228.146:39106).
Mar 7 01:13:34.034235 sshd[4133]: Accepted publickey for core from 4.153.228.146 port 39106 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:34.037408 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:34.047019 systemd-logind[1487]: New session 19 of user core.
Mar 7 01:13:34.050853 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:13:34.652924 sshd[4133]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:34.660807 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:13:34.662780 systemd[1]: sshd@19-135.181.156.177:22-4.153.228.146:39106.service: Deactivated successfully.
Mar 7 01:13:34.667129 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:13:34.669093 systemd-logind[1487]: Removed session 19.
Mar 7 01:13:34.788001 systemd[1]: Started sshd@20-135.181.156.177:22-4.153.228.146:39114.service - OpenSSH per-connection server daemon (4.153.228.146:39114).
Mar 7 01:13:35.541735 sshd[4146]: Accepted publickey for core from 4.153.228.146 port 39114 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:35.544543 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:35.552506 systemd-logind[1487]: New session 20 of user core.
Mar 7 01:13:35.558472 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:13:37.283975 containerd[1515]: time="2026-03-07T01:13:37.283936079Z" level=info msg="StopContainer for \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\" with timeout 30 (s)"
Mar 7 01:13:37.284694 containerd[1515]: time="2026-03-07T01:13:37.284609927Z" level=info msg="Stop container \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\" with signal terminated"
Mar 7 01:13:37.298437 containerd[1515]: time="2026-03-07T01:13:37.298392417Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:13:37.307371 containerd[1515]: time="2026-03-07T01:13:37.307347326Z" level=info msg="StopContainer for \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\" with timeout 2 (s)"
Mar 7 01:13:37.307758 containerd[1515]: time="2026-03-07T01:13:37.307744316Z" level=info msg="Stop container \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\" with signal terminated"
Mar 7 01:13:37.315450 systemd[1]: cri-containerd-b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa.scope: Deactivated successfully.
Mar 7 01:13:37.315896 systemd-networkd[1419]: lxc_health: Link DOWN
Mar 7 01:13:37.315900 systemd-networkd[1419]: lxc_health: Lost carrier
Mar 7 01:13:37.337860 systemd[1]: cri-containerd-116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87.scope: Deactivated successfully.
Mar 7 01:13:37.338068 systemd[1]: cri-containerd-116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87.scope: Consumed 5.298s CPU time.
Mar 7 01:13:37.349836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa-rootfs.mount: Deactivated successfully.
Mar 7 01:13:37.362103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87-rootfs.mount: Deactivated successfully.
Mar 7 01:13:37.367198 containerd[1515]: time="2026-03-07T01:13:37.367157663Z" level=info msg="shim disconnected" id=b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa namespace=k8s.io
Mar 7 01:13:37.367587 containerd[1515]: time="2026-03-07T01:13:37.367456152Z" level=warning msg="cleaning up after shim disconnected" id=b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa namespace=k8s.io
Mar 7 01:13:37.367587 containerd[1515]: time="2026-03-07T01:13:37.367467712Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:13:37.367801 containerd[1515]: time="2026-03-07T01:13:37.367441082Z" level=info msg="shim disconnected" id=116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87 namespace=k8s.io
Mar 7 01:13:37.367874 containerd[1515]: time="2026-03-07T01:13:37.367863692Z" level=warning msg="cleaning up after shim disconnected" id=116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87 namespace=k8s.io
Mar 7 01:13:37.367938 containerd[1515]: time="2026-03-07T01:13:37.367899162Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:13:37.382324 containerd[1515]: time="2026-03-07T01:13:37.382300210Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:13:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 01:13:37.384795 containerd[1515]: time="2026-03-07T01:13:37.384753115Z" level=info msg="StopContainer for \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\" returns successfully"
Mar 7 01:13:37.385616 containerd[1515]: time="2026-03-07T01:13:37.385587242Z" level=info msg="StopPodSandbox for \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\""
Mar 7 01:13:37.385674 containerd[1515]: time="2026-03-07T01:13:37.385618252Z" level=info msg="Container to stop \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:13:37.385989 containerd[1515]: time="2026-03-07T01:13:37.385786972Z" level=info msg="StopContainer for \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\" returns successfully"
Mar 7 01:13:37.387252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f-shm.mount: Deactivated successfully.
Mar 7 01:13:37.387661 containerd[1515]: time="2026-03-07T01:13:37.387643358Z" level=info msg="StopPodSandbox for \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\""
Mar 7 01:13:37.387691 containerd[1515]: time="2026-03-07T01:13:37.387669418Z" level=info msg="Container to stop \"5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:13:37.387691 containerd[1515]: time="2026-03-07T01:13:37.387679148Z" level=info msg="Container to stop \"adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:13:37.387691 containerd[1515]: time="2026-03-07T01:13:37.387686798Z" level=info msg="Container to stop \"8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:13:37.387748 containerd[1515]: time="2026-03-07T01:13:37.387695478Z" level=info msg="Container to stop \"c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:13:37.387748 containerd[1515]: time="2026-03-07T01:13:37.387703098Z" level=info msg="Container to stop \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:13:37.390814 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433-shm.mount: Deactivated successfully.
Mar 7 01:13:37.395220 systemd[1]: cri-containerd-7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433.scope: Deactivated successfully.
Mar 7 01:13:37.398104 systemd[1]: cri-containerd-241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f.scope: Deactivated successfully.
Mar 7 01:13:37.425911 containerd[1515]: time="2026-03-07T01:13:37.425852293Z" level=info msg="shim disconnected" id=241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f namespace=k8s.io
Mar 7 01:13:37.425911 containerd[1515]: time="2026-03-07T01:13:37.425898113Z" level=warning msg="cleaning up after shim disconnected" id=241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f namespace=k8s.io
Mar 7 01:13:37.425911 containerd[1515]: time="2026-03-07T01:13:37.425904933Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:13:37.428248 containerd[1515]: time="2026-03-07T01:13:37.428020048Z" level=info msg="shim disconnected" id=7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433 namespace=k8s.io
Mar 7 01:13:37.428248 containerd[1515]: time="2026-03-07T01:13:37.428045508Z" level=warning msg="cleaning up after shim disconnected" id=7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433 namespace=k8s.io
Mar 7 01:13:37.428248 containerd[1515]: time="2026-03-07T01:13:37.428051878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:13:37.440173 containerd[1515]: time="2026-03-07T01:13:37.440141581Z" level=info msg="TearDown network for sandbox \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\" successfully"
Mar 7 01:13:37.440173 containerd[1515]: time="2026-03-07T01:13:37.440164971Z" level=info msg="StopPodSandbox for \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\" returns successfully"
Mar 7 01:13:37.442895 containerd[1515]: time="2026-03-07T01:13:37.442751385Z" level=info msg="TearDown network for sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" successfully"
Mar 7 01:13:37.442895 containerd[1515]: time="2026-03-07T01:13:37.442773035Z" level=info msg="StopPodSandbox for \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" returns successfully"
Mar 7 01:13:37.498921 kubelet[2570]: I0307 01:13:37.498871 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:13:37.498921 kubelet[2570]: I0307 01:13:37.498914 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-cilium-cgroup\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499365 kubelet[2570]: I0307 01:13:37.498947 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-lib-modules\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499365 kubelet[2570]: I0307 01:13:37.498958 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-bpf-maps\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499365 kubelet[2570]: I0307 01:13:37.498970 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-xtables-lock\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499365 kubelet[2570]: I0307 01:13:37.498994 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3711277-b527-45ae-a3f9-4ba9185cbf09-clustermesh-secrets\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499365 kubelet[2570]: I0307 01:13:37.499005 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3711277-b527-45ae-a3f9-4ba9185cbf09-cilium-config-path\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499365 kubelet[2570]: I0307 01:13:37.499014 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-cilium-run\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499511 kubelet[2570]: I0307 01:13:37.499026 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3711277-b527-45ae-a3f9-4ba9185cbf09-hubble-tls\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499511 kubelet[2570]: I0307 01:13:37.499035 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxbc7\" (UniqueName: \"kubernetes.io/projected/b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0-kube-api-access-vxbc7\") pod \"b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0\" (UID: \"b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0\") "
Mar 7 01:13:37.499511 kubelet[2570]: I0307 01:13:37.499045 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-host-proc-sys-kernel\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499511 kubelet[2570]: I0307 01:13:37.499055 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-cni-path\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499511 kubelet[2570]: I0307 01:13:37.499065 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6k8l\" (UniqueName: \"kubernetes.io/projected/e3711277-b527-45ae-a3f9-4ba9185cbf09-kube-api-access-x6k8l\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499511 kubelet[2570]: I0307 01:13:37.499077 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0-cilium-config-path\") pod \"b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0\" (UID: \"b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0\") "
Mar 7 01:13:37.499688 kubelet[2570]: I0307 01:13:37.499087 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-etc-cni-netd\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499688 kubelet[2570]: I0307 01:13:37.499096 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-hostproc\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499688 kubelet[2570]: I0307 01:13:37.499105 2570 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-host-proc-sys-net\") pod \"e3711277-b527-45ae-a3f9-4ba9185cbf09\" (UID: \"e3711277-b527-45ae-a3f9-4ba9185cbf09\") "
Mar 7 01:13:37.499688 kubelet[2570]: I0307 01:13:37.499132 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:13:37.499688 kubelet[2570]: I0307 01:13:37.499145 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:13:37.499789 kubelet[2570]: I0307 01:13:37.499156 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:13:37.499789 kubelet[2570]: I0307 01:13:37.499167 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:13:37.499789 kubelet[2570]: I0307 01:13:37.499474 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:13:37.501890 kubelet[2570]: I0307 01:13:37.501848 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3711277-b527-45ae-a3f9-4ba9185cbf09-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 01:13:37.501890 kubelet[2570]: I0307 01:13:37.501879 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:13:37.502204 kubelet[2570]: I0307 01:13:37.502084 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-cni-path" (OuterVolumeSpecName: "cni-path") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:13:37.504379 kubelet[2570]: I0307 01:13:37.503613 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:13:37.504379 kubelet[2570]: I0307 01:13:37.503640 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-hostproc" (OuterVolumeSpecName: "hostproc") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:13:37.504379 kubelet[2570]: I0307 01:13:37.503698 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3711277-b527-45ae-a3f9-4ba9185cbf09-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 7 01:13:37.506369 kubelet[2570]: I0307 01:13:37.506247 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3711277-b527-45ae-a3f9-4ba9185cbf09-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:13:37.506711 kubelet[2570]: I0307 01:13:37.506692 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0-kube-api-access-vxbc7" (OuterVolumeSpecName: "kube-api-access-vxbc7") pod "b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0" (UID: "b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0"). InnerVolumeSpecName "kube-api-access-vxbc7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:13:37.507683 kubelet[2570]: I0307 01:13:37.507666 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0" (UID: "b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 01:13:37.508253 kubelet[2570]: I0307 01:13:37.508211 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3711277-b527-45ae-a3f9-4ba9185cbf09-kube-api-access-x6k8l" (OuterVolumeSpecName: "kube-api-access-x6k8l") pod "e3711277-b527-45ae-a3f9-4ba9185cbf09" (UID: "e3711277-b527-45ae-a3f9-4ba9185cbf09"). InnerVolumeSpecName "kube-api-access-x6k8l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:13:37.599962 kubelet[2570]: I0307 01:13:37.599781 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-cilium-cgroup\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.599962 kubelet[2570]: I0307 01:13:37.599828 2570 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-lib-modules\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.599962 kubelet[2570]: I0307 01:13:37.599842 2570 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-bpf-maps\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.599962 kubelet[2570]: I0307 01:13:37.599856 2570 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-xtables-lock\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.599962 kubelet[2570]: I0307 01:13:37.599869 2570 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3711277-b527-45ae-a3f9-4ba9185cbf09-clustermesh-secrets\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.599962 kubelet[2570]: I0307 01:13:37.599885 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3711277-b527-45ae-a3f9-4ba9185cbf09-cilium-config-path\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.599962 kubelet[2570]: I0307 01:13:37.599899 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-cilium-run\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.599962 kubelet[2570]: I0307 01:13:37.599913 2570 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3711277-b527-45ae-a3f9-4ba9185cbf09-hubble-tls\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.600475 kubelet[2570]: I0307 01:13:37.599926 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vxbc7\" (UniqueName: \"kubernetes.io/projected/b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0-kube-api-access-vxbc7\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.600475 kubelet[2570]: I0307 01:13:37.599942 2570 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.600475 kubelet[2570]: I0307 01:13:37.599957 2570 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-cni-path\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.600475 kubelet[2570]: I0307 01:13:37.599974 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x6k8l\" (UniqueName: \"kubernetes.io/projected/e3711277-b527-45ae-a3f9-4ba9185cbf09-kube-api-access-x6k8l\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.600475 kubelet[2570]: I0307 01:13:37.599989 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0-cilium-config-path\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.600475 kubelet[2570]: I0307 01:13:37.600004 2570 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-etc-cni-netd\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.600475 kubelet[2570]: I0307 01:13:37.600018 2570 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-hostproc\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.600475 kubelet[2570]: I0307 01:13:37.600031 2570 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3711277-b527-45ae-a3f9-4ba9185cbf09-host-proc-sys-net\") on node \"ci-4081-3-6-n-5ad0d165ec\" DevicePath \"\""
Mar 7 01:13:37.944824 systemd[1]: Removed slice kubepods-besteffort-podb3a08d53_44b4_4b6b_b9ce_4d01e1ec7fd0.slice - libcontainer container kubepods-besteffort-podb3a08d53_44b4_4b6b_b9ce_4d01e1ec7fd0.slice.
Mar 7 01:13:37.950593 systemd[1]: Removed slice kubepods-burstable-pode3711277_b527_45ae_a3f9_4ba9185cbf09.slice - libcontainer container kubepods-burstable-pode3711277_b527_45ae_a3f9_4ba9185cbf09.slice.
Mar 7 01:13:37.950752 systemd[1]: kubepods-burstable-pode3711277_b527_45ae_a3f9_4ba9185cbf09.slice: Consumed 5.375s CPU time.
Mar 7 01:13:38.010404 kubelet[2570]: E0307 01:13:38.010254 2570 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:13:38.276099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f-rootfs.mount: Deactivated successfully.
Mar 7 01:13:38.276915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433-rootfs.mount: Deactivated successfully.
Mar 7 01:13:38.277200 systemd[1]: var-lib-kubelet-pods-b3a08d53\x2d44b4\x2d4b6b\x2db9ce\x2d4d01e1ec7fd0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvxbc7.mount: Deactivated successfully.
Mar 7 01:13:38.277421 systemd[1]: var-lib-kubelet-pods-e3711277\x2db527\x2d45ae\x2da3f9\x2d4ba9185cbf09-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx6k8l.mount: Deactivated successfully.
Mar 7 01:13:38.277563 systemd[1]: var-lib-kubelet-pods-e3711277\x2db527\x2d45ae\x2da3f9\x2d4ba9185cbf09-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 7 01:13:38.277702 systemd[1]: var-lib-kubelet-pods-e3711277\x2db527\x2d45ae\x2da3f9\x2d4ba9185cbf09-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 7 01:13:38.299311 kubelet[2570]: I0307 01:13:38.299201 2570 scope.go:117] "RemoveContainer" containerID="116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87"
Mar 7 01:13:38.305386 containerd[1515]: time="2026-03-07T01:13:38.304775046Z" level=info msg="RemoveContainer for \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\""
Mar 7 01:13:38.319434 containerd[1515]: time="2026-03-07T01:13:38.318510762Z" level=info msg="RemoveContainer for \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\" returns successfully"
Mar 7 01:13:38.319632 kubelet[2570]: I0307 01:13:38.318969 2570 scope.go:117] "RemoveContainer" containerID="c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16"
Mar 7 01:13:38.321558 containerd[1515]: time="2026-03-07T01:13:38.321320666Z" level=info msg="RemoveContainer for \"c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16\""
Mar 7 01:13:38.340867 containerd[1515]: time="2026-03-07T01:13:38.340101330Z" level=info msg="RemoveContainer for \"c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16\" returns successfully"
Mar 7 01:13:38.344637 kubelet[2570]: I0307 01:13:38.343902 2570 scope.go:117] "RemoveContainer" containerID="8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3"
Mar 7 01:13:38.352036 containerd[1515]: time="2026-03-07T01:13:38.352013861Z" level=info msg="RemoveContainer for \"8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3\""
Mar 7 01:13:38.355338 containerd[1515]: time="2026-03-07T01:13:38.355313143Z" level=info msg="RemoveContainer for \"8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3\" returns successfully"
Mar 7 01:13:38.355575 kubelet[2570]: I0307 01:13:38.355514 2570 scope.go:117] "RemoveContainer" containerID="adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5"
Mar 7 01:13:38.357288 containerd[1515]: time="2026-03-07T01:13:38.357102689Z" level=info msg="RemoveContainer for \"adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5\""
Mar 7 01:13:38.361695 containerd[1515]: time="2026-03-07T01:13:38.361675787Z" level=info msg="RemoveContainer for \"adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5\" returns successfully"
Mar 7 01:13:38.362484 kubelet[2570]: I0307 01:13:38.362447 2570 scope.go:117] "RemoveContainer" containerID="5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d"
Mar 7 01:13:38.364815 containerd[1515]: time="2026-03-07T01:13:38.364709210Z" level=info msg="RemoveContainer for \"5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d\""
Mar 7 01:13:38.368259 containerd[1515]: time="2026-03-07T01:13:38.368191951Z" level=info msg="RemoveContainer for \"5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d\" returns successfully"
Mar 7 01:13:38.368389 kubelet[2570]: I0307 01:13:38.368340 2570 scope.go:117] "RemoveContainer" containerID="116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87"
Mar 7 01:13:38.368542 containerd[1515]: time="2026-03-07T01:13:38.368502471Z" level=error msg="ContainerStatus for \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\": not found"
Mar 7 01:13:38.368622 kubelet[2570]: E0307 01:13:38.368603 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\": not found" containerID="116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87"
Mar 7 01:13:38.368677 kubelet[2570]: I0307 01:13:38.368625 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87"} err="failed to get container status \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\": rpc error: code = NotFound desc = an error occurred when try to find container \"116319793b041e16eff25da112bd0a2841cc5782f3e5c13a956b04284bdf2d87\": not found"
Mar 7 01:13:38.368677 kubelet[2570]: I0307 01:13:38.368654 2570 scope.go:117] "RemoveContainer" containerID="c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16"
Mar 7 01:13:38.368856 containerd[1515]: time="2026-03-07T01:13:38.368810780Z" level=error msg="ContainerStatus for \"c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16\": not found"
Mar 7 01:13:38.368903 kubelet[2570]: E0307 01:13:38.368881 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16\": not found" containerID="c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16"
Mar 7 01:13:38.368955 kubelet[2570]: I0307 01:13:38.368942 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16"} err="failed to get container status \"c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16\": rpc error: code = NotFound desc = an error occurred when try to find container \"c779664fc6882feff477fb1b113955d86c32036aaee86f32284bd7ed3775fc16\": not found"
Mar 7 01:13:38.368955 kubelet[2570]: I0307 01:13:38.368953 2570 scope.go:117] "RemoveContainer" containerID="8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3"
Mar 7 01:13:38.369080 containerd[1515]: time="2026-03-07T01:13:38.369054090Z" level=error msg="ContainerStatus for \"8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3\": not found"
Mar 7 01:13:38.369154 kubelet[2570]: E0307 01:13:38.369130 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3\": not found" containerID="8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3"
Mar 7 01:13:38.369154 kubelet[2570]: I0307 01:13:38.369145 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3"} err="failed to get container status \"8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d08ff4f45c901d1ddd366c632b2ea5f18cb774337bc4e693f745b3079fc06b3\": not found"
Mar 7 01:13:38.369212 kubelet[2570]: I0307 01:13:38.369157 2570 scope.go:117] "RemoveContainer"
containerID="adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5" Mar 7 01:13:38.369382 containerd[1515]: time="2026-03-07T01:13:38.369347879Z" level=error msg="ContainerStatus for \"adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5\": not found" Mar 7 01:13:38.369437 kubelet[2570]: E0307 01:13:38.369416 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5\": not found" containerID="adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5" Mar 7 01:13:38.369437 kubelet[2570]: I0307 01:13:38.369432 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5"} err="failed to get container status \"adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5\": rpc error: code = NotFound desc = an error occurred when try to find container \"adbee2662b90845f245affb107894cabce002172a5ae6080a0fe7bc0ac4f1ef5\": not found" Mar 7 01:13:38.369476 kubelet[2570]: I0307 01:13:38.369441 2570 scope.go:117] "RemoveContainer" containerID="5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d" Mar 7 01:13:38.369957 containerd[1515]: time="2026-03-07T01:13:38.369701478Z" level=error msg="ContainerStatus for \"5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d\": not found" Mar 7 01:13:38.371347 kubelet[2570]: E0307 01:13:38.371326 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d\": not found" containerID="5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d" Mar 7 01:13:38.371347 kubelet[2570]: I0307 01:13:38.371345 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d"} err="failed to get container status \"5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5bb5bcafceca9df59d314f5f9c5ed4fbea302c66aa1cea760648e797d8a0fd9d\": not found" Mar 7 01:13:38.371425 kubelet[2570]: I0307 01:13:38.371354 2570 scope.go:117] "RemoveContainer" containerID="b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa" Mar 7 01:13:38.375479 containerd[1515]: time="2026-03-07T01:13:38.375454604Z" level=info msg="RemoveContainer for \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\"" Mar 7 01:13:38.378380 containerd[1515]: time="2026-03-07T01:13:38.378354047Z" level=info msg="RemoveContainer for \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\" returns successfully" Mar 7 01:13:38.378471 kubelet[2570]: I0307 01:13:38.378452 2570 scope.go:117] "RemoveContainer" containerID="b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa" Mar 7 01:13:38.378620 containerd[1515]: time="2026-03-07T01:13:38.378588966Z" level=error msg="ContainerStatus for \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\": not found" Mar 7 01:13:38.378762 kubelet[2570]: E0307 01:13:38.378738 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\": not found" containerID="b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa" Mar 7 01:13:38.378762 kubelet[2570]: I0307 01:13:38.378756 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa"} err="failed to get container status \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7d8b04c7bd3b27e616c35be144a35df55671a3b5006a85e8a93a6d57057bbaa\": not found" Mar 7 01:13:39.327918 sshd[4146]: pam_unix(sshd:session): session closed for user core Mar 7 01:13:39.335774 systemd[1]: sshd@20-135.181.156.177:22-4.153.228.146:39114.service: Deactivated successfully. Mar 7 01:13:39.336498 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit. Mar 7 01:13:39.341037 systemd[1]: session-20.scope: Deactivated successfully. Mar 7 01:13:39.343196 systemd-logind[1487]: Removed session 20. Mar 7 01:13:39.466761 systemd[1]: Started sshd@21-135.181.156.177:22-4.153.228.146:40032.service - OpenSSH per-connection server daemon (4.153.228.146:40032). 
Mar 7 01:13:39.931567 kubelet[2570]: I0307 01:13:39.931509 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0" path="/var/lib/kubelet/pods/b3a08d53-44b4-4b6b-b9ce-4d01e1ec7fd0/volumes"
Mar 7 01:13:39.932721 kubelet[2570]: I0307 01:13:39.932622 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3711277-b527-45ae-a3f9-4ba9185cbf09" path="/var/lib/kubelet/pods/e3711277-b527-45ae-a3f9-4ba9185cbf09/volumes"
Mar 7 01:13:40.211030 sshd[4312]: Accepted publickey for core from 4.153.228.146 port 40032 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:40.214035 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:40.222412 systemd-logind[1487]: New session 21 of user core.
Mar 7 01:13:40.229519 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:13:40.511668 kubelet[2570]: I0307 01:13:40.511212 2570 setters.go:543] "Node became not ready" node="ci-4081-3-6-n-5ad0d165ec" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T01:13:40Z","lastTransitionTime":"2026-03-07T01:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 7 01:13:41.035410 systemd[1]: Created slice kubepods-burstable-pod62f552f8_5e66_4e1b_ad43_437f5e69f695.slice - libcontainer container kubepods-burstable-pod62f552f8_5e66_4e1b_ad43_437f5e69f695.slice.
Mar 7 01:13:41.121061 kubelet[2570]: I0307 01:13:41.121016 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/62f552f8-5e66-4e1b-ad43-437f5e69f695-cilium-ipsec-secrets\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121061 kubelet[2570]: I0307 01:13:41.121053 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62f552f8-5e66-4e1b-ad43-437f5e69f695-xtables-lock\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121061 kubelet[2570]: I0307 01:13:41.121065 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62f552f8-5e66-4e1b-ad43-437f5e69f695-cilium-run\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121061 kubelet[2570]: I0307 01:13:41.121075 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62f552f8-5e66-4e1b-ad43-437f5e69f695-cni-path\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121061 kubelet[2570]: I0307 01:13:41.121084 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62f552f8-5e66-4e1b-ad43-437f5e69f695-lib-modules\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121567 kubelet[2570]: I0307 01:13:41.121094 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62f552f8-5e66-4e1b-ad43-437f5e69f695-cilium-config-path\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121567 kubelet[2570]: I0307 01:13:41.121103 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62f552f8-5e66-4e1b-ad43-437f5e69f695-bpf-maps\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121567 kubelet[2570]: I0307 01:13:41.121112 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62f552f8-5e66-4e1b-ad43-437f5e69f695-hostproc\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121567 kubelet[2570]: I0307 01:13:41.121120 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62f552f8-5e66-4e1b-ad43-437f5e69f695-clustermesh-secrets\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121567 kubelet[2570]: I0307 01:13:41.121129 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62f552f8-5e66-4e1b-ad43-437f5e69f695-host-proc-sys-net\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121567 kubelet[2570]: I0307 01:13:41.121139 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62f552f8-5e66-4e1b-ad43-437f5e69f695-hubble-tls\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121667 kubelet[2570]: I0307 01:13:41.121148 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62f552f8-5e66-4e1b-ad43-437f5e69f695-etc-cni-netd\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121667 kubelet[2570]: I0307 01:13:41.121159 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62f552f8-5e66-4e1b-ad43-437f5e69f695-host-proc-sys-kernel\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121667 kubelet[2570]: I0307 01:13:41.121169 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6pts\" (UniqueName: \"kubernetes.io/projected/62f552f8-5e66-4e1b-ad43-437f5e69f695-kube-api-access-s6pts\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.121667 kubelet[2570]: I0307 01:13:41.121178 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62f552f8-5e66-4e1b-ad43-437f5e69f695-cilium-cgroup\") pod \"cilium-xfcwh\" (UID: \"62f552f8-5e66-4e1b-ad43-437f5e69f695\") " pod="kube-system/cilium-xfcwh"
Mar 7 01:13:41.193112 sshd[4312]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:41.199538 systemd[1]: sshd@21-135.181.156.177:22-4.153.228.146:40032.service: Deactivated successfully.
Mar 7 01:13:41.203455 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:13:41.207106 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:13:41.209436 systemd-logind[1487]: Removed session 21.
Mar 7 01:13:41.330040 systemd[1]: Started sshd@22-135.181.156.177:22-4.153.228.146:40042.service - OpenSSH per-connection server daemon (4.153.228.146:40042).
Mar 7 01:13:41.341951 containerd[1515]: time="2026-03-07T01:13:41.341488470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xfcwh,Uid:62f552f8-5e66-4e1b-ad43-437f5e69f695,Namespace:kube-system,Attempt:0,}"
Mar 7 01:13:41.379329 containerd[1515]: time="2026-03-07T01:13:41.378770508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:13:41.379329 containerd[1515]: time="2026-03-07T01:13:41.379062707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:13:41.379329 containerd[1515]: time="2026-03-07T01:13:41.379167376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:13:41.380407 containerd[1515]: time="2026-03-07T01:13:41.380134094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:13:41.400399 systemd[1]: Started cri-containerd-a138b982eb68132f466fcd6749a24d4c16783434c306dee950704df38fcd409a.scope - libcontainer container a138b982eb68132f466fcd6749a24d4c16783434c306dee950704df38fcd409a.
Mar 7 01:13:41.419699 containerd[1515]: time="2026-03-07T01:13:41.419619685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xfcwh,Uid:62f552f8-5e66-4e1b-ad43-437f5e69f695,Namespace:kube-system,Attempt:0,} returns sandbox id \"a138b982eb68132f466fcd6749a24d4c16783434c306dee950704df38fcd409a\""
Mar 7 01:13:41.423673 containerd[1515]: time="2026-03-07T01:13:41.423585903Z" level=info msg="CreateContainer within sandbox \"a138b982eb68132f466fcd6749a24d4c16783434c306dee950704df38fcd409a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 7 01:13:41.432935 containerd[1515]: time="2026-03-07T01:13:41.432912296Z" level=info msg="CreateContainer within sandbox \"a138b982eb68132f466fcd6749a24d4c16783434c306dee950704df38fcd409a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"defe293ec5a09fd7a597ed2f72a077accc618cc7d943380cca95e398c8634c84\""
Mar 7 01:13:41.433963 containerd[1515]: time="2026-03-07T01:13:41.433457334Z" level=info msg="StartContainer for \"defe293ec5a09fd7a597ed2f72a077accc618cc7d943380cca95e398c8634c84\""
Mar 7 01:13:41.460397 systemd[1]: Started cri-containerd-defe293ec5a09fd7a597ed2f72a077accc618cc7d943380cca95e398c8634c84.scope - libcontainer container defe293ec5a09fd7a597ed2f72a077accc618cc7d943380cca95e398c8634c84.
Mar 7 01:13:41.480685 containerd[1515]: time="2026-03-07T01:13:41.480617901Z" level=info msg="StartContainer for \"defe293ec5a09fd7a597ed2f72a077accc618cc7d943380cca95e398c8634c84\" returns successfully"
Mar 7 01:13:41.491108 systemd[1]: cri-containerd-defe293ec5a09fd7a597ed2f72a077accc618cc7d943380cca95e398c8634c84.scope: Deactivated successfully.
Mar 7 01:13:41.519989 containerd[1515]: time="2026-03-07T01:13:41.519913614Z" level=info msg="shim disconnected" id=defe293ec5a09fd7a597ed2f72a077accc618cc7d943380cca95e398c8634c84 namespace=k8s.io
Mar 7 01:13:41.519989 containerd[1515]: time="2026-03-07T01:13:41.519971984Z" level=warning msg="cleaning up after shim disconnected" id=defe293ec5a09fd7a597ed2f72a077accc618cc7d943380cca95e398c8634c84 namespace=k8s.io
Mar 7 01:13:41.519989 containerd[1515]: time="2026-03-07T01:13:41.519979254Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:13:42.085332 sshd[4328]: Accepted publickey for core from 4.153.228.146 port 40042 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:42.087496 sshd[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:42.095592 systemd-logind[1487]: New session 22 of user core.
Mar 7 01:13:42.100540 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:13:42.331712 containerd[1515]: time="2026-03-07T01:13:42.331187540Z" level=info msg="CreateContainer within sandbox \"a138b982eb68132f466fcd6749a24d4c16783434c306dee950704df38fcd409a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 7 01:13:42.360708 containerd[1515]: time="2026-03-07T01:13:42.358610152Z" level=info msg="CreateContainer within sandbox \"a138b982eb68132f466fcd6749a24d4c16783434c306dee950704df38fcd409a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"385bf6ff3338ab3c7c00ee3b1b922afea21461b10093322c743cd455db0703c2\""
Mar 7 01:13:42.360708 containerd[1515]: time="2026-03-07T01:13:42.360167587Z" level=info msg="StartContainer for \"385bf6ff3338ab3c7c00ee3b1b922afea21461b10093322c743cd455db0703c2\""
Mar 7 01:13:42.402473 systemd[1]: Started cri-containerd-385bf6ff3338ab3c7c00ee3b1b922afea21461b10093322c743cd455db0703c2.scope - libcontainer container 385bf6ff3338ab3c7c00ee3b1b922afea21461b10093322c743cd455db0703c2.
Mar 7 01:13:42.423517 containerd[1515]: time="2026-03-07T01:13:42.423474186Z" level=info msg="StartContainer for \"385bf6ff3338ab3c7c00ee3b1b922afea21461b10093322c743cd455db0703c2\" returns successfully"
Mar 7 01:13:42.429304 systemd[1]: cri-containerd-385bf6ff3338ab3c7c00ee3b1b922afea21461b10093322c743cd455db0703c2.scope: Deactivated successfully.
Mar 7 01:13:42.456716 containerd[1515]: time="2026-03-07T01:13:42.456656370Z" level=info msg="shim disconnected" id=385bf6ff3338ab3c7c00ee3b1b922afea21461b10093322c743cd455db0703c2 namespace=k8s.io
Mar 7 01:13:42.456716 containerd[1515]: time="2026-03-07T01:13:42.456697800Z" level=warning msg="cleaning up after shim disconnected" id=385bf6ff3338ab3c7c00ee3b1b922afea21461b10093322c743cd455db0703c2 namespace=k8s.io
Mar 7 01:13:42.456716 containerd[1515]: time="2026-03-07T01:13:42.456706210Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:13:42.600149 sshd[4328]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:42.605457 systemd[1]: sshd@22-135.181.156.177:22-4.153.228.146:40042.service: Deactivated successfully.
Mar 7 01:13:42.609622 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 01:13:42.612059 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit.
Mar 7 01:13:42.614934 systemd-logind[1487]: Removed session 22.
Mar 7 01:13:42.735772 systemd[1]: Started sshd@23-135.181.156.177:22-4.153.228.146:40050.service - OpenSSH per-connection server daemon (4.153.228.146:40050).
Mar 7 01:13:43.012475 kubelet[2570]: E0307 01:13:43.012232 2570 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:13:43.240047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-385bf6ff3338ab3c7c00ee3b1b922afea21461b10093322c743cd455db0703c2-rootfs.mount: Deactivated successfully.
Mar 7 01:13:43.338300 containerd[1515]: time="2026-03-07T01:13:43.337352540Z" level=info msg="CreateContainer within sandbox \"a138b982eb68132f466fcd6749a24d4c16783434c306dee950704df38fcd409a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 7 01:13:43.373743 containerd[1515]: time="2026-03-07T01:13:43.372327962Z" level=info msg="CreateContainer within sandbox \"a138b982eb68132f466fcd6749a24d4c16783434c306dee950704df38fcd409a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e9ec56b0f83de295b920d8ec88a74dbcd8945d884b069dc058824b900797c36\""
Mar 7 01:13:43.375002 containerd[1515]: time="2026-03-07T01:13:43.374189116Z" level=info msg="StartContainer for \"9e9ec56b0f83de295b920d8ec88a74dbcd8945d884b069dc058824b900797c36\""
Mar 7 01:13:43.416389 systemd[1]: Started cri-containerd-9e9ec56b0f83de295b920d8ec88a74dbcd8945d884b069dc058824b900797c36.scope - libcontainer container 9e9ec56b0f83de295b920d8ec88a74dbcd8945d884b069dc058824b900797c36.
Mar 7 01:13:43.451691 containerd[1515]: time="2026-03-07T01:13:43.451249217Z" level=info msg="StartContainer for \"9e9ec56b0f83de295b920d8ec88a74dbcd8945d884b069dc058824b900797c36\" returns successfully"
Mar 7 01:13:43.454935 systemd[1]: cri-containerd-9e9ec56b0f83de295b920d8ec88a74dbcd8945d884b069dc058824b900797c36.scope: Deactivated successfully.
Mar 7 01:13:43.475468 containerd[1515]: time="2026-03-07T01:13:43.475331326Z" level=info msg="shim disconnected" id=9e9ec56b0f83de295b920d8ec88a74dbcd8945d884b069dc058824b900797c36 namespace=k8s.io
Mar 7 01:13:43.475468 containerd[1515]: time="2026-03-07T01:13:43.475378346Z" level=warning msg="cleaning up after shim disconnected" id=9e9ec56b0f83de295b920d8ec88a74dbcd8945d884b069dc058824b900797c36 namespace=k8s.io
Mar 7 01:13:43.475468 containerd[1515]: time="2026-03-07T01:13:43.475389526Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:13:43.499553 sshd[4503]: Accepted publickey for core from 4.153.228.146 port 40050 ssh2: RSA SHA256:cfLbcynJBGQiJlcpT05nBKNU4f9DyADpOV1ay9ga6kI
Mar 7 01:13:43.500744 sshd[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:43.504397 systemd-logind[1487]: New session 23 of user core.
Mar 7 01:13:43.510389 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 01:13:44.238710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e9ec56b0f83de295b920d8ec88a74dbcd8945d884b069dc058824b900797c36-rootfs.mount: Deactivated successfully.
Mar 7 01:13:44.341791 containerd[1515]: time="2026-03-07T01:13:44.341579473Z" level=info msg="CreateContainer within sandbox \"a138b982eb68132f466fcd6749a24d4c16783434c306dee950704df38fcd409a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 7 01:13:44.367859 containerd[1515]: time="2026-03-07T01:13:44.367746791Z" level=info msg="CreateContainer within sandbox \"a138b982eb68132f466fcd6749a24d4c16783434c306dee950704df38fcd409a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b08b876acc32036f3bdfa825de95d9eed05e104eb929e1d0189d9755a4c1f9d5\""
Mar 7 01:13:44.367886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197432803.mount: Deactivated successfully.
Mar 7 01:13:44.369773 containerd[1515]: time="2026-03-07T01:13:44.369729444Z" level=info msg="StartContainer for \"b08b876acc32036f3bdfa825de95d9eed05e104eb929e1d0189d9755a4c1f9d5\""
Mar 7 01:13:44.414488 systemd[1]: Started cri-containerd-b08b876acc32036f3bdfa825de95d9eed05e104eb929e1d0189d9755a4c1f9d5.scope - libcontainer container b08b876acc32036f3bdfa825de95d9eed05e104eb929e1d0189d9755a4c1f9d5.
Mar 7 01:13:44.435239 systemd[1]: cri-containerd-b08b876acc32036f3bdfa825de95d9eed05e104eb929e1d0189d9755a4c1f9d5.scope: Deactivated successfully.
Mar 7 01:13:44.437595 containerd[1515]: time="2026-03-07T01:13:44.437553695Z" level=info msg="StartContainer for \"b08b876acc32036f3bdfa825de95d9eed05e104eb929e1d0189d9755a4c1f9d5\" returns successfully"
Mar 7 01:13:44.457891 containerd[1515]: time="2026-03-07T01:13:44.457725143Z" level=info msg="shim disconnected" id=b08b876acc32036f3bdfa825de95d9eed05e104eb929e1d0189d9755a4c1f9d5 namespace=k8s.io
Mar 7 01:13:44.457891 containerd[1515]: time="2026-03-07T01:13:44.457773743Z" level=warning msg="cleaning up after shim disconnected" id=b08b876acc32036f3bdfa825de95d9eed05e104eb929e1d0189d9755a4c1f9d5 namespace=k8s.io
Mar 7 01:13:44.457891 containerd[1515]: time="2026-03-07T01:13:44.457780813Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:13:45.240063 systemd[1]: run-containerd-runc-k8s.io-b08b876acc32036f3bdfa825de95d9eed05e104eb929e1d0189d9755a4c1f9d5-runc.I1aZHG.mount: Deactivated successfully.
Mar 7 01:13:45.240260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b08b876acc32036f3bdfa825de95d9eed05e104eb929e1d0189d9755a4c1f9d5-rootfs.mount: Deactivated successfully.
Mar 7 01:13:45.350319 containerd[1515]: time="2026-03-07T01:13:45.349475354Z" level=info msg="CreateContainer within sandbox \"a138b982eb68132f466fcd6749a24d4c16783434c306dee950704df38fcd409a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 7 01:13:45.380329 containerd[1515]: time="2026-03-07T01:13:45.379109205Z" level=info msg="CreateContainer within sandbox \"a138b982eb68132f466fcd6749a24d4c16783434c306dee950704df38fcd409a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f48bcde6389d751610270bb90d5258feecbded15c0b6b13a13e00d7d056006f5\""
Mar 7 01:13:45.380724 containerd[1515]: time="2026-03-07T01:13:45.380539030Z" level=info msg="StartContainer for \"f48bcde6389d751610270bb90d5258feecbded15c0b6b13a13e00d7d056006f5\""
Mar 7 01:13:45.422562 systemd[1]: Started cri-containerd-f48bcde6389d751610270bb90d5258feecbded15c0b6b13a13e00d7d056006f5.scope - libcontainer container f48bcde6389d751610270bb90d5258feecbded15c0b6b13a13e00d7d056006f5.
Mar 7 01:13:45.449138 containerd[1515]: time="2026-03-07T01:13:45.449059647Z" level=info msg="StartContainer for \"f48bcde6389d751610270bb90d5258feecbded15c0b6b13a13e00d7d056006f5\" returns successfully"
Mar 7 01:13:45.783340 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 7 01:13:48.264577 systemd[1]: run-containerd-runc-k8s.io-f48bcde6389d751610270bb90d5258feecbded15c0b6b13a13e00d7d056006f5-runc.e6Cw23.mount: Deactivated successfully.
Mar 7 01:13:48.471123 systemd-networkd[1419]: lxc_health: Link UP
Mar 7 01:13:48.495816 systemd-networkd[1419]: lxc_health: Gained carrier
Mar 7 01:13:49.353723 kubelet[2570]: I0307 01:13:49.353671 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xfcwh" podStartSLOduration=8.353659549 podStartE2EDuration="8.353659549s" podCreationTimestamp="2026-03-07 01:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:13:46.366419317 +0000 UTC m=+138.518592701" watchObservedRunningTime="2026-03-07 01:13:49.353659549 +0000 UTC m=+141.505832883"
Mar 7 01:13:50.525414 systemd-networkd[1419]: lxc_health: Gained IPv6LL
Mar 7 01:13:54.876585 sshd[4503]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:54.884452 systemd[1]: sshd@23-135.181.156.177:22-4.153.228.146:40050.service: Deactivated successfully.
Mar 7 01:13:54.888745 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 01:13:54.890426 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit.
Mar 7 01:13:54.892196 systemd-logind[1487]: Removed session 23.
Mar 7 01:14:11.262013 kubelet[2570]: E0307 01:14:11.261953 2570 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38770->10.0.0.2:2379: read: connection timed out"
Mar 7 01:14:11.445925 systemd[1]: cri-containerd-19cd0ef86b8d3c5aede58794f21af74082a4c76e9ea7602f0b300cfcda9cdb2f.scope: Deactivated successfully.
Mar 7 01:14:11.447547 systemd[1]: cri-containerd-19cd0ef86b8d3c5aede58794f21af74082a4c76e9ea7602f0b300cfcda9cdb2f.scope: Consumed 3.684s CPU time, 17.4M memory peak, 0B memory swap peak.
Mar 7 01:14:11.486504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19cd0ef86b8d3c5aede58794f21af74082a4c76e9ea7602f0b300cfcda9cdb2f-rootfs.mount: Deactivated successfully.
Mar 7 01:14:11.495891 containerd[1515]: time="2026-03-07T01:14:11.495806293Z" level=info msg="shim disconnected" id=19cd0ef86b8d3c5aede58794f21af74082a4c76e9ea7602f0b300cfcda9cdb2f namespace=k8s.io
Mar 7 01:14:11.495891 containerd[1515]: time="2026-03-07T01:14:11.495878773Z" level=warning msg="cleaning up after shim disconnected" id=19cd0ef86b8d3c5aede58794f21af74082a4c76e9ea7602f0b300cfcda9cdb2f namespace=k8s.io
Mar 7 01:14:11.495891 containerd[1515]: time="2026-03-07T01:14:11.495894663Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:14:12.415365 kubelet[2570]: I0307 01:14:12.415303 2570 scope.go:117] "RemoveContainer" containerID="19cd0ef86b8d3c5aede58794f21af74082a4c76e9ea7602f0b300cfcda9cdb2f"
Mar 7 01:14:12.418307 containerd[1515]: time="2026-03-07T01:14:12.418220213Z" level=info msg="CreateContainer within sandbox \"cdba41843873901aa30bb8914876cbaec6b9ef8b95ea25dfead3dfc52f6d9ec7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 7 01:14:12.439025 containerd[1515]: time="2026-03-07T01:14:12.438510548Z" level=info msg="CreateContainer within sandbox \"cdba41843873901aa30bb8914876cbaec6b9ef8b95ea25dfead3dfc52f6d9ec7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f552959078563088e821246f08afa51eb5a35e32cbdaf3ad69d93898f005954b\""
Mar 7 01:14:12.441075 containerd[1515]: time="2026-03-07T01:14:12.439436612Z" level=info msg="StartContainer for \"f552959078563088e821246f08afa51eb5a35e32cbdaf3ad69d93898f005954b\""
Mar 7 01:14:12.482404 systemd[1]: Started cri-containerd-f552959078563088e821246f08afa51eb5a35e32cbdaf3ad69d93898f005954b.scope - libcontainer container f552959078563088e821246f08afa51eb5a35e32cbdaf3ad69d93898f005954b.
Mar 7 01:14:12.520518 containerd[1515]: time="2026-03-07T01:14:12.520487387Z" level=info msg="StartContainer for \"f552959078563088e821246f08afa51eb5a35e32cbdaf3ad69d93898f005954b\" returns successfully"
Mar 7 01:14:15.794165 systemd[1]: cri-containerd-d992fe436a350e974660b788031099b6d8363eb44a5d2a14dea6b5c1d816b0cb.scope: Deactivated successfully.
Mar 7 01:14:15.794796 systemd[1]: cri-containerd-d992fe436a350e974660b788031099b6d8363eb44a5d2a14dea6b5c1d816b0cb.scope: Consumed 2.536s CPU time, 16.1M memory peak, 0B memory swap peak.
Mar 7 01:14:15.841573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d992fe436a350e974660b788031099b6d8363eb44a5d2a14dea6b5c1d816b0cb-rootfs.mount: Deactivated successfully.
Mar 7 01:14:15.856737 containerd[1515]: time="2026-03-07T01:14:15.856649209Z" level=info msg="shim disconnected" id=d992fe436a350e974660b788031099b6d8363eb44a5d2a14dea6b5c1d816b0cb namespace=k8s.io
Mar 7 01:14:15.856737 containerd[1515]: time="2026-03-07T01:14:15.856726338Z" level=warning msg="cleaning up after shim disconnected" id=d992fe436a350e974660b788031099b6d8363eb44a5d2a14dea6b5c1d816b0cb namespace=k8s.io
Mar 7 01:14:15.856737 containerd[1515]: time="2026-03-07T01:14:15.856743528Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:14:15.874387 kubelet[2570]: E0307 01:14:15.873341 2570 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38390->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-5ad0d165ec.189a6a15d97245a2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-5ad0d165ec,UID:33f3376ea28bccb77a235836b1216b3e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-5ad0d165ec,},FirstTimestamp:2026-03-07 01:14:05.398713762 +0000 UTC m=+157.550887136,LastTimestamp:2026-03-07 01:14:05.398713762 +0000 UTC m=+157.550887136,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-5ad0d165ec,}"
Mar 7 01:14:16.432288 kubelet[2570]: I0307 01:14:16.432201 2570 scope.go:117] "RemoveContainer" containerID="d992fe436a350e974660b788031099b6d8363eb44a5d2a14dea6b5c1d816b0cb"
Mar 7 01:14:16.434631 containerd[1515]: time="2026-03-07T01:14:16.434500827Z" level=info msg="CreateContainer within sandbox \"a0d3a5b8b7254ade64ea0393815063a63cae7e741dfd2588543889953c649158\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 7 01:14:16.454243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2217239031.mount: Deactivated successfully.
Mar 7 01:14:16.454800 containerd[1515]: time="2026-03-07T01:14:16.454410081Z" level=info msg="CreateContainer within sandbox \"a0d3a5b8b7254ade64ea0393815063a63cae7e741dfd2588543889953c649158\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"33356546af3befee372207fd1e1bd8abf128a6e97b9065452df338d929b0132b\""
Mar 7 01:14:16.456059 containerd[1515]: time="2026-03-07T01:14:16.456014019Z" level=info msg="StartContainer for \"33356546af3befee372207fd1e1bd8abf128a6e97b9065452df338d929b0132b\""
Mar 7 01:14:16.510406 systemd[1]: Started cri-containerd-33356546af3befee372207fd1e1bd8abf128a6e97b9065452df338d929b0132b.scope - libcontainer container 33356546af3befee372207fd1e1bd8abf128a6e97b9065452df338d929b0132b.
Mar 7 01:14:16.542687 containerd[1515]: time="2026-03-07T01:14:16.542591036Z" level=info msg="StartContainer for \"33356546af3befee372207fd1e1bd8abf128a6e97b9065452df338d929b0132b\" returns successfully"
Mar 7 01:14:21.263398 kubelet[2570]: E0307 01:14:21.263245 2570 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4081-3-6-n-5ad0d165ec)"
Mar 7 01:14:27.928940 containerd[1515]: time="2026-03-07T01:14:27.928788207Z" level=info msg="StopPodSandbox for \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\""
Mar 7 01:14:27.928940 containerd[1515]: time="2026-03-07T01:14:27.928921186Z" level=info msg="TearDown network for sandbox \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\" successfully"
Mar 7 01:14:27.928940 containerd[1515]: time="2026-03-07T01:14:27.928941076Z" level=info msg="StopPodSandbox for \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\" returns successfully"
Mar 7 01:14:27.931555 containerd[1515]: time="2026-03-07T01:14:27.930193606Z" level=info msg="RemovePodSandbox for \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\""
Mar 7 01:14:27.931555 containerd[1515]: time="2026-03-07T01:14:27.930233136Z" level=info msg="Forcibly stopping sandbox \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\""
Mar 7 01:14:27.931555 containerd[1515]: time="2026-03-07T01:14:27.930382924Z" level=info msg="TearDown network for sandbox \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\" successfully"
Mar 7 01:14:27.938678 containerd[1515]: time="2026-03-07T01:14:27.938632854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:14:27.938899 containerd[1515]: time="2026-03-07T01:14:27.938829412Z" level=info msg="RemovePodSandbox \"241639b12af3b6d7b31fad7b62f10d62971aa594df4814aa046b547f5296b71f\" returns successfully"
Mar 7 01:14:27.939549 containerd[1515]: time="2026-03-07T01:14:27.939505737Z" level=info msg="StopPodSandbox for \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\""
Mar 7 01:14:27.939661 containerd[1515]: time="2026-03-07T01:14:27.939618806Z" level=info msg="TearDown network for sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" successfully"
Mar 7 01:14:27.939661 containerd[1515]: time="2026-03-07T01:14:27.939636556Z" level=info msg="StopPodSandbox for \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" returns successfully"
Mar 7 01:14:27.940385 containerd[1515]: time="2026-03-07T01:14:27.940135842Z" level=info msg="RemovePodSandbox for \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\""
Mar 7 01:14:27.940385 containerd[1515]: time="2026-03-07T01:14:27.940175682Z" level=info msg="Forcibly stopping sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\""
Mar 7 01:14:27.940591 containerd[1515]: time="2026-03-07T01:14:27.940385040Z" level=info msg="TearDown network for sandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" successfully"
Mar 7 01:14:27.945501 containerd[1515]: time="2026-03-07T01:14:27.945424014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:14:27.946761 containerd[1515]: time="2026-03-07T01:14:27.945533013Z" level=info msg="RemovePodSandbox \"7c7abcfd379eb9cd5931fa999016f68b12e67124d30a832c29cad0afe6149433\" returns successfully"
Mar 7 01:14:31.264466 kubelet[2570]: E0307 01:14:31.264378 2570 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4081-3-6-n-5ad0d165ec)"