Apr 13 20:14:01.934191 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:14:01.934207 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:14:01.934216 kernel: BIOS-provided physical RAM map:
Apr 13 20:14:01.934221 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 13 20:14:01.934225 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Apr 13 20:14:01.934230 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 13 20:14:01.934235 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 13 20:14:01.934239 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Apr 13 20:14:01.934244 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Apr 13 20:14:01.934248 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Apr 13 20:14:01.934253 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 13 20:14:01.934259 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 13 20:14:01.934264 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 13 20:14:01.934271 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 13 20:14:01.934279 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 13 20:14:01.934286 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 13 20:14:01.934296 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 13 20:14:01.934303 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 13 20:14:01.934310 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 13 20:14:01.934317 kernel: NX (Execute Disable) protection: active
Apr 13 20:14:01.934322 kernel: APIC: Static calls initialized
Apr 13 20:14:01.934326 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 13 20:14:01.934331 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e845198
Apr 13 20:14:01.934336 kernel: efi: Remove mem135: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 13 20:14:01.934341 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 13 20:14:01.934345 kernel: SMBIOS 3.0.0 present.
Apr 13 20:14:01.934350 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 13 20:14:01.934355 kernel: Hypervisor detected: KVM
Apr 13 20:14:01.934362 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:14:01.934367 kernel: kvm-clock: using sched offset of 12682961824 cycles
Apr 13 20:14:01.934372 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:14:01.934377 kernel: tsc: Detected 2399.998 MHz processor
Apr 13 20:14:01.934382 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:14:01.934387 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:14:01.934391 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Apr 13 20:14:01.934396 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 13 20:14:01.934401 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:14:01.934408 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 13 20:14:01.934413 kernel: Using GB pages for direct mapping
Apr 13 20:14:01.934418 kernel: Secure boot disabled
Apr 13 20:14:01.934426 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:14:01.934431 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Apr 13 20:14:01.934436 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 13 20:14:01.934441 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:14:01.934448 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:14:01.934453 kernel: ACPI: FACS 0x000000007FBDD000 000040
Apr 13 20:14:01.934458 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:14:01.934463 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:14:01.934468 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:14:01.934473 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:14:01.934479 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 13 20:14:01.934486 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Apr 13 20:14:01.934491 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Apr 13 20:14:01.934496 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Apr 13 20:14:01.934501 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Apr 13 20:14:01.934506 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Apr 13 20:14:01.934511 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Apr 13 20:14:01.934516 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Apr 13 20:14:01.934520 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Apr 13 20:14:01.934526 kernel: No NUMA configuration found
Apr 13 20:14:01.934533 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Apr 13 20:14:01.934538 kernel: NODE_DATA(0) allocated [mem 0x179ff8000-0x179ffdfff]
Apr 13 20:14:01.934543 kernel: Zone ranges:
Apr 13 20:14:01.934548 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:14:01.934553 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 13 20:14:01.934558 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Apr 13 20:14:01.934563 kernel: Movable zone start for each node
Apr 13 20:14:01.934568 kernel: Early memory node ranges
Apr 13 20:14:01.934573 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 13 20:14:01.934578 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Apr 13 20:14:01.934588 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Apr 13 20:14:01.934595 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Apr 13 20:14:01.934603 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Apr 13 20:14:01.934611 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Apr 13 20:14:01.934618 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:14:01.934625 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 13 20:14:01.934632 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 13 20:14:01.934640 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 13 20:14:01.934647 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Apr 13 20:14:01.934655 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 13 20:14:01.934660 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 13 20:14:01.934665 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:14:01.934670 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 20:14:01.934675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 13 20:14:01.934680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:14:01.934685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:14:01.934690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:14:01.934695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:14:01.934702 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:14:01.934707 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 13 20:14:01.934712 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:14:01.934718 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 20:14:01.934722 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Apr 13 20:14:01.934727 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:14:01.934733 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:14:01.934738 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:14:01.934743 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:14:01.934750 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:14:01.934755 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:14:01.934760 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 13 20:14:01.934766 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:14:01.934771 kernel: random: crng init done
Apr 13 20:14:01.934776 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 20:14:01.934781 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 20:14:01.934786 kernel: Fallback order for Node 0: 0
Apr 13 20:14:01.934793 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Apr 13 20:14:01.934798 kernel: Policy zone: Normal
Apr 13 20:14:01.934803 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:14:01.934808 kernel: software IO TLB: area num 2.
Apr 13 20:14:01.934813 kernel: Memory: 3827828K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 263136K reserved, 0K cma-reserved)
Apr 13 20:14:01.934818 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:14:01.934823 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:14:01.934828 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:14:01.934833 kernel: Dynamic Preempt: voluntary
Apr 13 20:14:01.934840 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:14:01.934846 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:14:01.934852 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:14:01.934857 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:14:01.934869 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:14:01.934876 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:14:01.934882 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:14:01.934887 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:14:01.934892 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:14:01.934897 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:14:01.934902 kernel: Console: colour dummy device 80x25
Apr 13 20:14:01.934908 kernel: printk: console [tty0] enabled
Apr 13 20:14:01.934915 kernel: printk: console [ttyS0] enabled
Apr 13 20:14:01.934920 kernel: ACPI: Core revision 20230628
Apr 13 20:14:01.934926 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 13 20:14:01.934931 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:14:01.934936 kernel: x2apic enabled
Apr 13 20:14:01.934944 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:14:01.934958 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 13 20:14:01.934963 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 13 20:14:01.934968 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
Apr 13 20:14:01.934974 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 13 20:14:01.934979 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 13 20:14:01.934984 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 13 20:14:01.934989 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:14:01.934995 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 13 20:14:01.935002 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 13 20:14:01.935008 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 13 20:14:01.935013 kernel: active return thunk: srso_alias_return_thunk
Apr 13 20:14:01.935018 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Apr 13 20:14:01.935023 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 13 20:14:01.935028 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:14:01.935034 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:14:01.935039 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:14:01.935044 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:14:01.935052 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 13 20:14:01.935057 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 13 20:14:01.935062 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 13 20:14:01.935068 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 13 20:14:01.935073 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:14:01.935078 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 13 20:14:01.935083 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 13 20:14:01.935088 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 13 20:14:01.935094 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Apr 13 20:14:01.935325 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Apr 13 20:14:01.935332 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:14:01.935337 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:14:01.935343 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:14:01.935348 kernel: landlock: Up and running.
Apr 13 20:14:01.935354 kernel: SELinux: Initializing.
Apr 13 20:14:01.935362 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:14:01.935370 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:14:01.935379 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Apr 13 20:14:01.935392 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:14:01.935399 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:14:01.935404 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:14:01.935410 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 13 20:14:01.935415 kernel: ... version: 0
Apr 13 20:14:01.935420 kernel: ... bit width: 48
Apr 13 20:14:01.935425 kernel: ... generic registers: 6
Apr 13 20:14:01.935430 kernel: ... value mask: 0000ffffffffffff
Apr 13 20:14:01.935435 kernel: ... max period: 00007fffffffffff
Apr 13 20:14:01.935443 kernel: ... fixed-purpose events: 0
Apr 13 20:14:01.935449 kernel: ... event mask: 000000000000003f
Apr 13 20:14:01.935454 kernel: signal: max sigframe size: 3376
Apr 13 20:14:01.935462 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:14:01.935471 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:14:01.935480 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:14:01.935488 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:14:01.935494 kernel: .... node #0, CPUs: #1
Apr 13 20:14:01.935499 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:14:01.935506 kernel: smpboot: Max logical packages: 1
Apr 13 20:14:01.935512 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS)
Apr 13 20:14:01.935517 kernel: devtmpfs: initialized
Apr 13 20:14:01.935522 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:14:01.935527 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Apr 13 20:14:01.935533 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:14:01.935538 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:14:01.935543 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:14:01.935548 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:14:01.935558 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:14:01.935567 kernel: audit: type=2000 audit(1776111240.703:1): state=initialized audit_enabled=0 res=1
Apr 13 20:14:01.935575 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:14:01.935583 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:14:01.935589 kernel: cpuidle: using governor menu
Apr 13 20:14:01.935594 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:14:01.935599 kernel: dca service started, version 1.12.1
Apr 13 20:14:01.935604 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Apr 13 20:14:01.935609 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:14:01.935617 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:14:01.935622 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:14:01.935627 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:14:01.935632 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:14:01.935637 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:14:01.935642 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:14:01.935650 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:14:01.935659 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:14:01.935667 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 20:14:01.935678 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:14:01.935683 kernel: ACPI: Interpreter enabled
Apr 13 20:14:01.935688 kernel: ACPI: PM: (supports S0 S5)
Apr 13 20:14:01.935693 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:14:01.935699 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:14:01.935704 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 20:14:01.935709 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 13 20:14:01.935714 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:14:01.938187 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:14:01.938327 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 13 20:14:01.938448 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 13 20:14:01.938459 kernel: PCI host bridge to bus 0000:00
Apr 13 20:14:01.938577 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:14:01.938682 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:14:01.938786 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:14:01.938894 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Apr 13 20:14:01.939009 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 13 20:14:01.939128 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Apr 13 20:14:01.939220 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:14:01.939330 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 13 20:14:01.939436 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Apr 13 20:14:01.939532 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Apr 13 20:14:01.939632 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Apr 13 20:14:01.939727 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Apr 13 20:14:01.939825 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 13 20:14:01.939944 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 13 20:14:01.940074 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 20:14:01.941285 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 13 20:14:01.941396 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Apr 13 20:14:01.941499 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 13 20:14:01.941596 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Apr 13 20:14:01.941699 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 13 20:14:01.941796 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Apr 13 20:14:01.941898 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 13 20:14:01.942008 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Apr 13 20:14:01.942453 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 13 20:14:01.942583 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Apr 13 20:14:01.942691 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 13 20:14:01.942787 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Apr 13 20:14:01.942913 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 13 20:14:01.943029 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Apr 13 20:14:01.943151 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 13 20:14:01.943250 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Apr 13 20:14:01.943356 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 13 20:14:01.943452 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Apr 13 20:14:01.943553 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 13 20:14:01.943693 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 13 20:14:01.943801 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 13 20:14:01.943896 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Apr 13 20:14:01.944015 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Apr 13 20:14:01.945264 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 13 20:14:01.945386 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Apr 13 20:14:01.945498 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 20:14:01.945622 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Apr 13 20:14:01.945740 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Apr 13 20:14:01.945858 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 20:14:01.945977 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 13 20:14:01.946093 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 13 20:14:01.947302 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 13 20:14:01.947434 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 13 20:14:01.947544 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Apr 13 20:14:01.947642 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 13 20:14:01.947736 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 13 20:14:01.947873 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 13 20:14:01.947988 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Apr 13 20:14:01.948089 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Apr 13 20:14:01.949975 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 13 20:14:01.950128 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 13 20:14:01.950326 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 13 20:14:01.950474 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 13 20:14:01.950625 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Apr 13 20:14:01.950741 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 13 20:14:01.950851 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 13 20:14:01.950974 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 13 20:14:01.951081 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Apr 13 20:14:01.951201 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Apr 13 20:14:01.951297 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 13 20:14:01.951392 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 13 20:14:01.951488 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 13 20:14:01.951622 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 13 20:14:01.951748 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Apr 13 20:14:01.951876 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Apr 13 20:14:01.952002 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 13 20:14:01.953370 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 13 20:14:01.953486 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 13 20:14:01.953493 kernel: acpiphp: Slot [0] registered
Apr 13 20:14:01.953603 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 20:14:01.953705 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Apr 13 20:14:01.953805 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Apr 13 20:14:01.953927 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 20:14:01.954044 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 13 20:14:01.954232 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 13 20:14:01.954347 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 13 20:14:01.954359 kernel: acpiphp: Slot [0-2] registered
Apr 13 20:14:01.954475 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 13 20:14:01.954585 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 13 20:14:01.954694 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 13 20:14:01.954707 kernel: acpiphp: Slot [0-3] registered
Apr 13 20:14:01.954818 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 13 20:14:01.954926 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 13 20:14:01.955034 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 13 20:14:01.955041 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:14:01.955047 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:14:01.955052 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:14:01.955058 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:14:01.955067 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 13 20:14:01.955072 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 13 20:14:01.955077 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 13 20:14:01.955083 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 13 20:14:01.955088 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 13 20:14:01.955093 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 13 20:14:01.955110 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 13 20:14:01.955115 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 13 20:14:01.955120 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 13 20:14:01.955128 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 13 20:14:01.955134 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 13 20:14:01.955139 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 13 20:14:01.955145 kernel: iommu: Default domain type: Translated
Apr 13 20:14:01.955150 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:14:01.955155 kernel: efivars: Registered efivars operations
Apr 13 20:14:01.955160 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:14:01.955166 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:14:01.955171 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Apr 13 20:14:01.955179 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Apr 13 20:14:01.955184 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Apr 13 20:14:01.955190 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Apr 13 20:14:01.955288 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 13 20:14:01.955383 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 13 20:14:01.955477 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 13 20:14:01.955484 kernel: vgaarb: loaded
Apr 13 20:14:01.955489 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 13 20:14:01.955495 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 13 20:14:01.955503 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:14:01.955508 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:14:01.955514 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:14:01.955519 kernel: pnp: PnP ACPI init
Apr 13 20:14:01.955623 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 13 20:14:01.955631 kernel: pnp: PnP ACPI: found 5 devices
Apr 13 20:14:01.955636 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:14:01.955642 kernel: NET: Registered PF_INET protocol family
Apr 13 20:14:01.955663 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 20:14:01.955670 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 20:14:01.955676 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:14:01.955682 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 20:14:01.955687 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 20:14:01.955693 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 20:14:01.955699 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:14:01.955704 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:14:01.955710 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:14:01.955721 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:14:01.955841 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 13 20:14:01.955946 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 13 20:14:01.956052 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 13 20:14:01.956161 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 13 20:14:01.956257 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 13 20:14:01.956353 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Apr 13 20:14:01.956452 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 13 20:14:01.956550 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 13 20:14:01.956660 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Apr 13 20:14:01.956770 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 13 20:14:01.956895 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 13 20:14:01.957019 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 13 20:14:01.957162 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 13 20:14:01.957293 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 13 20:14:01.957393 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 13 20:14:01.957489 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 13 20:14:01.957585 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 13 20:14:01.957681 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 13 20:14:01.957776 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 13 20:14:01.957876 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 13 20:14:01.957983 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 13 20:14:01.958078 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 13 20:14:01.958200 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 13 20:14:01.958296 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 13 20:14:01.958390 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 13 20:14:01.958494 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Apr 13 20:14:01.958589 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 13 20:14:01.958686 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 13 20:14:01.958779 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 13 20:14:01.958875 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 13 20:14:01.958980 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 13 20:14:01.959139 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 13 20:14:01.959259 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 13 20:14:01.959359 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 13 20:14:01.959455 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 13 20:14:01.959550 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 13 20:14:01.959650 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 13 20:14:01.959745 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 13 20:14:01.959838 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 
13 20:14:01.959927 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 13 20:14:01.960031 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 13 20:14:01.960208 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Apr 13 20:14:01.960298 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Apr 13 20:14:01.960387 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Apr 13 20:14:01.960503 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Apr 13 20:14:01.960598 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 13 20:14:01.960697 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Apr 13 20:14:01.960801 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Apr 13 20:14:01.960893 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 13 20:14:01.961004 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 13 20:14:01.961163 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Apr 13 20:14:01.961275 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 13 20:14:01.961390 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Apr 13 20:14:01.961515 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 13 20:14:01.961630 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Apr 13 20:14:01.961736 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Apr 13 20:14:01.961841 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 13 20:14:01.961964 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Apr 13 20:14:01.962078 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Apr 13 20:14:01.963232 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 13 20:14:01.963367 kernel: pci_bus 0000:09: resource 0 
[io 0x3000-0x3fff] Apr 13 20:14:01.963477 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Apr 13 20:14:01.963571 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 13 20:14:01.963579 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 13 20:14:01.963585 kernel: PCI: CLS 0 bytes, default 64 Apr 13 20:14:01.963590 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 13 20:14:01.963596 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Apr 13 20:14:01.963602 kernel: Initialise system trusted keyrings Apr 13 20:14:01.963611 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 20:14:01.963639 kernel: Key type asymmetric registered Apr 13 20:14:01.963644 kernel: Asymmetric key parser 'x509' registered Apr 13 20:14:01.963650 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 13 20:14:01.963656 kernel: io scheduler mq-deadline registered Apr 13 20:14:01.963661 kernel: io scheduler kyber registered Apr 13 20:14:01.963677 kernel: io scheduler bfq registered Apr 13 20:14:01.963798 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 13 20:14:01.963896 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 13 20:14:01.964006 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 13 20:14:01.964115 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 13 20:14:01.964212 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 13 20:14:01.964308 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 13 20:14:01.964404 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 13 20:14:01.964499 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 13 20:14:01.964594 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 13 20:14:01.964689 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 13 20:14:01.964788 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 13 
20:14:01.964908 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 13 20:14:01.965016 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 13 20:14:01.966173 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 13 20:14:01.966299 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 13 20:14:01.966418 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 13 20:14:01.966434 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 13 20:14:01.966545 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Apr 13 20:14:01.966675 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Apr 13 20:14:01.966684 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 20:14:01.966690 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Apr 13 20:14:01.966695 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 20:14:01.966704 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 20:14:01.966710 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 13 20:14:01.966716 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 20:14:01.966721 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 20:14:01.966823 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 13 20:14:01.966833 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 13 20:14:01.966923 kernel: rtc_cmos 00:03: registered as rtc0 Apr 13 20:14:01.967023 kernel: rtc_cmos 00:03: setting system clock to 2026-04-13T20:14:01 UTC (1776111241) Apr 13 20:14:01.968230 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 13 20:14:01.968243 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 13 20:14:01.968249 kernel: efifb: probing for efifb Apr 13 20:14:01.968255 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Apr 13 20:14:01.968261 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Apr 13 
20:14:01.968270 kernel: efifb: scrolling: redraw Apr 13 20:14:01.968276 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 13 20:14:01.968281 kernel: Console: switching to colour frame buffer device 160x50 Apr 13 20:14:01.968287 kernel: fb0: EFI VGA frame buffer device Apr 13 20:14:01.968293 kernel: pstore: Using crash dump compression: deflate Apr 13 20:14:01.968299 kernel: pstore: Registered efi_pstore as persistent store backend Apr 13 20:14:01.968304 kernel: NET: Registered PF_INET6 protocol family Apr 13 20:14:01.968310 kernel: Segment Routing with IPv6 Apr 13 20:14:01.968315 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 20:14:01.968323 kernel: NET: Registered PF_PACKET protocol family Apr 13 20:14:01.968329 kernel: Key type dns_resolver registered Apr 13 20:14:01.968334 kernel: IPI shorthand broadcast: enabled Apr 13 20:14:01.968340 kernel: sched_clock: Marking stable (1283010193, 216269564)->(1549407370, -50127613) Apr 13 20:14:01.968346 kernel: registered taskstats version 1 Apr 13 20:14:01.968351 kernel: Loading compiled-in X.509 certificates Apr 13 20:14:01.968357 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 20:14:01.968363 kernel: Key type .fscrypt registered Apr 13 20:14:01.968368 kernel: Key type fscrypt-provisioning registered Apr 13 20:14:01.968376 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 13 20:14:01.968382 kernel: ima: Allocated hash algorithm: sha1 Apr 13 20:14:01.968388 kernel: ima: No architecture policies found Apr 13 20:14:01.968393 kernel: clk: Disabling unused clocks Apr 13 20:14:01.968399 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 13 20:14:01.968405 kernel: Write protecting the kernel read-only data: 36864k Apr 13 20:14:01.968411 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 13 20:14:01.968416 kernel: Run /init as init process Apr 13 20:14:01.968422 kernel: with arguments: Apr 13 20:14:01.968430 kernel: /init Apr 13 20:14:01.968436 kernel: with environment: Apr 13 20:14:01.968441 kernel: HOME=/ Apr 13 20:14:01.968446 kernel: TERM=linux Apr 13 20:14:01.968454 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:14:01.968462 systemd[1]: Detected virtualization kvm. Apr 13 20:14:01.968468 systemd[1]: Detected architecture x86-64. Apr 13 20:14:01.968476 systemd[1]: Running in initrd. Apr 13 20:14:01.968482 systemd[1]: No hostname configured, using default hostname. Apr 13 20:14:01.968488 systemd[1]: Hostname set to . Apr 13 20:14:01.968494 systemd[1]: Initializing machine ID from VM UUID. Apr 13 20:14:01.968500 systemd[1]: Queued start job for default target initrd.target. Apr 13 20:14:01.968506 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:14:01.968512 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:14:01.968519 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 13 20:14:01.968527 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:14:01.968533 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 20:14:01.968539 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 20:14:01.968546 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 20:14:01.968555 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 20:14:01.968561 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:14:01.968567 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:14:01.968575 systemd[1]: Reached target paths.target - Path Units. Apr 13 20:14:01.968580 systemd[1]: Reached target slices.target - Slice Units. Apr 13 20:14:01.968586 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:14:01.968592 systemd[1]: Reached target timers.target - Timer Units. Apr 13 20:14:01.968598 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:14:01.968604 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:14:01.968610 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 20:14:01.968616 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 20:14:01.968624 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:14:01.968630 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:14:01.968636 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:14:01.968642 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 13 20:14:01.968648 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 13 20:14:01.968654 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:14:01.968660 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 20:14:01.968666 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 20:14:01.968672 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 20:14:01.968680 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:14:01.968686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:14:01.968711 systemd-journald[189]: Collecting audit messages is disabled. Apr 13 20:14:01.968737 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 20:14:01.968751 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:14:01.968759 systemd[1]: Finished systemd-fsck-usr.service. Apr 13 20:14:01.968768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:14:01.968777 systemd-journald[189]: Journal started Apr 13 20:14:01.968798 systemd-journald[189]: Runtime Journal (/run/log/journal/dc3f9f246e9844d08902f3ddfd5b5cee) is 8.0M, max 76.3M, 68.3M free. Apr 13 20:14:01.971797 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:14:01.950595 systemd-modules-load[190]: Inserted module 'overlay' Apr 13 20:14:01.982034 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 20:14:01.982065 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 13 20:14:01.984121 kernel: Bridge firewalling registered Apr 13 20:14:01.984685 systemd-modules-load[190]: Inserted module 'br_netfilter' Apr 13 20:14:01.989137 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:14:01.992144 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:14:02.003274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:14:02.006217 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:14:02.006843 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:14:02.008698 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:14:02.012226 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 20:14:02.020221 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:14:02.021216 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:14:02.022278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:14:02.033219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 20:14:02.034704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 13 20:14:02.036542 dracut-cmdline[215]: dracut-dracut-053 Apr 13 20:14:02.036542 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:14:02.057438 systemd-resolved[222]: Positive Trust Anchors: Apr 13 20:14:02.058052 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 20:14:02.058495 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:14:02.062358 systemd-resolved[222]: Defaulting to hostname 'linux'. Apr 13 20:14:02.063675 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:14:02.064177 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:14:02.095143 kernel: SCSI subsystem initialized Apr 13 20:14:02.103125 kernel: Loading iSCSI transport class v2.0-870. Apr 13 20:14:02.113131 kernel: iscsi: registered transport (tcp) Apr 13 20:14:02.129417 kernel: iscsi: registered transport (qla4xxx) Apr 13 20:14:02.129488 kernel: QLogic iSCSI HBA Driver Apr 13 20:14:02.166768 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 13 20:14:02.171247 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 20:14:02.195365 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 13 20:14:02.195435 kernel: device-mapper: uevent: version 1.0.3 Apr 13 20:14:02.199125 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 20:14:02.237143 kernel: raid6: avx512x4 gen() 41690 MB/s Apr 13 20:14:02.255138 kernel: raid6: avx512x2 gen() 48222 MB/s Apr 13 20:14:02.273134 kernel: raid6: avx512x1 gen() 47245 MB/s Apr 13 20:14:02.291132 kernel: raid6: avx2x4 gen() 51988 MB/s Apr 13 20:14:02.309134 kernel: raid6: avx2x2 gen() 54892 MB/s Apr 13 20:14:02.328282 kernel: raid6: avx2x1 gen() 44110 MB/s Apr 13 20:14:02.328384 kernel: raid6: using algorithm avx2x2 gen() 54892 MB/s Apr 13 20:14:02.348370 kernel: raid6: .... xor() 36206 MB/s, rmw enabled Apr 13 20:14:02.348437 kernel: raid6: using avx512x2 recovery algorithm Apr 13 20:14:02.365205 kernel: xor: automatically using best checksumming function avx Apr 13 20:14:02.476141 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 20:14:02.486265 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:14:02.492292 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:14:02.506655 systemd-udevd[406]: Using default interface naming scheme 'v255'. Apr 13 20:14:02.511577 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:14:02.519236 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 13 20:14:02.530881 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Apr 13 20:14:02.558773 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:14:02.563280 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Apr 13 20:14:02.639368 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:14:02.645135 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 20:14:02.675913 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 20:14:02.677687 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:14:02.678547 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:14:02.679365 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:14:02.688315 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 20:14:02.696450 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:14:02.724827 kernel: scsi host0: Virtio SCSI HBA Apr 13 20:14:02.727831 kernel: cryptd: max_cpu_qlen set to 1000 Apr 13 20:14:02.735638 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 13 20:14:02.756155 kernel: ACPI: bus type USB registered Apr 13 20:14:02.764593 kernel: usbcore: registered new interface driver usbfs Apr 13 20:14:02.769171 kernel: usbcore: registered new interface driver hub Apr 13 20:14:02.771183 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:14:02.771683 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:14:02.775910 kernel: usbcore: registered new device driver usb Apr 13 20:14:02.775940 kernel: libata version 3.00 loaded. Apr 13 20:14:02.774744 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:14:02.775127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:14:02.775249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:14:02.775595 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 13 20:14:02.785463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:14:02.787205 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:14:02.787299 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:14:02.790294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:14:02.797546 kernel: AVX2 version of gcm_enc/dec engaged. Apr 13 20:14:02.802128 kernel: AES CTR mode by8 optimization enabled Apr 13 20:14:02.825851 kernel: ahci 0000:00:1f.2: version 3.0 Apr 13 20:14:02.826063 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 13 20:14:02.826322 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:14:02.834146 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 13 20:14:02.838412 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:14:02.845021 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 13 20:14:02.845214 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 13 20:14:02.845341 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 13 20:14:02.848648 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 13 20:14:02.848838 kernel: scsi host1: ahci Apr 13 20:14:02.848860 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Apr 13 20:14:02.854893 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 13 20:14:02.855083 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 13 20:14:02.857695 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 13 20:14:02.857874 kernel: scsi host2: ahci Apr 13 20:14:02.858009 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 13 20:14:02.862158 kernel: scsi host3: ahci Apr 13 20:14:02.866363 kernel: scsi host4: ahci Apr 
13 20:14:02.866529 kernel: scsi host5: ahci Apr 13 20:14:02.869001 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 13 20:14:02.869191 kernel: scsi host6: ahci Apr 13 20:14:02.873211 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48 Apr 13 20:14:02.873238 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 13 20:14:02.873399 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48 Apr 13 20:14:02.873407 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 13 20:14:02.873525 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48 Apr 13 20:14:02.875518 kernel: hub 1-0:1.0: USB hub found Apr 13 20:14:02.875674 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48 Apr 13 20:14:02.876116 kernel: hub 1-0:1.0: 4 ports detected Apr 13 20:14:02.876275 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48 Apr 13 20:14:02.880709 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 13 20:14:02.880883 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48 Apr 13 20:14:02.880892 kernel: hub 2-0:1.0: USB hub found Apr 13 20:14:02.881036 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 20:14:02.886167 kernel: hub 2-0:1.0: 4 ports detected Apr 13 20:14:02.886352 kernel: GPT:17805311 != 160006143 Apr 13 20:14:02.886369 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 20:14:02.886377 kernel: GPT:17805311 != 160006143 Apr 13 20:14:02.886384 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 13 20:14:02.886391 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:14:02.905338 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 13 20:14:02.906340 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 13 20:14:03.119272 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 13 20:14:03.201318 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 13 20:14:03.201428 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 13 20:14:03.202170 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 13 20:14:03.213125 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 13 20:14:03.213182 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 13 20:14:03.216572 kernel: ata1.00: applying bridge limits Apr 13 20:14:03.224067 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 13 20:14:03.229151 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 13 20:14:03.229205 kernel: ata1.00: configured for UDMA/100 Apr 13 20:14:03.233319 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 13 20:14:03.296527 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 13 20:14:03.296597 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 13 20:14:03.296786 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 13 20:14:03.300699 kernel: usbcore: registered new interface driver usbhid Apr 13 20:14:03.300723 kernel: usbhid: USB HID core driver Apr 13 20:14:03.314117 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 13 20:14:03.314142 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 13 20:14:03.316641 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Apr 13 20:14:03.323090 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (462) Apr 13 20:14:03.323124 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 13 20:14:03.323287 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (470) Apr 13 20:14:03.329061 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 13 20:14:03.339765 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 20:14:03.343138 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 13 20:14:03.343809 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 13 20:14:03.350245 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 20:14:03.354808 disk-uuid[585]: Primary Header is updated. Apr 13 20:14:03.354808 disk-uuid[585]: Secondary Entries is updated. Apr 13 20:14:03.354808 disk-uuid[585]: Secondary Header is updated. Apr 13 20:14:03.362258 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:14:03.367120 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:14:04.378162 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:14:04.379501 disk-uuid[586]: The operation has completed successfully. Apr 13 20:14:04.464767 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 20:14:04.464876 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 20:14:04.471255 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Apr 13 20:14:04.495143 sh[604]: Success Apr 13 20:14:04.515138 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 13 20:14:04.554586 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 20:14:04.560810 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 20:14:04.562436 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 13 20:14:04.586779 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 13 20:14:04.586815 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:14:04.586824 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 20:14:04.591364 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 20:14:04.591391 kernel: BTRFS info (device dm-0): using free space tree Apr 13 20:14:04.605154 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 20:14:04.607375 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 20:14:04.608558 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 20:14:04.619226 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 20:14:04.622238 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 13 20:14:04.648767 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:14:04.648811 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:14:04.648825 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:14:04.658207 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:14:04.658263 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:14:04.669235 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 20:14:04.674128 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:14:04.679628 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 20:14:04.687997 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 13 20:14:04.719932 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:14:04.729237 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:14:04.752556 systemd-networkd[789]: lo: Link UP Apr 13 20:14:04.753185 systemd-networkd[789]: lo: Gained carrier Apr 13 20:14:04.754308 ignition[738]: Ignition 2.19.0 Apr 13 20:14:04.754314 ignition[738]: Stage: fetch-offline Apr 13 20:14:04.754347 ignition[738]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:14:04.754356 ignition[738]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:14:04.754436 ignition[738]: parsed url from cmdline: "" Apr 13 20:14:04.756460 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 13 20:14:04.754440 ignition[738]: no config URL provided
Apr 13 20:14:04.754445 ignition[738]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:14:04.757544 systemd-networkd[789]: Enumeration completed
Apr 13 20:14:04.754452 ignition[738]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:14:04.758919 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:14:04.754457 ignition[738]: failed to fetch config: resource requires networking
Apr 13 20:14:04.759045 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:14:04.754705 ignition[738]: Ignition finished successfully
Apr 13 20:14:04.759049 systemd-networkd[789]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:14:04.760531 systemd-networkd[789]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:14:04.760535 systemd-networkd[789]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:14:04.761126 systemd[1]: Reached target network.target - Network.
Apr 13 20:14:04.762654 systemd-networkd[789]: eth0: Link UP
Apr 13 20:14:04.762659 systemd-networkd[789]: eth0: Gained carrier
Apr 13 20:14:04.762667 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:14:04.766901 systemd-networkd[789]: eth1: Link UP
Apr 13 20:14:04.766905 systemd-networkd[789]: eth1: Gained carrier
Apr 13 20:14:04.766913 systemd-networkd[789]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:14:04.767300 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 20:14:04.779672 ignition[797]: Ignition 2.19.0
Apr 13 20:14:04.779687 ignition[797]: Stage: fetch
Apr 13 20:14:04.779851 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:14:04.779860 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 20:14:04.779971 ignition[797]: parsed url from cmdline: ""
Apr 13 20:14:04.779975 ignition[797]: no config URL provided
Apr 13 20:14:04.779980 ignition[797]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:14:04.779989 ignition[797]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:14:04.780005 ignition[797]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 13 20:14:04.780161 ignition[797]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:14:04.807149 systemd-networkd[789]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 13 20:14:04.828148 systemd-networkd[789]: eth0: DHCPv4 address 204.168.245.167/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 13 20:14:04.981241 ignition[797]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 13 20:14:04.990095 ignition[797]: GET result: OK
Apr 13 20:14:04.990250 ignition[797]: parsing config with SHA512: fa3fd43969981fe84d2ca4a14eb1211d7a63521e8cdb5604dce0da8ac723f3c51e13e54bc07c15db6f371fff634d38d240397136e7eadbf54f77318047a1fd5d
Apr 13 20:14:04.996532 unknown[797]: fetched base config from "system"
Apr 13 20:14:04.996553 unknown[797]: fetched base config from "system"
Apr 13 20:14:04.997267 ignition[797]: fetch: fetch complete
Apr 13 20:14:04.996565 unknown[797]: fetched user config from "hetzner"
Apr 13 20:14:04.997278 ignition[797]: fetch: fetch passed
Apr 13 20:14:04.997354 ignition[797]: Ignition finished successfully
Apr 13 20:14:05.003975 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 20:14:05.012395 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 20:14:05.048065 ignition[806]: Ignition 2.19.0
Apr 13 20:14:05.048086 ignition[806]: Stage: kargs
Apr 13 20:14:05.048387 ignition[806]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:14:05.048410 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 20:14:05.052757 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 20:14:05.050027 ignition[806]: kargs: kargs passed
Apr 13 20:14:05.050198 ignition[806]: Ignition finished successfully
Apr 13 20:14:05.059402 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 20:14:05.097300 ignition[812]: Ignition 2.19.0
Apr 13 20:14:05.097324 ignition[812]: Stage: disks
Apr 13 20:14:05.097616 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:14:05.097639 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 20:14:05.099066 ignition[812]: disks: disks passed
Apr 13 20:14:05.099247 ignition[812]: Ignition finished successfully
Apr 13 20:14:05.101714 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 20:14:05.103856 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 20:14:05.105792 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 20:14:05.106805 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:14:05.108144 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:14:05.109380 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:14:05.115340 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 20:14:05.141273 systemd-fsck[821]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 13 20:14:05.147936 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 20:14:05.151280 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 20:14:05.253136 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 20:14:05.254379 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 20:14:05.256289 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:14:05.264189 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:14:05.267185 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 20:14:05.274384 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 13 20:14:05.279189 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 20:14:05.279869 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:14:05.281661 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 20:14:05.290274 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 20:14:05.298130 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (829)
Apr 13 20:14:05.307180 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:14:05.307208 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:14:05.307218 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:14:05.316547 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:14:05.316571 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:14:05.323088 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:14:05.343987 coreos-metadata[831]: Apr 13 20:14:05.343 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 13 20:14:05.345174 coreos-metadata[831]: Apr 13 20:14:05.344 INFO Fetch successful
Apr 13 20:14:05.345174 coreos-metadata[831]: Apr 13 20:14:05.345 INFO wrote hostname ci-4081-3-7-7-b4460b9a5e to /sysroot/etc/hostname
Apr 13 20:14:05.347434 initrd-setup-root[856]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 20:14:05.348291 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 20:14:05.353078 initrd-setup-root[864]: cut: /sysroot/etc/group: No such file or directory
Apr 13 20:14:05.357982 initrd-setup-root[871]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 20:14:05.362709 initrd-setup-root[878]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 20:14:05.444451 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 20:14:05.463204 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 20:14:05.466264 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 20:14:05.475125 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:14:05.496143 ignition[949]: INFO : Ignition 2.19.0
Apr 13 20:14:05.496143 ignition[949]: INFO : Stage: mount
Apr 13 20:14:05.498072 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:14:05.498072 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 20:14:05.498072 ignition[949]: INFO : mount: mount passed
Apr 13 20:14:05.498072 ignition[949]: INFO : Ignition finished successfully
Apr 13 20:14:05.498259 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 20:14:05.499929 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 20:14:05.504183 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 20:14:05.582482 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 20:14:05.590293 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:14:05.602041 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (961)
Apr 13 20:14:05.602071 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:14:05.605790 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:14:05.605810 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:14:05.613596 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:14:05.613619 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:14:05.615777 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:14:05.633310 ignition[977]: INFO : Ignition 2.19.0
Apr 13 20:14:05.633310 ignition[977]: INFO : Stage: files
Apr 13 20:14:05.634172 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:14:05.634172 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 20:14:05.634736 ignition[977]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 20:14:05.635455 ignition[977]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 20:14:05.635455 ignition[977]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 20:14:05.638318 ignition[977]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 20:14:05.638857 ignition[977]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 20:14:05.639557 unknown[977]: wrote ssh authorized keys file for user: core
Apr 13 20:14:05.640084 ignition[977]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 20:14:05.641582 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:14:05.642278 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 20:14:05.918361 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 20:14:06.145551 systemd-networkd[789]: eth0: Gained IPv6LL
Apr 13 20:14:06.229897 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:14:06.229897 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 20:14:06.232256 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 13 20:14:06.465754 systemd-networkd[789]: eth1: Gained IPv6LL
Apr 13 20:14:06.516520 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 20:14:06.621497 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 20:14:06.621497 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 20:14:06.623489 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 20:14:06.623489 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:14:06.623489 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:14:06.623489 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:14:06.623489 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:14:06.623489 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:14:06.623489 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:14:06.623489 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:14:06.623489 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:14:06.623489 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:14:06.623489 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:14:06.623489 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:14:06.623489 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 13 20:14:06.969173 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 13 20:14:07.224302 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:14:07.224302 ignition[977]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 13 20:14:07.227361 ignition[977]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:14:07.227361 ignition[977]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:14:07.227361 ignition[977]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 13 20:14:07.227361 ignition[977]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 13 20:14:07.227361 ignition[977]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 20:14:07.227361 ignition[977]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 20:14:07.227361 ignition[977]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 13 20:14:07.227361 ignition[977]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 20:14:07.227361 ignition[977]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 20:14:07.227361 ignition[977]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:14:07.227361 ignition[977]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:14:07.227361 ignition[977]: INFO : files: files passed
Apr 13 20:14:07.227361 ignition[977]: INFO : Ignition finished successfully
Apr 13 20:14:07.228329 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 20:14:07.238653 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 20:14:07.243209 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 20:14:07.244873 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 20:14:07.245044 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 20:14:07.255266 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:14:07.255266 initrd-setup-root-after-ignition[1007]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:14:07.257137 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:14:07.258557 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:14:07.259317 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 20:14:07.266292 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 20:14:07.286031 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 20:14:07.286139 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 20:14:07.287084 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 20:14:07.287714 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 20:14:07.288547 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 20:14:07.290213 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 20:14:07.301695 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:14:07.308238 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 20:14:07.314956 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:14:07.315433 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:14:07.315866 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 20:14:07.316697 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 20:14:07.316773 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:14:07.317798 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 20:14:07.318558 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 20:14:07.319272 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 20:14:07.319956 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:14:07.320660 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 20:14:07.321368 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 20:14:07.322047 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:14:07.322750 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 20:14:07.323460 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 20:14:07.324169 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 20:14:07.324852 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 20:14:07.324927 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:14:07.325935 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:14:07.326656 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:14:07.327336 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 20:14:07.327406 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:14:07.328081 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 20:14:07.328182 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:14:07.329114 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 20:14:07.329202 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:14:07.329837 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 20:14:07.329905 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 20:14:07.330537 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 13 20:14:07.330615 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 20:14:07.340233 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 20:14:07.343251 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 20:14:07.343801 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 20:14:07.343914 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:14:07.344581 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 20:14:07.344690 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:14:07.348007 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 20:14:07.348111 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 20:14:07.354481 ignition[1031]: INFO : Ignition 2.19.0
Apr 13 20:14:07.355783 ignition[1031]: INFO : Stage: umount
Apr 13 20:14:07.355783 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:14:07.355783 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 20:14:07.355783 ignition[1031]: INFO : umount: umount passed
Apr 13 20:14:07.355783 ignition[1031]: INFO : Ignition finished successfully
Apr 13 20:14:07.357775 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 20:14:07.357868 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 20:14:07.359801 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 20:14:07.359875 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 20:14:07.361123 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 20:14:07.361163 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 20:14:07.361506 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 20:14:07.361538 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 20:14:07.362194 systemd[1]: Stopped target network.target - Network.
Apr 13 20:14:07.362870 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 20:14:07.362911 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:14:07.363896 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 20:14:07.364263 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 20:14:07.364588 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:14:07.365135 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 20:14:07.365544 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 20:14:07.365857 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 20:14:07.365894 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:14:07.366243 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 20:14:07.366276 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:14:07.366573 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 20:14:07.366607 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 20:14:07.366912 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 20:14:07.366942 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 20:14:07.369278 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 20:14:07.371820 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 20:14:07.372253 systemd-networkd[789]: eth0: DHCPv6 lease lost
Apr 13 20:14:07.374165 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 20:14:07.374702 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 20:14:07.374788 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 20:14:07.375725 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 20:14:07.375820 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 20:14:07.376160 systemd-networkd[789]: eth1: DHCPv6 lease lost
Apr 13 20:14:07.380995 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 20:14:07.381139 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 20:14:07.382783 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 20:14:07.382902 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 20:14:07.384061 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 20:14:07.384232 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:14:07.389160 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 20:14:07.389698 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 20:14:07.389740 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:14:07.391322 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 20:14:07.391362 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:14:07.391881 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 20:14:07.391914 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:14:07.392489 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 20:14:07.392524 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:14:07.393200 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:14:07.406192 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 20:14:07.406295 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 20:14:07.409430 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 20:14:07.409567 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:14:07.410491 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 20:14:07.410556 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:14:07.411218 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 20:14:07.411250 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:14:07.411811 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 20:14:07.411847 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:14:07.412831 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 20:14:07.412867 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:14:07.413822 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:14:07.413860 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:14:07.419195 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 20:14:07.419525 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 20:14:07.419565 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:14:07.419919 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 20:14:07.419950 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:14:07.420330 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 20:14:07.420363 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:14:07.420710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:14:07.420741 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:14:07.424798 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 20:14:07.424883 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 20:14:07.426376 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 20:14:07.436209 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 20:14:07.441251 systemd[1]: Switching root.
Apr 13 20:14:07.490588 systemd-journald[189]: Journal stopped
Apr 13 20:14:08.578865 systemd-journald[189]: Received SIGTERM from PID 1 (systemd).
Apr 13 20:14:08.578953 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 20:14:08.578982 kernel: SELinux: policy capability open_perms=1
Apr 13 20:14:08.578994 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 20:14:08.579006 kernel: SELinux: policy capability always_check_network=0
Apr 13 20:14:08.579021 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 20:14:08.579038 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 20:14:08.579054 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 20:14:08.579069 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 20:14:08.579082 kernel: audit: type=1403 audit(1776111247.658:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 20:14:08.579095 systemd[1]: Successfully loaded SELinux policy in 45.386ms.
Apr 13 20:14:08.579133 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.379ms.
Apr 13 20:14:08.579147 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:14:08.579160 systemd[1]: Detected virtualization kvm.
Apr 13 20:14:08.579173 systemd[1]: Detected architecture x86-64.
Apr 13 20:14:08.579186 systemd[1]: Detected first boot.
Apr 13 20:14:08.579198 systemd[1]: Hostname set to .
Apr 13 20:14:08.579210 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 20:14:08.579223 zram_generator::config[1073]: No configuration found.
Apr 13 20:14:08.579239 systemd[1]: Populated /etc with preset unit settings.
Apr 13 20:14:08.579255 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 20:14:08.579267 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 20:14:08.579280 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:14:08.579294 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 20:14:08.579327 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 20:14:08.579339 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 20:14:08.579352 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 20:14:08.579370 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 20:14:08.579387 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 20:14:08.579403 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 20:14:08.579416 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 20:14:08.579433 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:14:08.579446 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:14:08.579458 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 20:14:08.579471 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 20:14:08.579484 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 20:14:08.579512 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:14:08.579524 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 20:14:08.579537 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:14:08.579549 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 20:14:08.579562 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 20:14:08.579575 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:14:08.579590 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 20:14:08.579604 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:14:08.579617 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:14:08.579629 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:14:08.579642 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:14:08.579655 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 20:14:08.579670 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 20:14:08.579682 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:14:08.579695 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:14:08.579708 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:14:08.579723 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 20:14:08.579736 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 20:14:08.579749 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 20:14:08.579762 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 20:14:08.579775 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:14:08.579787 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 20:14:08.579799 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 20:14:08.579811 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 20:14:08.579827 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 20:14:08.579840 systemd[1]: Reached target machines.target - Containers.
Apr 13 20:14:08.579853 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 20:14:08.579866 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:14:08.579879 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:14:08.579892 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 20:14:08.579904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:14:08.579917 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:14:08.579929 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:14:08.579947 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 20:14:08.579962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:14:08.579985 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 20:14:08.579998 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 20:14:08.580010 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 20:14:08.580023 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 20:14:08.580035 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 20:14:08.580051 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:14:08.580064 kernel: fuse: init (API version 7.39)
Apr 13 20:14:08.580076 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:14:08.580089 kernel: loop: module loaded
Apr 13 20:14:08.580126 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 20:14:08.580140 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 20:14:08.580152 kernel: ACPI: bus type drm_connector registered
Apr 13 20:14:08.580164 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:14:08.580177 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 20:14:08.580193 systemd[1]: Stopped verity-setup.service.
Apr 13 20:14:08.580206 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:14:08.580219 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 20:14:08.580231 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 20:14:08.580244 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 20:14:08.580259 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 20:14:08.580294 systemd-journald[1156]: Collecting audit messages is disabled.
Apr 13 20:14:08.580320 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 20:14:08.580333 systemd-journald[1156]: Journal started
Apr 13 20:14:08.580356 systemd-journald[1156]: Runtime Journal (/run/log/journal/dc3f9f246e9844d08902f3ddfd5b5cee) is 8.0M, max 76.3M, 68.3M free.
Apr 13 20:14:08.236666 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 20:14:08.263330 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 13 20:14:08.263893 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 20:14:08.583120 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:14:08.584243 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 20:14:08.585035 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 20:14:08.585827 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:14:08.586595 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 20:14:08.586861 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 20:14:08.587637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:14:08.587861 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:14:08.588685 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:14:08.588897 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:14:08.589648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:14:08.589862 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:14:08.590762 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 20:14:08.591069 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 20:14:08.592170 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:14:08.592422 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:14:08.593331 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:14:08.594012 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 20:14:08.594666 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 20:14:08.608864 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 20:14:08.616314 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 20:14:08.622072 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 20:14:08.622572 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 20:14:08.622643 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:14:08.623934 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 20:14:08.628253 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 20:14:08.631238 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 20:14:08.631721 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:14:08.636446 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 20:14:08.638240 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 20:14:08.638617 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:14:08.644257 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 20:14:08.644824 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:14:08.646707 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:14:08.649833 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 20:14:08.652681 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:14:08.656545 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 20:14:08.656993 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 20:14:08.658822 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 20:14:08.671891 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 20:14:08.672571 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 20:14:08.674244 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 20:14:08.698134 kernel: loop0: detected capacity change from 0 to 142488
Apr 13 20:14:08.698232 systemd-journald[1156]: Time spent on flushing to /var/log/journal/dc3f9f246e9844d08902f3ddfd5b5cee is 45.990ms for 1188 entries.
Apr 13 20:14:08.698232 systemd-journald[1156]: System Journal (/var/log/journal/dc3f9f246e9844d08902f3ddfd5b5cee) is 8.0M, max 584.8M, 576.8M free.
Apr 13 20:14:08.774175 systemd-journald[1156]: Received client request to flush runtime journal.
Apr 13 20:14:08.774216 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 20:14:08.774227 kernel: loop1: detected capacity change from 0 to 219192
Apr 13 20:14:08.705365 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 20:14:08.707698 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 20:14:08.730923 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:14:08.756248 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Apr 13 20:14:08.756259 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Apr 13 20:14:08.762075 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:14:08.775769 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 20:14:08.777925 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 20:14:08.801525 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:14:08.805304 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 20:14:08.826756 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 13 20:14:08.833122 kernel: loop2: detected capacity change from 0 to 8
Apr 13 20:14:08.833749 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 20:14:08.840313 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:14:08.855730 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Apr 13 20:14:08.856015 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Apr 13 20:14:08.862388 kernel: loop3: detected capacity change from 0 to 140768
Apr 13 20:14:08.861704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:14:08.905138 kernel: loop4: detected capacity change from 0 to 142488
Apr 13 20:14:08.923127 kernel: loop5: detected capacity change from 0 to 219192
Apr 13 20:14:08.942196 kernel: loop6: detected capacity change from 0 to 8
Apr 13 20:14:08.947190 kernel: loop7: detected capacity change from 0 to 140768
Apr 13 20:14:08.964580 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 13 20:14:08.966553 (sd-merge)[1222]: Merged extensions into '/usr'.
Apr 13 20:14:08.973289 systemd[1]: Reloading requested from client PID 1194 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 20:14:08.973363 systemd[1]: Reloading...
Apr 13 20:14:09.021129 zram_generator::config[1245]: No configuration found.
Apr 13 20:14:09.185164 ldconfig[1189]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 20:14:09.197798 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:14:09.235681 systemd[1]: Reloading finished in 261 ms.
Apr 13 20:14:09.276851 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 20:14:09.277617 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 20:14:09.278455 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 20:14:09.294447 systemd[1]: Starting ensure-sysext.service...
Apr 13 20:14:09.296274 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:14:09.301063 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:14:09.305202 systemd[1]: Reloading requested from client PID 1292 ('systemctl') (unit ensure-sysext.service)...
Apr 13 20:14:09.305214 systemd[1]: Reloading...
Apr 13 20:14:09.330088 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 20:14:09.332635 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 20:14:09.333520 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 20:14:09.335425 systemd-tmpfiles[1293]: ACLs are not supported, ignoring.
Apr 13 20:14:09.335524 systemd-tmpfiles[1293]: ACLs are not supported, ignoring.
Apr 13 20:14:09.336770 systemd-udevd[1294]: Using default interface naming scheme 'v255'.
Apr 13 20:14:09.340294 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:14:09.341204 systemd-tmpfiles[1293]: Skipping /boot
Apr 13 20:14:09.360947 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:14:09.361039 systemd-tmpfiles[1293]: Skipping /boot
Apr 13 20:14:09.383124 zram_generator::config[1323]: No configuration found.
Apr 13 20:14:09.528123 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 13 20:14:09.551880 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:14:09.586192 kernel: ACPI: button: Power Button [PWRF]
Apr 13 20:14:09.596841 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 20:14:09.597520 systemd[1]: Reloading finished in 291 ms.
Apr 13 20:14:09.617126 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 13 20:14:09.618979 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:14:09.622660 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 13 20:14:09.622868 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 13 20:14:09.623011 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 13 20:14:09.625632 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 13 20:14:09.634477 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 20:14:09.628001 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:14:09.638303 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 13 20:14:09.645805 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:14:09.651136 kernel: EDAC MC: Ver: 3.0.0
Apr 13 20:14:09.653945 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:14:09.662166 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 20:14:09.662616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:14:09.664168 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:14:09.668878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:14:09.671186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:14:09.672259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:14:09.679303 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 20:14:09.683275 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:14:09.690280 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:14:09.692734 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 20:14:09.693554 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:14:09.695313 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:14:09.696395 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:14:09.737198 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1331)
Apr 13 20:14:09.739302 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:14:09.740455 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:14:09.742949 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:14:09.743092 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:14:09.780815 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 20:14:09.791902 systemd[1]: Finished ensure-sysext.service.
Apr 13 20:14:09.800463 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 20:14:09.804151 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Apr 13 20:14:09.802859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:14:09.803015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:14:09.806077 kernel: Console: switching to colour dummy device 80x25
Apr 13 20:14:09.809200 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Apr 13 20:14:09.809787 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:14:09.810670 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 13 20:14:09.810702 kernel: [drm] features: -context_init
Apr 13 20:14:09.812984 augenrules[1436]: No rules
Apr 13 20:14:09.813561 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:14:09.815248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:14:09.819216 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:14:09.819374 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:14:09.820279 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 20:14:09.823114 kernel: [drm] number of scanouts: 1
Apr 13 20:14:09.824251 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 13 20:14:09.825376 kernel: [drm] number of cap sets: 0
Apr 13 20:14:09.835166 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 20:14:09.838194 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 13 20:14:09.836652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:14:09.839197 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:14:09.839794 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:14:09.840184 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 20:14:09.840478 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 20:14:09.840829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:14:09.840961 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:14:09.847314 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 13 20:14:09.847346 kernel: Console: switching to colour frame buffer device 160x50
Apr 13 20:14:09.854119 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 13 20:14:09.865499 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:14:09.865672 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:14:09.866180 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:14:09.866392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:14:09.867841 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 20:14:09.873822 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:14:09.874364 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:14:09.876665 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:14:09.876855 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:14:09.886276 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 20:14:09.886353 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 20:14:09.889780 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:14:09.889965 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:14:09.898230 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:14:09.901673 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 20:14:09.906673 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 20:14:09.929403 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 20:14:09.937236 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 20:14:09.962329 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:14:09.993264 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 20:14:09.993551 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:14:10.000286 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 20:14:10.006387 systemd-networkd[1409]: lo: Link UP
Apr 13 20:14:10.006397 systemd-networkd[1409]: lo: Gained carrier
Apr 13 20:14:10.010943 systemd-networkd[1409]: Enumeration completed
Apr 13 20:14:10.011040 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:14:10.012846 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:14:10.012854 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:14:10.013807 systemd-networkd[1409]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:14:10.013811 systemd-networkd[1409]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:14:10.015814 systemd-networkd[1409]: eth0: Link UP
Apr 13 20:14:10.015823 systemd-networkd[1409]: eth0: Gained carrier
Apr 13 20:14:10.015833 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:14:10.020334 systemd-networkd[1409]: eth1: Link UP
Apr 13 20:14:10.020345 systemd-networkd[1409]: eth1: Gained carrier
Apr 13 20:14:10.020356 systemd-networkd[1409]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:14:10.021237 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 20:14:10.022330 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 13 20:14:10.022427 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 20:14:10.025722 systemd-resolved[1410]: Positive Trust Anchors:
Apr 13 20:14:10.025734 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:14:10.028414 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:14:10.025756 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:14:10.029318 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:14:10.034946 systemd-resolved[1410]: Using system hostname 'ci-4081-3-7-7-b4460b9a5e'.
Apr 13 20:14:10.036752 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:14:10.036894 systemd[1]: Reached target network.target - Network.
Apr 13 20:14:10.036954 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:14:10.037028 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:14:10.037558 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 20:14:10.037929 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 20:14:10.041290 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 20:14:10.042435 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 20:14:10.042887 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 20:14:10.044388 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 20:14:10.044416 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:14:10.045199 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:14:10.046569 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 20:14:10.049151 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 20:14:10.063053 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 20:14:10.065194 systemd-networkd[1409]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 13 20:14:10.065221 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 20:14:10.066741 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 20:14:10.067792 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:14:10.068203 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection.
Apr 13 20:14:10.069831 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:14:10.070615 systemd-networkd[1409]: eth0: DHCPv4 address 204.168.245.167/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 13 20:14:10.071064 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:14:10.073514 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:14:10.084216 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 20:14:10.086771 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 20:14:10.099254 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 20:14:10.107687 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 20:14:10.112669 coreos-metadata[1487]: Apr 13 20:14:10.112 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 13 20:14:10.114302 coreos-metadata[1487]: Apr 13 20:14:10.114 INFO Fetch successful
Apr 13 20:14:10.114302 coreos-metadata[1487]: Apr 13 20:14:10.114 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 13 20:14:10.114330 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 20:14:10.115149 coreos-metadata[1487]: Apr 13 20:14:10.114 INFO Fetch successful
Apr 13 20:14:10.115711 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 20:14:10.124847 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 20:14:10.130216 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 20:14:10.137366 jq[1491]: false
Apr 13 20:14:10.135280 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 13 20:14:10.148242 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 20:14:10.152263 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 20:14:10.165324 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 20:14:10.167077 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 20:14:10.167983 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 20:14:10.176238 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 20:14:10.182205 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 20:14:10.186744 dbus-daemon[1488]: [system] SELinux support is enabled
Apr 13 20:14:10.189335 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 20:14:10.196449 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 20:14:10.203259 extend-filesystems[1492]: Found loop4
Apr 13 20:14:10.203259 extend-filesystems[1492]: Found loop5
Apr 13 20:14:10.203259 extend-filesystems[1492]: Found loop6
Apr 13 20:14:10.203259 extend-filesystems[1492]: Found loop7
Apr 13 20:14:10.203259 extend-filesystems[1492]: Found sda
Apr 13 20:14:10.203259 extend-filesystems[1492]: Found sda1
Apr 13 20:14:10.203259 extend-filesystems[1492]: Found sda2
Apr 13 20:14:10.203259 extend-filesystems[1492]: Found sda3
Apr 13 20:14:10.203259 extend-filesystems[1492]: Found usr
Apr 13 20:14:10.203259 extend-filesystems[1492]: Found sda4
Apr 13 20:14:10.203259 extend-filesystems[1492]: Found sda6
Apr 13 20:14:10.203259 extend-filesystems[1492]: Found sda7
Apr 13 20:14:10.203259 extend-filesystems[1492]: Found sda9
Apr 13 20:14:10.203259 extend-filesystems[1492]: Checking size of /dev/sda9
Apr 13 20:14:10.196625 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 20:14:10.260184 update_engine[1507]: I20260413 20:14:10.210448 1507 main.cc:92] Flatcar Update Engine starting
Apr 13 20:14:10.260184 update_engine[1507]: I20260413 20:14:10.212304 1507 update_check_scheduler.cc:74] Next update check in 9m38s
Apr 13 20:14:10.196917 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 20:14:10.260439 jq[1508]: true
Apr 13 20:14:10.197079 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 20:14:10.205493 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 20:14:10.264436 extend-filesystems[1492]: Resized partition /dev/sda9
Apr 13 20:14:10.205661 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 20:14:10.270386 extend-filesystems[1533]: resize2fs 1.47.1 (20-May-2024)
Apr 13 20:14:10.773358 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks
Apr 13 20:14:10.236096 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 20:14:10.774739 tar[1511]: linux-amd64/LICENSE
Apr 13 20:14:10.774739 tar[1511]: linux-amd64/helm
Apr 13 20:14:10.241630 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 20:14:10.241650 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 20:14:10.241990 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 20:14:10.242003 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 20:14:10.242438 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 20:14:10.250615 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 20:14:10.255779 systemd-logind[1506]: New seat seat0.
Apr 13 20:14:10.264647 systemd-logind[1506]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 13 20:14:10.264677 systemd-logind[1506]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 20:14:10.265512 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 20:14:10.759632 systemd-timesyncd[1445]: Contacted time server 141.144.246.224:123 (0.flatcar.pool.ntp.org).
Apr 13 20:14:10.759680 systemd-timesyncd[1445]: Initial clock synchronization to Mon 2026-04-13 20:14:10.759514 UTC.
Apr 13 20:14:10.759723 systemd-resolved[1410]: Clock change detected. Flushing caches.
Apr 13 20:14:10.780279 (ntainerd)[1528]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 20:14:10.790427 jq[1526]: true
Apr 13 20:14:10.853913 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 20:14:10.860556 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 20:14:10.921798 sshd_keygen[1518]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 20:14:10.934498 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1333)
Apr 13 20:14:10.948990 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 20:14:10.957852 bash[1557]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:14:10.959591 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 20:14:10.966946 systemd[1]: Started sshd@0-204.168.245.167:22-20.229.252.112:59748.service - OpenSSH per-connection server daemon (20.229.252.112:59748).
Apr 13 20:14:10.970533 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 20:14:10.991617 systemd[1]: Starting sshkeys.service...
Apr 13 20:14:11.005773 locksmithd[1525]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 20:14:11.017716 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 20:14:11.018460 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 20:14:11.029648 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 20:14:11.036955 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 20:14:11.047038 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 20:14:11.047983 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 20:14:11.056100 containerd[1528]: time="2026-04-13T20:14:11.056041372Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 20:14:11.059225 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 20:14:11.068737 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 13 20:14:11.069306 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 20:14:11.085002 containerd[1528]: time="2026-04-13T20:14:11.084961236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:14:11.085742 coreos-metadata[1588]: Apr 13 20:14:11.085 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 13 20:14:11.088111 coreos-metadata[1588]: Apr 13 20:14:11.087 INFO Fetch successful
Apr 13 20:14:11.089744 containerd[1528]: time="2026-04-13T20:14:11.089698300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:14:11.089744 containerd[1528]: time="2026-04-13T20:14:11.089741690Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 20:14:11.089788 containerd[1528]: time="2026-04-13T20:14:11.089760770Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 20:14:11.092355 containerd[1528]: time="2026-04-13T20:14:11.090001200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 20:14:11.092355 containerd[1528]: time="2026-04-13T20:14:11.090016770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 20:14:11.092355 containerd[1528]: time="2026-04-13T20:14:11.092249762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:14:11.092355 containerd[1528]: time="2026-04-13T20:14:11.092265692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:14:11.092507 containerd[1528]: time="2026-04-13T20:14:11.092468992Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:14:11.092507 containerd[1528]: time="2026-04-13T20:14:11.092491922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 20:14:11.092507 containerd[1528]: time="2026-04-13T20:14:11.092502382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:14:11.092547 containerd[1528]: time="2026-04-13T20:14:11.092510132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 20:14:11.092629 containerd[1528]: time="2026-04-13T20:14:11.092597593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:14:11.092844 containerd[1528]: time="2026-04-13T20:14:11.092822313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:14:11.092939 containerd[1528]: time="2026-04-13T20:14:11.092917283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:14:11.092939 containerd[1528]: time="2026-04-13T20:14:11.092931763Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 20:14:11.093019 containerd[1528]: time="2026-04-13T20:14:11.093001983Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 20:14:11.093134 containerd[1528]: time="2026-04-13T20:14:11.093043513Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 20:14:11.098211 unknown[1588]: wrote ssh authorized keys file for user: core
Apr 13 20:14:11.124273 containerd[1528]: time="2026-04-13T20:14:11.124129989Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 20:14:11.124273 containerd[1528]: time="2026-04-13T20:14:11.124182659Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 20:14:11.124273 containerd[1528]: time="2026-04-13T20:14:11.124196059Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 20:14:11.124273 containerd[1528]: time="2026-04-13T20:14:11.124222249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 20:14:11.124273 containerd[1528]: time="2026-04-13T20:14:11.124234719Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 20:14:11.124437 containerd[1528]: time="2026-04-13T20:14:11.124370179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 20:14:11.124717 containerd[1528]: time="2026-04-13T20:14:11.124542789Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 20:14:11.124717 containerd[1528]: time="2026-04-13T20:14:11.124631089Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 20:14:11.124717 containerd[1528]: time="2026-04-13T20:14:11.124641509Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 20:14:11.124717 containerd[1528]: time="2026-04-13T20:14:11.124650709Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 20:14:11.124717 containerd[1528]: time="2026-04-13T20:14:11.124662499Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 20:14:11.124717 containerd[1528]: time="2026-04-13T20:14:11.124672099Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 20:14:11.124717 containerd[1528]: time="2026-04-13T20:14:11.124681229Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 20:14:11.124717 containerd[1528]: time="2026-04-13T20:14:11.124690859Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 20:14:11.124717 containerd[1528]: time="2026-04-13T20:14:11.124701929Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 20:14:11.124717 containerd[1528]: time="2026-04-13T20:14:11.124712039Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 20:14:11.124855 containerd[1528]: time="2026-04-13T20:14:11.124721659Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 20:14:11.124855 containerd[1528]: time="2026-04-13T20:14:11.124730439Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 20:14:11.124855 containerd[1528]: time="2026-04-13T20:14:11.124745279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.124855 containerd[1528]: time="2026-04-13T20:14:11.124755509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.124855 containerd[1528]: time="2026-04-13T20:14:11.124764699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.124855 containerd[1528]: time="2026-04-13T20:14:11.124775149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.124855 containerd[1528]: time="2026-04-13T20:14:11.124794909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.124855 containerd[1528]: time="2026-04-13T20:14:11.124804209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.124855 containerd[1528]: time="2026-04-13T20:14:11.124812449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.124855 containerd[1528]: time="2026-04-13T20:14:11.124821379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.124855 containerd[1528]: time="2026-04-13T20:14:11.124830819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.124855 containerd[1528]: time="2026-04-13T20:14:11.124847799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.124855 containerd[1528]: time="2026-04-13T20:14:11.124856059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.125010 containerd[1528]: time="2026-04-13T20:14:11.124864059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.125010 containerd[1528]: time="2026-04-13T20:14:11.124872359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.125010 containerd[1528]: time="2026-04-13T20:14:11.124882789Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 20:14:11.125010 containerd[1528]: time="2026-04-13T20:14:11.124897379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.125010 containerd[1528]: time="2026-04-13T20:14:11.124905589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.125010 containerd[1528]: time="2026-04-13T20:14:11.124913939Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 20:14:11.125010 containerd[1528]: time="2026-04-13T20:14:11.124948839Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 20:14:11.125010 containerd[1528]: time="2026-04-13T20:14:11.124960979Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 20:14:11.125010 containerd[1528]: time="2026-04-13T20:14:11.124968609Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 20:14:11.125010 containerd[1528]: time="2026-04-13T20:14:11.124977050Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 20:14:11.125010 containerd[1528]: time="2026-04-13T20:14:11.124983640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.125010 containerd[1528]: time="2026-04-13T20:14:11.124995520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 20:14:11.125010 containerd[1528]: time="2026-04-13T20:14:11.125007460Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 20:14:11.125168 containerd[1528]: time="2026-04-13T20:14:11.125017940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 20:14:11.125942 containerd[1528]: time="2026-04-13T20:14:11.125193980Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 20:14:11.125942 containerd[1528]: time="2026-04-13T20:14:11.125234850Z" level=info msg="Connect containerd service"
Apr 13 20:14:11.125942 containerd[1528]: time="2026-04-13T20:14:11.125264870Z" level=info msg="using legacy CRI server"
Apr 13 20:14:11.125942 containerd[1528]: time="2026-04-13T20:14:11.125269640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 20:14:11.125942 containerd[1528]: time="2026-04-13T20:14:11.125349040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 20:14:11.131199 containerd[1528]: time="2026-04-13T20:14:11.125948940Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 20:14:11.131199 containerd[1528]: time="2026-04-13T20:14:11.126097170Z" level=info msg="Start subscribing containerd event"
Apr 13 20:14:11.131199 containerd[1528]: time="2026-04-13T20:14:11.126139070Z" level=info msg="Start recovering state"
Apr 13 20:14:11.131199 containerd[1528]: time="2026-04-13T20:14:11.126195041Z" level=info msg="Start event monitor"
Apr 13 20:14:11.131199 containerd[1528]: time="2026-04-13T20:14:11.126204261Z" level=info msg="Start snapshots syncer"
Apr 13 20:14:11.131199 containerd[1528]: time="2026-04-13T20:14:11.126210781Z" level=info msg="Start cni network conf syncer for default"
Apr 13 20:14:11.131199 containerd[1528]: time="2026-04-13T20:14:11.126216661Z" level=info msg="Start streaming server"
Apr 13 20:14:11.131199 containerd[1528]: time="2026-04-13T20:14:11.126439831Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 13 20:14:11.131199 containerd[1528]: time="2026-04-13T20:14:11.126482211Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 13 20:14:11.131199 containerd[1528]: time="2026-04-13T20:14:11.127559062Z" level=info msg="containerd successfully booted in 0.073168s"
Apr 13 20:14:11.126602 systemd[1]: Started containerd.service - containerd container runtime.
Apr 13 20:14:11.138436 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Apr 13 20:14:11.161700 extend-filesystems[1533]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 13 20:14:11.161700 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 13 20:14:11.161700 extend-filesystems[1533]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Apr 13 20:14:11.162871 extend-filesystems[1492]: Resized filesystem in /dev/sda9
Apr 13 20:14:11.162871 extend-filesystems[1492]: Found sr0
Apr 13 20:14:11.165220 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 20:14:11.165445 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 20:14:11.168139 update-ssh-keys[1596]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:14:11.169212 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 13 20:14:11.171873 systemd[1]: Finished sshkeys.service.
Apr 13 20:14:11.218392 sshd[1578]: Accepted publickey for core from 20.229.252.112 port 59748 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:14:11.220625 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:14:11.228639 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 13 20:14:11.237092 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 13 20:14:11.240059 systemd-logind[1506]: New session 1 of user core.
Apr 13 20:14:11.250851 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 13 20:14:11.259671 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 13 20:14:11.264809 (systemd)[1604]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 13 20:14:11.345118 systemd[1604]: Queued start job for default target default.target.
Apr 13 20:14:11.350655 systemd[1604]: Created slice app.slice - User Application Slice.
Apr 13 20:14:11.350679 systemd[1604]: Reached target paths.target - Paths.
Apr 13 20:14:11.350691 systemd[1604]: Reached target timers.target - Timers.
Apr 13 20:14:11.355464 systemd[1604]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 13 20:14:11.362108 systemd[1604]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 13 20:14:11.362157 systemd[1604]: Reached target sockets.target - Sockets.
Apr 13 20:14:11.362172 systemd[1604]: Reached target basic.target - Basic System.
Apr 13 20:14:11.362204 systemd[1604]: Reached target default.target - Main User Target.
Apr 13 20:14:11.362234 systemd[1604]: Startup finished in 90ms.
Apr 13 20:14:11.362352 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 13 20:14:11.370634 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 13 20:14:11.435109 tar[1511]: linux-amd64/README.md
Apr 13 20:14:11.444437 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 20:14:11.551820 systemd[1]: Started sshd@1-204.168.245.167:22-20.229.252.112:59760.service - OpenSSH per-connection server daemon (20.229.252.112:59760).
Apr 13 20:14:11.753655 systemd-networkd[1409]: eth1: Gained IPv6LL
Apr 13 20:14:11.756388 sshd[1618]: Accepted publickey for core from 20.229.252.112 port 59760 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:14:11.758983 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 20:14:11.759597 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:14:11.765745 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 20:14:11.778810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:14:11.785576 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 20:14:11.809226 systemd-logind[1506]: New session 2 of user core.
Apr 13 20:14:11.814082 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 13 20:14:11.835775 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 20:14:11.969619 sshd[1618]: pam_unix(sshd:session): session closed for user core
Apr 13 20:14:11.971921 systemd-logind[1506]: Session 2 logged out. Waiting for processes to exit.
Apr 13 20:14:11.972638 systemd[1]: sshd@1-204.168.245.167:22-20.229.252.112:59760.service: Deactivated successfully.
Apr 13 20:14:11.975527 systemd[1]: session-2.scope: Deactivated successfully.
Apr 13 20:14:11.976595 systemd-logind[1506]: Removed session 2.
Apr 13 20:14:12.009598 systemd[1]: Started sshd@2-204.168.245.167:22-20.229.252.112:59764.service - OpenSSH per-connection server daemon (20.229.252.112:59764).
Apr 13 20:14:12.212445 sshd[1636]: Accepted publickey for core from 20.229.252.112 port 59764 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:14:12.213997 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:14:12.218002 systemd-logind[1506]: New session 3 of user core.
Apr 13 20:14:12.222747 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 20:14:12.386608 sshd[1636]: pam_unix(sshd:session): session closed for user core
Apr 13 20:14:12.392211 systemd[1]: sshd@2-204.168.245.167:22-20.229.252.112:59764.service: Deactivated successfully.
Apr 13 20:14:12.392622 systemd-logind[1506]: Session 3 logged out. Waiting for processes to exit.
Apr 13 20:14:12.394193 systemd-networkd[1409]: eth0: Gained IPv6LL
Apr 13 20:14:12.394985 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 20:14:12.398761 systemd-logind[1506]: Removed session 3.
Apr 13 20:14:12.599772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:14:12.600648 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 13 20:14:12.605501 systemd[1]: Startup finished in 1.412s (kernel) + 5.942s (initrd) + 4.503s (userspace) = 11.859s.
Apr 13 20:14:12.606170 (kubelet)[1647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:14:13.004707 kubelet[1647]: E0413 20:14:13.004639 1647 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:14:13.007698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:14:13.007887 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:14:22.432965 systemd[1]: Started sshd@3-204.168.245.167:22-20.229.252.112:35824.service - OpenSSH per-connection server daemon (20.229.252.112:35824).
Apr 13 20:14:22.655398 sshd[1659]: Accepted publickey for core from 20.229.252.112 port 35824 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:14:22.658113 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:14:22.664834 systemd-logind[1506]: New session 4 of user core.
Apr 13 20:14:22.675999 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 20:14:22.828888 sshd[1659]: pam_unix(sshd:session): session closed for user core
Apr 13 20:14:22.835764 systemd[1]: sshd@3-204.168.245.167:22-20.229.252.112:35824.service: Deactivated successfully.
Apr 13 20:14:22.840177 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 20:14:22.843165 systemd-logind[1506]: Session 4 logged out. Waiting for processes to exit.
Apr 13 20:14:22.845627 systemd-logind[1506]: Removed session 4.
Apr 13 20:14:22.879938 systemd[1]: Started sshd@4-204.168.245.167:22-20.229.252.112:35828.service - OpenSSH per-connection server daemon (20.229.252.112:35828).
Apr 13 20:14:23.062984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:14:23.070728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:14:23.107037 sshd[1666]: Accepted publickey for core from 20.229.252.112 port 35828 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:14:23.106909 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:14:23.117737 systemd-logind[1506]: New session 5 of user core.
Apr 13 20:14:23.126718 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 20:14:23.254112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:14:23.264763 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:14:23.274876 sshd[1666]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:23.279305 systemd[1]: sshd@4-204.168.245.167:22-20.229.252.112:35828.service: Deactivated successfully. Apr 13 20:14:23.281275 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 20:14:23.283941 systemd-logind[1506]: Session 5 logged out. Waiting for processes to exit. Apr 13 20:14:23.285607 systemd-logind[1506]: Removed session 5. Apr 13 20:14:23.301886 kubelet[1678]: E0413 20:14:23.301833 1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:14:23.310532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:14:23.310684 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:14:23.312884 systemd[1]: Started sshd@5-204.168.245.167:22-20.229.252.112:35832.service - OpenSSH per-connection server daemon (20.229.252.112:35832). Apr 13 20:14:23.515472 sshd[1688]: Accepted publickey for core from 20.229.252.112 port 35832 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:23.517269 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:23.525876 systemd-logind[1506]: New session 6 of user core. Apr 13 20:14:23.541666 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 13 20:14:23.687295 sshd[1688]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:23.690268 systemd[1]: sshd@5-204.168.245.167:22-20.229.252.112:35832.service: Deactivated successfully. 
Apr 13 20:14:23.692211 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 20:14:23.693822 systemd-logind[1506]: Session 6 logged out. Waiting for processes to exit. Apr 13 20:14:23.695299 systemd-logind[1506]: Removed session 6. Apr 13 20:14:23.726696 systemd[1]: Started sshd@6-204.168.245.167:22-20.229.252.112:35842.service - OpenSSH per-connection server daemon (20.229.252.112:35842). Apr 13 20:14:23.938054 sshd[1695]: Accepted publickey for core from 20.229.252.112 port 35842 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:23.939373 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:23.949018 systemd-logind[1506]: New session 7 of user core. Apr 13 20:14:23.956636 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 20:14:24.090674 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 20:14:24.091357 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:14:24.107389 sudo[1698]: pam_unix(sudo:session): session closed for user root Apr 13 20:14:24.139633 sshd[1695]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:24.146061 systemd-logind[1506]: Session 7 logged out. Waiting for processes to exit. Apr 13 20:14:24.147556 systemd[1]: sshd@6-204.168.245.167:22-20.229.252.112:35842.service: Deactivated successfully. Apr 13 20:14:24.150826 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 20:14:24.152271 systemd-logind[1506]: Removed session 7. Apr 13 20:14:24.180110 systemd[1]: Started sshd@7-204.168.245.167:22-20.229.252.112:35852.service - OpenSSH per-connection server daemon (20.229.252.112:35852). 
Apr 13 20:14:24.411526 sshd[1703]: Accepted publickey for core from 20.229.252.112 port 35852 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:24.414520 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:24.420442 systemd-logind[1506]: New session 8 of user core. Apr 13 20:14:24.426625 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 20:14:24.547427 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 20:14:24.547791 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:14:24.551351 sudo[1707]: pam_unix(sudo:session): session closed for user root Apr 13 20:14:24.557692 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 20:14:24.557974 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:14:24.575769 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 20:14:24.599994 auditctl[1710]: No rules Apr 13 20:14:24.601198 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 20:14:24.601550 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 20:14:24.606903 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 20:14:24.646017 augenrules[1728]: No rules Apr 13 20:14:24.648137 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 20:14:24.649266 sudo[1706]: pam_unix(sudo:session): session closed for user root Apr 13 20:14:24.681490 sshd[1703]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:24.687171 systemd[1]: sshd@7-204.168.245.167:22-20.229.252.112:35852.service: Deactivated successfully. Apr 13 20:14:24.689596 systemd[1]: session-8.scope: Deactivated successfully. 
Apr 13 20:14:24.690177 systemd-logind[1506]: Session 8 logged out. Waiting for processes to exit. Apr 13 20:14:24.691361 systemd-logind[1506]: Removed session 8. Apr 13 20:14:24.719607 systemd[1]: Started sshd@8-204.168.245.167:22-20.229.252.112:35866.service - OpenSSH per-connection server daemon (20.229.252.112:35866). Apr 13 20:14:24.928254 sshd[1736]: Accepted publickey for core from 20.229.252.112 port 35866 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:24.931225 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:24.938041 systemd-logind[1506]: New session 9 of user core. Apr 13 20:14:24.943613 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 20:14:25.070874 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 20:14:25.071603 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:14:25.375610 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 13 20:14:25.378122 (dockerd)[1754]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 20:14:25.591269 dockerd[1754]: time="2026-04-13T20:14:25.590752910Z" level=info msg="Starting up" Apr 13 20:14:25.653854 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3244526742-merged.mount: Deactivated successfully. Apr 13 20:14:25.670623 systemd[1]: var-lib-docker-metacopy\x2dcheck893505306-merged.mount: Deactivated successfully. Apr 13 20:14:25.685075 dockerd[1754]: time="2026-04-13T20:14:25.685028849Z" level=info msg="Loading containers: start." Apr 13 20:14:25.776467 kernel: Initializing XFRM netlink socket Apr 13 20:14:25.856066 systemd-networkd[1409]: docker0: Link UP Apr 13 20:14:25.876850 dockerd[1754]: time="2026-04-13T20:14:25.876812639Z" level=info msg="Loading containers: done." 
Apr 13 20:14:25.893140 dockerd[1754]: time="2026-04-13T20:14:25.893099132Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 20:14:25.893307 dockerd[1754]: time="2026-04-13T20:14:25.893192362Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 20:14:25.893377 dockerd[1754]: time="2026-04-13T20:14:25.893314953Z" level=info msg="Daemon has completed initialization" Apr 13 20:14:25.927754 dockerd[1754]: time="2026-04-13T20:14:25.927548381Z" level=info msg="API listen on /run/docker.sock" Apr 13 20:14:25.927713 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 13 20:14:26.422098 containerd[1528]: time="2026-04-13T20:14:26.421994433Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\"" Apr 13 20:14:26.652277 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3721820869-merged.mount: Deactivated successfully. Apr 13 20:14:27.041282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3469205171.mount: Deactivated successfully. 
Apr 13 20:14:28.264921 containerd[1528]: time="2026-04-13T20:14:28.264242888Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.6: active requests=0, bytes read=26947842" Apr 13 20:14:28.264921 containerd[1528]: time="2026-04-13T20:14:28.264668018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:28.266222 containerd[1528]: time="2026-04-13T20:14:28.266197519Z" level=info msg="ImageCreate event name:\"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:28.267176 containerd[1528]: time="2026-04-13T20:14:28.266932960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:28.267896 containerd[1528]: time="2026-04-13T20:14:28.267758371Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.6\" with image id \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\", size \"26944341\" in 1.845688598s" Apr 13 20:14:28.267896 containerd[1528]: time="2026-04-13T20:14:28.267782611Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\" returns image reference \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\"" Apr 13 20:14:28.268308 containerd[1528]: time="2026-04-13T20:14:28.268279591Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\"" Apr 13 20:14:29.458005 containerd[1528]: time="2026-04-13T20:14:29.457943612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:29.459386 containerd[1528]: time="2026-04-13T20:14:29.459220763Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.6: active requests=0, bytes read=21165834" Apr 13 20:14:29.460580 containerd[1528]: time="2026-04-13T20:14:29.460286134Z" level=info msg="ImageCreate event name:\"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:29.462753 containerd[1528]: time="2026-04-13T20:14:29.462724176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:29.463768 containerd[1528]: time="2026-04-13T20:14:29.463736577Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.6\" with image id \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\", size \"22695997\" in 1.195428846s" Apr 13 20:14:29.463882 containerd[1528]: time="2026-04-13T20:14:29.463865827Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\" returns image reference \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\"" Apr 13 20:14:29.464570 containerd[1528]: time="2026-04-13T20:14:29.464554038Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\"" Apr 13 20:14:30.518515 containerd[1528]: time="2026-04-13T20:14:30.518457906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:30.519535 containerd[1528]: time="2026-04-13T20:14:30.519366696Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.6: active requests=0, bytes read=15729869" Apr 13 20:14:30.520876 containerd[1528]: time="2026-04-13T20:14:30.520101777Z" level=info msg="ImageCreate event name:\"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:30.522108 containerd[1528]: time="2026-04-13T20:14:30.522078999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:30.523403 containerd[1528]: time="2026-04-13T20:14:30.522808939Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.6\" with image id \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\", size \"17260050\" in 1.058167301s" Apr 13 20:14:30.523403 containerd[1528]: time="2026-04-13T20:14:30.522833369Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\" returns image reference \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\"" Apr 13 20:14:30.523639 containerd[1528]: time="2026-04-13T20:14:30.523626170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\"" Apr 13 20:14:31.506448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281597494.mount: Deactivated successfully. 
Apr 13 20:14:31.696841 containerd[1528]: time="2026-04-13T20:14:31.696786747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:31.697894 containerd[1528]: time="2026-04-13T20:14:31.697862478Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=25861802" Apr 13 20:14:31.698832 containerd[1528]: time="2026-04-13T20:14:31.698794219Z" level=info msg="ImageCreate event name:\"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:31.700395 containerd[1528]: time="2026-04-13T20:14:31.700378570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:31.700932 containerd[1528]: time="2026-04-13T20:14:31.700781461Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"25860793\" in 1.177077861s" Apr 13 20:14:31.700932 containerd[1528]: time="2026-04-13T20:14:31.700806521Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\"" Apr 13 20:14:31.701256 containerd[1528]: time="2026-04-13T20:14:31.701230681Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 13 20:14:32.236972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4120421929.mount: Deactivated successfully. 
Apr 13 20:14:33.203935 containerd[1528]: time="2026-04-13T20:14:33.203869653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:33.204976 containerd[1528]: time="2026-04-13T20:14:33.204756353Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388101" Apr 13 20:14:33.206696 containerd[1528]: time="2026-04-13T20:14:33.205733404Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:33.207869 containerd[1528]: time="2026-04-13T20:14:33.207839506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:33.208907 containerd[1528]: time="2026-04-13T20:14:33.208878057Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.507625266s" Apr 13 20:14:33.208945 containerd[1528]: time="2026-04-13T20:14:33.208909467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 13 20:14:33.209405 containerd[1528]: time="2026-04-13T20:14:33.209381607Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 13 20:14:33.465100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 20:14:33.472203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 20:14:33.602647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:14:33.606278 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:14:33.634474 kubelet[2026]: E0413 20:14:33.634395 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:14:33.637433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:14:33.637628 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:14:33.676230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3236940223.mount: Deactivated successfully. Apr 13 20:14:33.681859 containerd[1528]: time="2026-04-13T20:14:33.681823841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:33.682673 containerd[1528]: time="2026-04-13T20:14:33.682503131Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321240" Apr 13 20:14:33.684425 containerd[1528]: time="2026-04-13T20:14:33.683453782Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:33.685406 containerd[1528]: time="2026-04-13T20:14:33.685370754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:33.686326 containerd[1528]: time="2026-04-13T20:14:33.685962344Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 476.555947ms" Apr 13 20:14:33.686326 containerd[1528]: time="2026-04-13T20:14:33.685990514Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 13 20:14:33.686570 containerd[1528]: time="2026-04-13T20:14:33.686544845Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 13 20:14:34.210406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197236026.mount: Deactivated successfully. Apr 13 20:14:34.885437 containerd[1528]: time="2026-04-13T20:14:34.885379314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:34.886294 containerd[1528]: time="2026-04-13T20:14:34.886158694Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874325" Apr 13 20:14:34.887245 containerd[1528]: time="2026-04-13T20:14:34.886992635Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:34.889035 containerd[1528]: time="2026-04-13T20:14:34.889010197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:34.889922 containerd[1528]: time="2026-04-13T20:14:34.889895437Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag 
\"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.203328102s" Apr 13 20:14:34.889963 containerd[1528]: time="2026-04-13T20:14:34.889921347Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 13 20:14:37.560384 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:14:37.565565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:14:37.590356 systemd[1]: Reloading requested from client PID 2125 ('systemctl') (unit session-9.scope)... Apr 13 20:14:37.590503 systemd[1]: Reloading... Apr 13 20:14:37.694436 zram_generator::config[2165]: No configuration found. Apr 13 20:14:37.780653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:14:37.841040 systemd[1]: Reloading finished in 250 ms. Apr 13 20:14:37.894526 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 20:14:37.894615 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 20:14:37.894844 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:14:37.904847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:14:38.042356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:14:38.051717 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:14:38.079456 kubelet[2219]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 13 20:14:38.079456 kubelet[2219]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:14:38.079456 kubelet[2219]: I0413 20:14:38.079077 2219 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:14:38.466881 kubelet[2219]: I0413 20:14:38.466842 2219 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 20:14:38.466881 kubelet[2219]: I0413 20:14:38.466864 2219 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:14:38.467486 kubelet[2219]: I0413 20:14:38.467469 2219 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 20:14:38.467486 kubelet[2219]: I0413 20:14:38.467483 2219 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 20:14:38.467681 kubelet[2219]: I0413 20:14:38.467660 2219 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:14:38.472778 kubelet[2219]: E0413 20:14:38.472738 2219 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://204.168.245.167:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 204.168.245.167:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 20:14:38.473072 kubelet[2219]: I0413 20:14:38.473050 2219 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:14:38.476868 kubelet[2219]: E0413 20:14:38.476828 2219 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:14:38.476926 kubelet[2219]: I0413 20:14:38.476872 2219 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 20:14:38.480972 kubelet[2219]: I0413 20:14:38.480953 2219 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 20:14:38.482684 kubelet[2219]: I0413 20:14:38.482462 2219 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:14:38.482732 kubelet[2219]: I0413 20:14:38.482579 2219 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-7-b4460b9a5e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:14:38.482732 kubelet[2219]: I0413 20:14:38.482714 2219 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
20:14:38.482732 kubelet[2219]: I0413 20:14:38.482724 2219 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 20:14:38.482872 kubelet[2219]: I0413 20:14:38.482816 2219 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 20:14:38.488507 kubelet[2219]: I0413 20:14:38.488487 2219 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:14:38.488656 kubelet[2219]: I0413 20:14:38.488635 2219 kubelet.go:475] "Attempting to sync node with API server" Apr 13 20:14:38.488656 kubelet[2219]: I0413 20:14:38.488651 2219 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:14:38.488714 kubelet[2219]: I0413 20:14:38.488677 2219 kubelet.go:387] "Adding apiserver pod source" Apr 13 20:14:38.488714 kubelet[2219]: I0413 20:14:38.488692 2219 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:14:38.491448 kubelet[2219]: I0413 20:14:38.490535 2219 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:14:38.491448 kubelet[2219]: I0413 20:14:38.490968 2219 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:14:38.491448 kubelet[2219]: I0413 20:14:38.490993 2219 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 20:14:38.491448 kubelet[2219]: W0413 20:14:38.491043 2219 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 13 20:14:38.494599 kubelet[2219]: I0413 20:14:38.494582 2219 server.go:1262] "Started kubelet" Apr 13 20:14:38.494754 kubelet[2219]: E0413 20:14:38.494731 2219 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://204.168.245.167:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 204.168.245.167:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:14:38.494838 kubelet[2219]: E0413 20:14:38.494817 2219 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://204.168.245.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-7-b4460b9a5e&limit=500&resourceVersion=0\": dial tcp 204.168.245.167:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:14:38.495368 kubelet[2219]: I0413 20:14:38.495323 2219 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:14:38.497039 kubelet[2219]: I0413 20:14:38.496869 2219 server.go:310] "Adding debug handlers to kubelet server" Apr 13 20:14:38.498511 kubelet[2219]: I0413 20:14:38.497926 2219 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:14:38.498511 kubelet[2219]: I0413 20:14:38.497978 2219 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 20:14:38.498511 kubelet[2219]: I0413 20:14:38.498229 2219 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:14:38.499504 kubelet[2219]: E0413 20:14:38.498397 2219 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://204.168.245.167:6443/api/v1/namespaces/default/events\": dial tcp 204.168.245.167:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-7-b4460b9a5e.18a603cc2c3badb6 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-7-b4460b9a5e,UID:ci-4081-3-7-7-b4460b9a5e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-7-b4460b9a5e,},FirstTimestamp:2026-04-13 20:14:38.49455967 +0000 UTC m=+0.440100097,LastTimestamp:2026-04-13 20:14:38.49455967 +0000 UTC m=+0.440100097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-7-b4460b9a5e,}" Apr 13 20:14:38.501298 kubelet[2219]: I0413 20:14:38.501268 2219 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:14:38.502532 kubelet[2219]: I0413 20:14:38.501941 2219 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:14:38.505570 kubelet[2219]: E0413 20:14:38.505072 2219 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" Apr 13 20:14:38.505570 kubelet[2219]: I0413 20:14:38.505101 2219 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 20:14:38.505570 kubelet[2219]: I0413 20:14:38.505218 2219 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 20:14:38.505570 kubelet[2219]: I0413 20:14:38.505256 2219 reconciler.go:29] "Reconciler: start to sync state" Apr 13 20:14:38.505964 kubelet[2219]: E0413 20:14:38.505944 2219 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://204.168.245.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 204.168.245.167:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:14:38.506045 kubelet[2219]: E0413 20:14:38.506026 
2219 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:14:38.506230 kubelet[2219]: I0413 20:14:38.506212 2219 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:14:38.506300 kubelet[2219]: I0413 20:14:38.506283 2219 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:14:38.507302 kubelet[2219]: E0413 20:14:38.507274 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.245.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-7-b4460b9a5e?timeout=10s\": dial tcp 204.168.245.167:6443: connect: connection refused" interval="200ms" Apr 13 20:14:38.507457 kubelet[2219]: I0413 20:14:38.507441 2219 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:14:38.523850 kubelet[2219]: I0413 20:14:38.523805 2219 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:14:38.524162 kubelet[2219]: I0413 20:14:38.523920 2219 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:14:38.524162 kubelet[2219]: I0413 20:14:38.523935 2219 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:14:38.530171 kubelet[2219]: I0413 20:14:38.529706 2219 policy_none.go:49] "None policy: Start" Apr 13 20:14:38.530171 kubelet[2219]: I0413 20:14:38.529722 2219 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 20:14:38.530171 kubelet[2219]: I0413 20:14:38.529734 2219 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 20:14:38.531839 kubelet[2219]: I0413 20:14:38.531821 2219 policy_none.go:47] "Start" Apr 13 20:14:38.532131 kubelet[2219]: I0413 20:14:38.532104 2219 kubelet_network_linux.go:54] "Initialized 
iptables rules." protocol="IPv4" Apr 13 20:14:38.533978 kubelet[2219]: I0413 20:14:38.533964 2219 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 13 20:14:38.534155 kubelet[2219]: I0413 20:14:38.534144 2219 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 20:14:38.536967 kubelet[2219]: I0413 20:14:38.536954 2219 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 20:14:38.537066 kubelet[2219]: E0413 20:14:38.537052 2219 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:14:38.538734 kubelet[2219]: E0413 20:14:38.538715 2219 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://204.168.245.167:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 204.168.245.167:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:14:38.541820 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 20:14:38.549875 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 13 20:14:38.552685 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 13 20:14:38.563295 kubelet[2219]: E0413 20:14:38.563274 2219 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:14:38.563769 kubelet[2219]: I0413 20:14:38.563757 2219 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:14:38.563855 kubelet[2219]: I0413 20:14:38.563829 2219 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:14:38.564876 kubelet[2219]: E0413 20:14:38.564855 2219 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:14:38.564924 kubelet[2219]: E0413 20:14:38.564887 2219 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-7-b4460b9a5e\" not found" Apr 13 20:14:38.565190 kubelet[2219]: I0413 20:14:38.565179 2219 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:14:38.667026 kubelet[2219]: I0413 20:14:38.666990 2219 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.670441 kubelet[2219]: E0413 20:14:38.668559 2219 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.245.167:6443/api/v1/nodes\": dial tcp 204.168.245.167:6443: connect: connection refused" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.684426 systemd[1]: Created slice kubepods-burstable-pod3b7af90a507eaa382ebee1ce1297d124.slice - libcontainer container kubepods-burstable-pod3b7af90a507eaa382ebee1ce1297d124.slice. 
Apr 13 20:14:38.696510 kubelet[2219]: E0413 20:14:38.695823 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.703311 systemd[1]: Created slice kubepods-burstable-pod2a563e95397c8c6c45b35e4634d8c4f8.slice - libcontainer container kubepods-burstable-pod2a563e95397c8c6c45b35e4634d8c4f8.slice. Apr 13 20:14:38.705170 kubelet[2219]: E0413 20:14:38.704967 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.707705 kubelet[2219]: E0413 20:14:38.707677 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.245.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-7-b4460b9a5e?timeout=10s\": dial tcp 204.168.245.167:6443: connect: connection refused" interval="400ms" Apr 13 20:14:38.712057 systemd[1]: Created slice kubepods-burstable-pod99c8e45870b2c0c2c21f1c48b6cf9f79.slice - libcontainer container kubepods-burstable-pod99c8e45870b2c0c2c21f1c48b6cf9f79.slice. 
Apr 13 20:14:38.713604 kubelet[2219]: E0413 20:14:38.713568 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.807382 kubelet[2219]: I0413 20:14:38.807128 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2a563e95397c8c6c45b35e4634d8c4f8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-7-b4460b9a5e\" (UID: \"2a563e95397c8c6c45b35e4634d8c4f8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.807382 kubelet[2219]: I0413 20:14:38.807199 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a563e95397c8c6c45b35e4634d8c4f8-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-7-b4460b9a5e\" (UID: \"2a563e95397c8c6c45b35e4634d8c4f8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.807382 kubelet[2219]: I0413 20:14:38.807246 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a563e95397c8c6c45b35e4634d8c4f8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-7-b4460b9a5e\" (UID: \"2a563e95397c8c6c45b35e4634d8c4f8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.807382 kubelet[2219]: I0413 20:14:38.807280 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b7af90a507eaa382ebee1ce1297d124-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-7-b4460b9a5e\" (UID: \"3b7af90a507eaa382ebee1ce1297d124\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-b4460b9a5e" Apr 13 
20:14:38.807382 kubelet[2219]: I0413 20:14:38.807315 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b7af90a507eaa382ebee1ce1297d124-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-7-b4460b9a5e\" (UID: \"3b7af90a507eaa382ebee1ce1297d124\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.807794 kubelet[2219]: I0413 20:14:38.807352 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a563e95397c8c6c45b35e4634d8c4f8-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-7-b4460b9a5e\" (UID: \"2a563e95397c8c6c45b35e4634d8c4f8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.807794 kubelet[2219]: I0413 20:14:38.807404 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99c8e45870b2c0c2c21f1c48b6cf9f79-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-7-b4460b9a5e\" (UID: \"99c8e45870b2c0c2c21f1c48b6cf9f79\") " pod="kube-system/kube-scheduler-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.807794 kubelet[2219]: I0413 20:14:38.807496 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b7af90a507eaa382ebee1ce1297d124-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-7-b4460b9a5e\" (UID: \"3b7af90a507eaa382ebee1ce1297d124\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.807794 kubelet[2219]: I0413 20:14:38.807531 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a563e95397c8c6c45b35e4634d8c4f8-ca-certs\") pod 
\"kube-controller-manager-ci-4081-3-7-7-b4460b9a5e\" (UID: \"2a563e95397c8c6c45b35e4634d8c4f8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.871766 kubelet[2219]: I0413 20:14:38.871661 2219 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:38.872277 kubelet[2219]: E0413 20:14:38.872097 2219 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.245.167:6443/api/v1/nodes\": dial tcp 204.168.245.167:6443: connect: connection refused" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:39.000228 containerd[1528]: time="2026-04-13T20:14:39.000167761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-7-b4460b9a5e,Uid:3b7af90a507eaa382ebee1ce1297d124,Namespace:kube-system,Attempt:0,}" Apr 13 20:14:39.007747 containerd[1528]: time="2026-04-13T20:14:39.007709578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-7-b4460b9a5e,Uid:2a563e95397c8c6c45b35e4634d8c4f8,Namespace:kube-system,Attempt:0,}" Apr 13 20:14:39.015872 containerd[1528]: time="2026-04-13T20:14:39.015827614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-7-b4460b9a5e,Uid:99c8e45870b2c0c2c21f1c48b6cf9f79,Namespace:kube-system,Attempt:0,}" Apr 13 20:14:39.109235 kubelet[2219]: E0413 20:14:39.109105 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.245.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-7-b4460b9a5e?timeout=10s\": dial tcp 204.168.245.167:6443: connect: connection refused" interval="800ms" Apr 13 20:14:39.274608 kubelet[2219]: I0413 20:14:39.274541 2219 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:39.274860 kubelet[2219]: E0413 20:14:39.274809 2219 kubelet_node_status.go:107] "Unable to register node with API server" 
err="Post \"https://204.168.245.167:6443/api/v1/nodes\": dial tcp 204.168.245.167:6443: connect: connection refused" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:39.404519 kubelet[2219]: E0413 20:14:39.404292 2219 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://204.168.245.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-7-b4460b9a5e&limit=500&resourceVersion=0\": dial tcp 204.168.245.167:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:14:39.482194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3872590150.mount: Deactivated successfully. Apr 13 20:14:39.491475 containerd[1528]: time="2026-04-13T20:14:39.490790330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:14:39.491556 containerd[1528]: time="2026-04-13T20:14:39.491483621Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:14:39.492333 containerd[1528]: time="2026-04-13T20:14:39.492296321Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:14:39.492947 containerd[1528]: time="2026-04-13T20:14:39.492879112Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:14:39.493808 containerd[1528]: time="2026-04-13T20:14:39.493781463Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:14:39.494722 containerd[1528]: time="2026-04-13T20:14:39.494691643Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active 
requests=0, bytes read=312078" Apr 13 20:14:39.495393 containerd[1528]: time="2026-04-13T20:14:39.495334424Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:14:39.498322 containerd[1528]: time="2026-04-13T20:14:39.498218676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:14:39.499473 containerd[1528]: time="2026-04-13T20:14:39.499445357Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 491.684159ms" Apr 13 20:14:39.500232 containerd[1528]: time="2026-04-13T20:14:39.500186128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.932177ms" Apr 13 20:14:39.501528 containerd[1528]: time="2026-04-13T20:14:39.501489959Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 485.597944ms" Apr 13 20:14:39.525843 kubelet[2219]: E0413 20:14:39.525786 2219 
reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://204.168.245.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 204.168.245.167:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:14:39.594463 containerd[1528]: time="2026-04-13T20:14:39.594381586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:14:39.594592 containerd[1528]: time="2026-04-13T20:14:39.594489407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:14:39.594592 containerd[1528]: time="2026-04-13T20:14:39.594499467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:14:39.594650 containerd[1528]: time="2026-04-13T20:14:39.594604867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:14:39.598641 containerd[1528]: time="2026-04-13T20:14:39.596914799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:14:39.598641 containerd[1528]: time="2026-04-13T20:14:39.596947489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:14:39.598641 containerd[1528]: time="2026-04-13T20:14:39.596973799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:14:39.598641 containerd[1528]: time="2026-04-13T20:14:39.597061409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:14:39.599919 containerd[1528]: time="2026-04-13T20:14:39.599544971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:14:39.599919 containerd[1528]: time="2026-04-13T20:14:39.599573471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:14:39.599919 containerd[1528]: time="2026-04-13T20:14:39.599581081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:14:39.599919 containerd[1528]: time="2026-04-13T20:14:39.599648941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:14:39.624615 systemd[1]: Started cri-containerd-fca87786f96cf82bae9857835ec0f7f4cadd6bf65a54dc25328d5e69b828ce61.scope - libcontainer container fca87786f96cf82bae9857835ec0f7f4cadd6bf65a54dc25328d5e69b828ce61. Apr 13 20:14:39.630189 systemd[1]: Started cri-containerd-9452a70f791d6b49be911c37e11235b208644f2e8bdebe033e84c8cb7a6395ec.scope - libcontainer container 9452a70f791d6b49be911c37e11235b208644f2e8bdebe033e84c8cb7a6395ec. Apr 13 20:14:39.632609 systemd[1]: Started cri-containerd-b9c5f372212cae42776bddef054e9f891bf7c6ff84619e5de83be2d8d919f4f1.scope - libcontainer container b9c5f372212cae42776bddef054e9f891bf7c6ff84619e5de83be2d8d919f4f1. 
Apr 13 20:14:39.679663 containerd[1528]: time="2026-04-13T20:14:39.679441487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-7-b4460b9a5e,Uid:3b7af90a507eaa382ebee1ce1297d124,Namespace:kube-system,Attempt:0,} returns sandbox id \"fca87786f96cf82bae9857835ec0f7f4cadd6bf65a54dc25328d5e69b828ce61\"" Apr 13 20:14:39.682020 containerd[1528]: time="2026-04-13T20:14:39.681949939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-7-b4460b9a5e,Uid:2a563e95397c8c6c45b35e4634d8c4f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9452a70f791d6b49be911c37e11235b208644f2e8bdebe033e84c8cb7a6395ec\"" Apr 13 20:14:39.691457 containerd[1528]: time="2026-04-13T20:14:39.690598117Z" level=info msg="CreateContainer within sandbox \"fca87786f96cf82bae9857835ec0f7f4cadd6bf65a54dc25328d5e69b828ce61\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:14:39.694202 containerd[1528]: time="2026-04-13T20:14:39.694153500Z" level=info msg="CreateContainer within sandbox \"9452a70f791d6b49be911c37e11235b208644f2e8bdebe033e84c8cb7a6395ec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:14:39.699214 containerd[1528]: time="2026-04-13T20:14:39.699170734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-7-b4460b9a5e,Uid:99c8e45870b2c0c2c21f1c48b6cf9f79,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9c5f372212cae42776bddef054e9f891bf7c6ff84619e5de83be2d8d919f4f1\"" Apr 13 20:14:39.706249 containerd[1528]: time="2026-04-13T20:14:39.706206470Z" level=info msg="CreateContainer within sandbox \"b9c5f372212cae42776bddef054e9f891bf7c6ff84619e5de83be2d8d919f4f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:14:39.721823 containerd[1528]: time="2026-04-13T20:14:39.721697033Z" level=info msg="CreateContainer within sandbox 
\"9452a70f791d6b49be911c37e11235b208644f2e8bdebe033e84c8cb7a6395ec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1054acea08f7ce2e572533e2cfff8a708d00fc0063f5a966800ac3affd9308ca\"" Apr 13 20:14:39.722433 containerd[1528]: time="2026-04-13T20:14:39.722224093Z" level=info msg="StartContainer for \"1054acea08f7ce2e572533e2cfff8a708d00fc0063f5a966800ac3affd9308ca\"" Apr 13 20:14:39.725911 containerd[1528]: time="2026-04-13T20:14:39.725872096Z" level=info msg="CreateContainer within sandbox \"b9c5f372212cae42776bddef054e9f891bf7c6ff84619e5de83be2d8d919f4f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4652f87b7e1828ee21bd56fb6603141628a77c3c361fec7062046e9f56dc1d71\"" Apr 13 20:14:39.726285 containerd[1528]: time="2026-04-13T20:14:39.726270196Z" level=info msg="CreateContainer within sandbox \"fca87786f96cf82bae9857835ec0f7f4cadd6bf65a54dc25328d5e69b828ce61\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"92624317c6593781d1cc6b4410c1589414aeae96edb6e512ea7b0877621af3f0\"" Apr 13 20:14:39.726864 containerd[1528]: time="2026-04-13T20:14:39.726851407Z" level=info msg="StartContainer for \"4652f87b7e1828ee21bd56fb6603141628a77c3c361fec7062046e9f56dc1d71\"" Apr 13 20:14:39.727497 containerd[1528]: time="2026-04-13T20:14:39.726972307Z" level=info msg="StartContainer for \"92624317c6593781d1cc6b4410c1589414aeae96edb6e512ea7b0877621af3f0\"" Apr 13 20:14:39.751654 systemd[1]: Started cri-containerd-1054acea08f7ce2e572533e2cfff8a708d00fc0063f5a966800ac3affd9308ca.scope - libcontainer container 1054acea08f7ce2e572533e2cfff8a708d00fc0063f5a966800ac3affd9308ca. Apr 13 20:14:39.761194 systemd[1]: Started cri-containerd-92624317c6593781d1cc6b4410c1589414aeae96edb6e512ea7b0877621af3f0.scope - libcontainer container 92624317c6593781d1cc6b4410c1589414aeae96edb6e512ea7b0877621af3f0. 
Apr 13 20:14:39.768025 systemd[1]: Started cri-containerd-4652f87b7e1828ee21bd56fb6603141628a77c3c361fec7062046e9f56dc1d71.scope - libcontainer container 4652f87b7e1828ee21bd56fb6603141628a77c3c361fec7062046e9f56dc1d71. Apr 13 20:14:39.812279 containerd[1528]: time="2026-04-13T20:14:39.811975598Z" level=info msg="StartContainer for \"1054acea08f7ce2e572533e2cfff8a708d00fc0063f5a966800ac3affd9308ca\" returns successfully" Apr 13 20:14:39.813519 kubelet[2219]: E0413 20:14:39.813497 2219 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://204.168.245.167:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 204.168.245.167:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:14:39.824985 containerd[1528]: time="2026-04-13T20:14:39.824949839Z" level=info msg="StartContainer for \"92624317c6593781d1cc6b4410c1589414aeae96edb6e512ea7b0877621af3f0\" returns successfully" Apr 13 20:14:39.833610 containerd[1528]: time="2026-04-13T20:14:39.833575716Z" level=info msg="StartContainer for \"4652f87b7e1828ee21bd56fb6603141628a77c3c361fec7062046e9f56dc1d71\" returns successfully" Apr 13 20:14:40.077239 kubelet[2219]: I0413 20:14:40.077186 2219 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:40.550923 kubelet[2219]: E0413 20:14:40.550895 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:40.556740 kubelet[2219]: E0413 20:14:40.556587 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:40.558972 kubelet[2219]: E0413 20:14:40.558961 2219 kubelet.go:3216] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:40.928185 kubelet[2219]: E0413 20:14:40.928026 2219 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-7-b4460b9a5e\" not found" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:41.088180 kubelet[2219]: I0413 20:14:41.086649 2219 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:41.088180 kubelet[2219]: E0413 20:14:41.086676 2219 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081-3-7-7-b4460b9a5e\": node \"ci-4081-3-7-7-b4460b9a5e\" not found" Apr 13 20:14:41.109484 kubelet[2219]: E0413 20:14:41.109455 2219 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" Apr 13 20:14:41.210194 kubelet[2219]: E0413 20:14:41.210099 2219 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" Apr 13 20:14:41.310644 kubelet[2219]: E0413 20:14:41.310590 2219 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" Apr 13 20:14:41.412434 kubelet[2219]: E0413 20:14:41.411436 2219 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" Apr 13 20:14:41.512451 kubelet[2219]: E0413 20:14:41.512366 2219 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" Apr 13 20:14:41.560875 kubelet[2219]: E0413 20:14:41.560686 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:41.561685 kubelet[2219]: E0413 
20:14:41.561648 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:41.612705 kubelet[2219]: E0413 20:14:41.612656 2219 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" Apr 13 20:14:41.713488 kubelet[2219]: E0413 20:14:41.713449 2219 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" Apr 13 20:14:41.814544 kubelet[2219]: E0413 20:14:41.814342 2219 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" Apr 13 20:14:41.915577 kubelet[2219]: E0413 20:14:41.915477 2219 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" Apr 13 20:14:42.108225 kubelet[2219]: I0413 20:14:42.107979 2219 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:42.116001 kubelet[2219]: I0413 20:14:42.115910 2219 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:42.120873 kubelet[2219]: I0413 20:14:42.120693 2219 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:42.491443 kubelet[2219]: I0413 20:14:42.491305 2219 apiserver.go:52] "Watching apiserver" Apr 13 20:14:42.505662 kubelet[2219]: I0413 20:14:42.505626 2219 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:14:43.037480 systemd[1]: Reloading requested from client PID 2505 ('systemctl') (unit session-9.scope)... Apr 13 20:14:43.037507 systemd[1]: Reloading... 
Apr 13 20:14:43.143449 zram_generator::config[2545]: No configuration found. Apr 13 20:14:43.247875 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:14:43.318504 systemd[1]: Reloading finished in 280 ms. Apr 13 20:14:43.365129 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:14:43.391072 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:14:43.391309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:14:43.396687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:14:43.533694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:14:43.534708 (kubelet)[2596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:14:43.572095 kubelet[2596]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 20:14:43.572095 kubelet[2596]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 20:14:43.572095 kubelet[2596]: I0413 20:14:43.571644 2596 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:14:43.579450 kubelet[2596]: I0413 20:14:43.578698 2596 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 20:14:43.579450 kubelet[2596]: I0413 20:14:43.578724 2596 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:14:43.579450 kubelet[2596]: I0413 20:14:43.578755 2596 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 20:14:43.579450 kubelet[2596]: I0413 20:14:43.578768 2596 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 20:14:43.579450 kubelet[2596]: I0413 20:14:43.578954 2596 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:14:43.580504 kubelet[2596]: I0413 20:14:43.580395 2596 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 20:14:43.582748 kubelet[2596]: I0413 20:14:43.582729 2596 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:14:43.587842 kubelet[2596]: E0413 20:14:43.587821 2596 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:14:43.588060 kubelet[2596]: I0413 20:14:43.588050 2596 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 20:14:43.594253 kubelet[2596]: I0413 20:14:43.594230 2596 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 20:14:43.594802 kubelet[2596]: I0413 20:14:43.594778 2596 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:14:43.595192 kubelet[2596]: I0413 20:14:43.594891 2596 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-7-b4460b9a5e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:14:43.595342 kubelet[2596]: I0413 20:14:43.595327 2596 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
20:14:43.595482 kubelet[2596]: I0413 20:14:43.595452 2596 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 20:14:43.595614 kubelet[2596]: I0413 20:14:43.595551 2596 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 20:14:43.597063 kubelet[2596]: I0413 20:14:43.596548 2596 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:14:43.597063 kubelet[2596]: I0413 20:14:43.596723 2596 kubelet.go:475] "Attempting to sync node with API server" Apr 13 20:14:43.597063 kubelet[2596]: I0413 20:14:43.596736 2596 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:14:43.597063 kubelet[2596]: I0413 20:14:43.596763 2596 kubelet.go:387] "Adding apiserver pod source" Apr 13 20:14:43.597063 kubelet[2596]: I0413 20:14:43.596779 2596 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:14:43.600872 kubelet[2596]: I0413 20:14:43.600835 2596 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:14:43.601288 kubelet[2596]: I0413 20:14:43.601266 2596 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:14:43.601342 kubelet[2596]: I0413 20:14:43.601296 2596 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 20:14:43.609778 kubelet[2596]: I0413 20:14:43.609750 2596 server.go:1262] "Started kubelet" Apr 13 20:14:43.611239 kubelet[2596]: I0413 20:14:43.611204 2596 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:14:43.611850 kubelet[2596]: I0413 20:14:43.611818 2596 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:14:43.612551 kubelet[2596]: I0413 20:14:43.612517 2596 ratelimit.go:56] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:14:43.612585 kubelet[2596]: I0413 20:14:43.612567 2596 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 20:14:43.616850 kubelet[2596]: I0413 20:14:43.616834 2596 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:14:43.617135 kubelet[2596]: I0413 20:14:43.617126 2596 server.go:310] "Adding debug handlers to kubelet server" Apr 13 20:14:43.619237 kubelet[2596]: I0413 20:14:43.619185 2596 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:14:43.622194 kubelet[2596]: I0413 20:14:43.621181 2596 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 20:14:43.622194 kubelet[2596]: E0413 20:14:43.621303 2596 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-b4460b9a5e\" not found" Apr 13 20:14:43.622194 kubelet[2596]: I0413 20:14:43.622170 2596 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 20:14:43.622318 kubelet[2596]: I0413 20:14:43.622256 2596 reconciler.go:29] "Reconciler: start to sync state" Apr 13 20:14:43.625529 kubelet[2596]: I0413 20:14:43.625040 2596 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:14:43.625529 kubelet[2596]: I0413 20:14:43.625145 2596 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:14:43.626450 kubelet[2596]: E0413 20:14:43.626407 2596 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:14:43.626665 kubelet[2596]: I0413 20:14:43.626637 2596 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:14:43.639155 kubelet[2596]: I0413 20:14:43.639120 2596 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 20:14:43.641181 kubelet[2596]: I0413 20:14:43.641159 2596 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 13 20:14:43.641181 kubelet[2596]: I0413 20:14:43.641178 2596 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 20:14:43.641289 kubelet[2596]: I0413 20:14:43.641196 2596 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 20:14:43.641289 kubelet[2596]: E0413 20:14:43.641238 2596 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:14:43.670369 kubelet[2596]: I0413 20:14:43.670345 2596 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:14:43.670536 kubelet[2596]: I0413 20:14:43.670526 2596 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:14:43.670574 kubelet[2596]: I0413 20:14:43.670569 2596 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:14:43.670717 kubelet[2596]: I0413 20:14:43.670708 2596 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 20:14:43.670764 kubelet[2596]: I0413 20:14:43.670751 2596 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 20:14:43.670794 kubelet[2596]: I0413 20:14:43.670788 2596 policy_none.go:49] "None policy: Start" Apr 13 20:14:43.670834 kubelet[2596]: I0413 20:14:43.670828 2596 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 20:14:43.670866 kubelet[2596]: I0413 20:14:43.670860 2596 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state 
checkpoint" Apr 13 20:14:43.670960 kubelet[2596]: I0413 20:14:43.670953 2596 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 13 20:14:43.670989 kubelet[2596]: I0413 20:14:43.670984 2596 policy_none.go:47] "Start" Apr 13 20:14:43.674994 kubelet[2596]: E0413 20:14:43.674979 2596 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:14:43.675509 kubelet[2596]: I0413 20:14:43.675498 2596 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:14:43.675578 kubelet[2596]: I0413 20:14:43.675558 2596 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:14:43.675869 kubelet[2596]: I0413 20:14:43.675858 2596 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:14:43.677267 kubelet[2596]: E0413 20:14:43.677254 2596 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 20:14:43.743929 kubelet[2596]: I0413 20:14:43.743310 2596 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.743929 kubelet[2596]: I0413 20:14:43.743382 2596 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.743929 kubelet[2596]: I0413 20:14:43.743823 2596 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.750830 kubelet[2596]: E0413 20:14:43.750781 2596 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-7-b4460b9a5e\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.752580 kubelet[2596]: E0413 20:14:43.752505 2596 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-7-b4460b9a5e\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.752811 kubelet[2596]: E0413 20:14:43.752792 2596 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-7-b4460b9a5e\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.779872 kubelet[2596]: I0413 20:14:43.779789 2596 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.789556 kubelet[2596]: I0413 20:14:43.789498 2596 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.789726 kubelet[2596]: I0413 20:14:43.789698 2596 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.924149 kubelet[2596]: I0413 20:14:43.924028 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/2a563e95397c8c6c45b35e4634d8c4f8-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-7-b4460b9a5e\" (UID: \"2a563e95397c8c6c45b35e4634d8c4f8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.924149 kubelet[2596]: I0413 20:14:43.924085 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99c8e45870b2c0c2c21f1c48b6cf9f79-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-7-b4460b9a5e\" (UID: \"99c8e45870b2c0c2c21f1c48b6cf9f79\") " pod="kube-system/kube-scheduler-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.924149 kubelet[2596]: I0413 20:14:43.924111 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b7af90a507eaa382ebee1ce1297d124-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-7-b4460b9a5e\" (UID: \"3b7af90a507eaa382ebee1ce1297d124\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.924149 kubelet[2596]: I0413 20:14:43.924143 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a563e95397c8c6c45b35e4634d8c4f8-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-7-b4460b9a5e\" (UID: \"2a563e95397c8c6c45b35e4634d8c4f8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.924294 kubelet[2596]: I0413 20:14:43.924183 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a563e95397c8c6c45b35e4634d8c4f8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-7-b4460b9a5e\" (UID: \"2a563e95397c8c6c45b35e4634d8c4f8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.924294 kubelet[2596]: I0413 
20:14:43.924212 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b7af90a507eaa382ebee1ce1297d124-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-7-b4460b9a5e\" (UID: \"3b7af90a507eaa382ebee1ce1297d124\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.924294 kubelet[2596]: I0413 20:14:43.924235 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b7af90a507eaa382ebee1ce1297d124-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-7-b4460b9a5e\" (UID: \"3b7af90a507eaa382ebee1ce1297d124\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.924294 kubelet[2596]: I0413 20:14:43.924258 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a563e95397c8c6c45b35e4634d8c4f8-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-7-b4460b9a5e\" (UID: \"2a563e95397c8c6c45b35e4634d8c4f8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:43.924367 kubelet[2596]: I0413 20:14:43.924297 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2a563e95397c8c6c45b35e4634d8c4f8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-7-b4460b9a5e\" (UID: \"2a563e95397c8c6c45b35e4634d8c4f8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:44.034584 sudo[2634]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 13 20:14:44.034947 sudo[2634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 13 20:14:44.432100 sudo[2634]: pam_unix(sudo:session): session closed for user 
root Apr 13 20:14:44.600321 kubelet[2596]: I0413 20:14:44.598827 2596 apiserver.go:52] "Watching apiserver" Apr 13 20:14:44.622495 kubelet[2596]: I0413 20:14:44.622356 2596 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:14:44.659592 kubelet[2596]: I0413 20:14:44.659555 2596 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:44.667511 kubelet[2596]: E0413 20:14:44.667248 2596 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-7-b4460b9a5e\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-7-b4460b9a5e" Apr 13 20:14:44.697324 kubelet[2596]: I0413 20:14:44.697193 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-7-b4460b9a5e" podStartSLOduration=2.697178205 podStartE2EDuration="2.697178205s" podCreationTimestamp="2026-04-13 20:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:14:44.688227329 +0000 UTC m=+1.149696754" watchObservedRunningTime="2026-04-13 20:14:44.697178205 +0000 UTC m=+1.158647620" Apr 13 20:14:44.698119 kubelet[2596]: I0413 20:14:44.697909 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-b4460b9a5e" podStartSLOduration=2.697901225 podStartE2EDuration="2.697901225s" podCreationTimestamp="2026-04-13 20:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:14:44.697799365 +0000 UTC m=+1.159268780" watchObservedRunningTime="2026-04-13 20:14:44.697901225 +0000 UTC m=+1.159370630" Apr 13 20:14:44.721548 kubelet[2596]: I0413 20:14:44.721436 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4081-3-7-7-b4460b9a5e" podStartSLOduration=2.7214244069999998 podStartE2EDuration="2.721424407s" podCreationTimestamp="2026-04-13 20:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:14:44.707659331 +0000 UTC m=+1.169128746" watchObservedRunningTime="2026-04-13 20:14:44.721424407 +0000 UTC m=+1.182893812" Apr 13 20:14:45.707244 sudo[1739]: pam_unix(sudo:session): session closed for user root Apr 13 20:14:45.738034 sshd[1736]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:45.743547 systemd[1]: sshd@8-204.168.245.167:22-20.229.252.112:35866.service: Deactivated successfully. Apr 13 20:14:45.747036 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 20:14:45.747529 systemd[1]: session-9.scope: Consumed 4.673s CPU time, 160.5M memory peak, 0B memory swap peak. Apr 13 20:14:45.749427 systemd-logind[1506]: Session 9 logged out. Waiting for processes to exit. Apr 13 20:14:45.750788 systemd-logind[1506]: Removed session 9. Apr 13 20:14:50.233034 kubelet[2596]: I0413 20:14:50.232911 2596 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 20:14:50.234207 kubelet[2596]: I0413 20:14:50.233869 2596 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 20:14:50.234263 containerd[1528]: time="2026-04-13T20:14:50.233693432Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 20:14:50.950453 systemd[1]: Created slice kubepods-besteffort-poda6673f95_0241_4215_9cfc_44a3d2563af3.slice - libcontainer container kubepods-besteffort-poda6673f95_0241_4215_9cfc_44a3d2563af3.slice. 
Apr 13 20:14:50.973404 kubelet[2596]: I0413 20:14:50.972776 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cilium-cgroup\") pod \"cilium-nhq4z\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973404 kubelet[2596]: I0413 20:14:50.972811 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-etc-cni-netd\") pod \"cilium-nhq4z\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973404 kubelet[2596]: I0413 20:14:50.972833 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-lib-modules\") pod \"cilium-nhq4z\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973404 kubelet[2596]: I0413 20:14:50.972875 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-host-proc-sys-kernel\") pod \"cilium-nhq4z\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973404 kubelet[2596]: I0413 20:14:50.972896 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6673f95-0241-4215-9cfc-44a3d2563af3-xtables-lock\") pod \"kube-proxy-nsf4t\" (UID: \"a6673f95-0241-4215-9cfc-44a3d2563af3\") " pod="kube-system/kube-proxy-nsf4t" Apr 13 20:14:50.973404 kubelet[2596]: I0413 20:14:50.972914 2596 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-host-proc-sys-net\") pod \"cilium-nhq4z\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973597 kubelet[2596]: I0413 20:14:50.972932 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77gf5\" (UniqueName: \"kubernetes.io/projected/2cc02328-06f0-4a98-b1ae-24ff4a044a72-kube-api-access-77gf5\") pod \"cilium-nhq4z\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973597 kubelet[2596]: I0413 20:14:50.972950 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-bpf-maps\") pod \"cilium-nhq4z\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973597 kubelet[2596]: I0413 20:14:50.972967 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-hostproc\") pod \"cilium-nhq4z\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973597 kubelet[2596]: I0413 20:14:50.972983 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cni-path\") pod \"cilium-nhq4z\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973597 kubelet[2596]: I0413 20:14:50.972999 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-xtables-lock\") pod \"cilium-nhq4z\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973597 kubelet[2596]: I0413 20:14:50.973017 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2cc02328-06f0-4a98-b1ae-24ff4a044a72-hubble-tls\") pod \"cilium-nhq4z\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973685 kubelet[2596]: I0413 20:14:50.973038 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a6673f95-0241-4215-9cfc-44a3d2563af3-kube-proxy\") pod \"kube-proxy-nsf4t\" (UID: \"a6673f95-0241-4215-9cfc-44a3d2563af3\") " pod="kube-system/kube-proxy-nsf4t" Apr 13 20:14:50.973685 kubelet[2596]: I0413 20:14:50.973055 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4qtz\" (UniqueName: \"kubernetes.io/projected/a6673f95-0241-4215-9cfc-44a3d2563af3-kube-api-access-l4qtz\") pod \"kube-proxy-nsf4t\" (UID: \"a6673f95-0241-4215-9cfc-44a3d2563af3\") " pod="kube-system/kube-proxy-nsf4t" Apr 13 20:14:50.973685 kubelet[2596]: I0413 20:14:50.973073 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cilium-run\") pod \"cilium-nhq4z\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973685 kubelet[2596]: I0413 20:14:50.973091 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2cc02328-06f0-4a98-b1ae-24ff4a044a72-clustermesh-secrets\") pod \"cilium-nhq4z\" (UID: 
\"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973685 kubelet[2596]: I0413 20:14:50.973109 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cilium-config-path\") pod \"cilium-nhq4z\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " pod="kube-system/cilium-nhq4z" Apr 13 20:14:50.973763 kubelet[2596]: I0413 20:14:50.973130 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6673f95-0241-4215-9cfc-44a3d2563af3-lib-modules\") pod \"kube-proxy-nsf4t\" (UID: \"a6673f95-0241-4215-9cfc-44a3d2563af3\") " pod="kube-system/kube-proxy-nsf4t" Apr 13 20:14:50.974617 systemd[1]: Created slice kubepods-burstable-pod2cc02328_06f0_4a98_b1ae_24ff4a044a72.slice - libcontainer container kubepods-burstable-pod2cc02328_06f0_4a98_b1ae_24ff4a044a72.slice. Apr 13 20:14:51.089963 kubelet[2596]: E0413 20:14:51.089594 2596 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 13 20:14:51.089963 kubelet[2596]: E0413 20:14:51.089636 2596 projected.go:196] Error preparing data for projected volume kube-api-access-77gf5 for pod kube-system/cilium-nhq4z: configmap "kube-root-ca.crt" not found Apr 13 20:14:51.089963 kubelet[2596]: E0413 20:14:51.089721 2596 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2cc02328-06f0-4a98-b1ae-24ff4a044a72-kube-api-access-77gf5 podName:2cc02328-06f0-4a98-b1ae-24ff4a044a72 nodeName:}" failed. No retries permitted until 2026-04-13 20:14:51.589691683 +0000 UTC m=+8.051161108 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-77gf5" (UniqueName: "kubernetes.io/projected/2cc02328-06f0-4a98-b1ae-24ff4a044a72-kube-api-access-77gf5") pod "cilium-nhq4z" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72") : configmap "kube-root-ca.crt" not found Apr 13 20:14:51.109567 kubelet[2596]: E0413 20:14:51.109545 2596 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 13 20:14:51.109875 kubelet[2596]: E0413 20:14:51.109684 2596 projected.go:196] Error preparing data for projected volume kube-api-access-l4qtz for pod kube-system/kube-proxy-nsf4t: configmap "kube-root-ca.crt" not found Apr 13 20:14:51.109875 kubelet[2596]: E0413 20:14:51.109746 2596 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6673f95-0241-4215-9cfc-44a3d2563af3-kube-api-access-l4qtz podName:a6673f95-0241-4215-9cfc-44a3d2563af3 nodeName:}" failed. No retries permitted until 2026-04-13 20:14:51.609722974 +0000 UTC m=+8.071192389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l4qtz" (UniqueName: "kubernetes.io/projected/a6673f95-0241-4215-9cfc-44a3d2563af3-kube-api-access-l4qtz") pod "kube-proxy-nsf4t" (UID: "a6673f95-0241-4215-9cfc-44a3d2563af3") : configmap "kube-root-ca.crt" not found Apr 13 20:14:51.437157 systemd[1]: Created slice kubepods-besteffort-pod652331c7_727f_4c5b_910d_5fecfac339c4.slice - libcontainer container kubepods-besteffort-pod652331c7_727f_4c5b_910d_5fecfac339c4.slice. 
Apr 13 20:14:51.475829 kubelet[2596]: I0413 20:14:51.475774 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/652331c7-727f-4c5b-910d-5fecfac339c4-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-ndrvz\" (UID: \"652331c7-727f-4c5b-910d-5fecfac339c4\") " pod="kube-system/cilium-operator-6f9c7c5859-ndrvz" Apr 13 20:14:51.475829 kubelet[2596]: I0413 20:14:51.475809 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2lng\" (UniqueName: \"kubernetes.io/projected/652331c7-727f-4c5b-910d-5fecfac339c4-kube-api-access-s2lng\") pod \"cilium-operator-6f9c7c5859-ndrvz\" (UID: \"652331c7-727f-4c5b-910d-5fecfac339c4\") " pod="kube-system/cilium-operator-6f9c7c5859-ndrvz" Apr 13 20:14:51.745975 containerd[1528]: time="2026-04-13T20:14:51.745927502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-ndrvz,Uid:652331c7-727f-4c5b-910d-5fecfac339c4,Namespace:kube-system,Attempt:0,}" Apr 13 20:14:51.787257 containerd[1528]: time="2026-04-13T20:14:51.786773345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:14:51.787257 containerd[1528]: time="2026-04-13T20:14:51.786887575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:14:51.787257 containerd[1528]: time="2026-04-13T20:14:51.786932245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:14:51.788721 containerd[1528]: time="2026-04-13T20:14:51.788559435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:14:51.818607 systemd[1]: Started cri-containerd-6e0177077c8bec93078b83fdce1922ef6003706c2a97c326418ca1c60c60cc64.scope - libcontainer container 6e0177077c8bec93078b83fdce1922ef6003706c2a97c326418ca1c60c60cc64. Apr 13 20:14:51.859950 containerd[1528]: time="2026-04-13T20:14:51.859885358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-ndrvz,Uid:652331c7-727f-4c5b-910d-5fecfac339c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e0177077c8bec93078b83fdce1922ef6003706c2a97c326418ca1c60c60cc64\"" Apr 13 20:14:51.862233 containerd[1528]: time="2026-04-13T20:14:51.861848419Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 13 20:14:51.872878 containerd[1528]: time="2026-04-13T20:14:51.872844409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nsf4t,Uid:a6673f95-0241-4215-9cfc-44a3d2563af3,Namespace:kube-system,Attempt:0,}" Apr 13 20:14:51.881472 containerd[1528]: time="2026-04-13T20:14:51.881379930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nhq4z,Uid:2cc02328-06f0-4a98-b1ae-24ff4a044a72,Namespace:kube-system,Attempt:0,}" Apr 13 20:14:51.899127 containerd[1528]: time="2026-04-13T20:14:51.898924691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:14:51.899127 containerd[1528]: time="2026-04-13T20:14:51.899013401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:14:51.899779 containerd[1528]: time="2026-04-13T20:14:51.899746292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:14:51.900013 containerd[1528]: time="2026-04-13T20:14:51.899950481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:14:51.917751 containerd[1528]: time="2026-04-13T20:14:51.917643003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:14:51.917751 containerd[1528]: time="2026-04-13T20:14:51.917688343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:14:51.917751 containerd[1528]: time="2026-04-13T20:14:51.917696243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:14:51.918736 containerd[1528]: time="2026-04-13T20:14:51.918671512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:14:51.919544 systemd[1]: Started cri-containerd-2a7671bfd0f1df5401ad476552c839c84b1d579fae95648a9798173c4486ebc4.scope - libcontainer container 2a7671bfd0f1df5401ad476552c839c84b1d579fae95648a9798173c4486ebc4. Apr 13 20:14:51.936536 systemd[1]: Started cri-containerd-c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5.scope - libcontainer container c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5. 
Apr 13 20:14:51.956558 containerd[1528]: time="2026-04-13T20:14:51.956525305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nsf4t,Uid:a6673f95-0241-4215-9cfc-44a3d2563af3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a7671bfd0f1df5401ad476552c839c84b1d579fae95648a9798173c4486ebc4\"" Apr 13 20:14:51.963404 containerd[1528]: time="2026-04-13T20:14:51.963259175Z" level=info msg="CreateContainer within sandbox \"2a7671bfd0f1df5401ad476552c839c84b1d579fae95648a9798173c4486ebc4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 20:14:51.965606 containerd[1528]: time="2026-04-13T20:14:51.965453135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nhq4z,Uid:2cc02328-06f0-4a98-b1ae-24ff4a044a72,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\"" Apr 13 20:14:51.978149 containerd[1528]: time="2026-04-13T20:14:51.978069216Z" level=info msg="CreateContainer within sandbox \"2a7671bfd0f1df5401ad476552c839c84b1d579fae95648a9798173c4486ebc4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e4eaefb262a3ff10818bcdd7e74ab1101ae627b5ef6145ebaf78e2e01652c3bd\"" Apr 13 20:14:51.978636 containerd[1528]: time="2026-04-13T20:14:51.978613816Z" level=info msg="StartContainer for \"e4eaefb262a3ff10818bcdd7e74ab1101ae627b5ef6145ebaf78e2e01652c3bd\"" Apr 13 20:14:52.006567 systemd[1]: Started cri-containerd-e4eaefb262a3ff10818bcdd7e74ab1101ae627b5ef6145ebaf78e2e01652c3bd.scope - libcontainer container e4eaefb262a3ff10818bcdd7e74ab1101ae627b5ef6145ebaf78e2e01652c3bd. Apr 13 20:14:52.032533 containerd[1528]: time="2026-04-13T20:14:52.032485980Z" level=info msg="StartContainer for \"e4eaefb262a3ff10818bcdd7e74ab1101ae627b5ef6145ebaf78e2e01652c3bd\" returns successfully" Apr 13 20:14:53.380626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215085809.mount: Deactivated successfully. 
Apr 13 20:14:53.753817 containerd[1528]: time="2026-04-13T20:14:53.753771650Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:53.754760 containerd[1528]: time="2026-04-13T20:14:53.754629911Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 13 20:14:53.755376 containerd[1528]: time="2026-04-13T20:14:53.755267640Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:53.756807 containerd[1528]: time="2026-04-13T20:14:53.756197281Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.894314832s" Apr 13 20:14:53.756807 containerd[1528]: time="2026-04-13T20:14:53.756232250Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 13 20:14:53.757097 containerd[1528]: time="2026-04-13T20:14:53.757084701Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 13 20:14:53.759921 containerd[1528]: time="2026-04-13T20:14:53.759897131Z" level=info msg="CreateContainer within sandbox 
\"6e0177077c8bec93078b83fdce1922ef6003706c2a97c326418ca1c60c60cc64\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 13 20:14:53.775468 containerd[1528]: time="2026-04-13T20:14:53.775383573Z" level=info msg="CreateContainer within sandbox \"6e0177077c8bec93078b83fdce1922ef6003706c2a97c326418ca1c60c60cc64\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\"" Apr 13 20:14:53.775921 containerd[1528]: time="2026-04-13T20:14:53.775831364Z" level=info msg="StartContainer for \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\"" Apr 13 20:14:53.807558 systemd[1]: Started cri-containerd-9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e.scope - libcontainer container 9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e. Apr 13 20:14:53.829500 containerd[1528]: time="2026-04-13T20:14:53.829446242Z" level=info msg="StartContainer for \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\" returns successfully" Apr 13 20:14:54.001162 kubelet[2596]: I0413 20:14:54.001117 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nsf4t" podStartSLOduration=4.001103438 podStartE2EDuration="4.001103438s" podCreationTimestamp="2026-04-13 20:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:14:52.691657121 +0000 UTC m=+9.153126566" watchObservedRunningTime="2026-04-13 20:14:54.001103438 +0000 UTC m=+10.462572853" Apr 13 20:14:54.708792 kubelet[2596]: I0413 20:14:54.708624 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-ndrvz" podStartSLOduration=1.813213075 podStartE2EDuration="3.708604457s" podCreationTimestamp="2026-04-13 20:14:51 +0000 UTC" firstStartedPulling="2026-04-13 20:14:51.861543429 +0000 
UTC m=+8.323012834" lastFinishedPulling="2026-04-13 20:14:53.756934801 +0000 UTC m=+10.218404216" observedRunningTime="2026-04-13 20:14:54.699950365 +0000 UTC m=+11.161419810" watchObservedRunningTime="2026-04-13 20:14:54.708604457 +0000 UTC m=+11.170073902" Apr 13 20:14:55.748510 update_engine[1507]: I20260413 20:14:55.748443 1507 update_attempter.cc:509] Updating boot flags... Apr 13 20:14:55.806451 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3035) Apr 13 20:14:55.887447 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3039) Apr 13 20:14:55.965544 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3039) Apr 13 20:14:57.265645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564652928.mount: Deactivated successfully. Apr 13 20:14:58.585896 containerd[1528]: time="2026-04-13T20:14:58.585815119Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:58.587566 containerd[1528]: time="2026-04-13T20:14:58.587527879Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 13 20:14:58.588665 containerd[1528]: time="2026-04-13T20:14:58.588594859Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:14:58.589720 containerd[1528]: time="2026-04-13T20:14:58.589698920Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo 
digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.832538199s" Apr 13 20:14:58.589760 containerd[1528]: time="2026-04-13T20:14:58.589723650Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 13 20:14:58.593463 containerd[1528]: time="2026-04-13T20:14:58.593436041Z" level=info msg="CreateContainer within sandbox \"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 13 20:14:58.602475 containerd[1528]: time="2026-04-13T20:14:58.602299204Z" level=info msg="CreateContainer within sandbox \"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e\"" Apr 13 20:14:58.602829 containerd[1528]: time="2026-04-13T20:14:58.602769254Z" level=info msg="StartContainer for \"b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e\"" Apr 13 20:14:58.624515 systemd[1]: Started cri-containerd-b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e.scope - libcontainer container b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e. Apr 13 20:14:58.643193 containerd[1528]: time="2026-04-13T20:14:58.642957748Z" level=info msg="StartContainer for \"b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e\" returns successfully" Apr 13 20:14:58.653734 systemd[1]: cri-containerd-b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e.scope: Deactivated successfully. 
Apr 13 20:14:58.720852 containerd[1528]: time="2026-04-13T20:14:58.720788244Z" level=info msg="shim disconnected" id=b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e namespace=k8s.io Apr 13 20:14:58.720852 containerd[1528]: time="2026-04-13T20:14:58.720835904Z" level=warning msg="cleaning up after shim disconnected" id=b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e namespace=k8s.io Apr 13 20:14:58.720852 containerd[1528]: time="2026-04-13T20:14:58.720846184Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:14:59.602115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e-rootfs.mount: Deactivated successfully. Apr 13 20:14:59.704859 containerd[1528]: time="2026-04-13T20:14:59.704641312Z" level=info msg="CreateContainer within sandbox \"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 13 20:14:59.722436 containerd[1528]: time="2026-04-13T20:14:59.721373128Z" level=info msg="CreateContainer within sandbox \"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8\"" Apr 13 20:14:59.722436 containerd[1528]: time="2026-04-13T20:14:59.721939358Z" level=info msg="StartContainer for \"a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8\"" Apr 13 20:14:59.751530 systemd[1]: Started cri-containerd-a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8.scope - libcontainer container a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8. 
Apr 13 20:14:59.771473 containerd[1528]: time="2026-04-13T20:14:59.771047456Z" level=info msg="StartContainer for \"a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8\" returns successfully" Apr 13 20:14:59.780522 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 20:14:59.780697 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:14:59.780761 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:14:59.783984 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:14:59.784836 systemd[1]: cri-containerd-a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8.scope: Deactivated successfully. Apr 13 20:14:59.807477 containerd[1528]: time="2026-04-13T20:14:59.806938550Z" level=info msg="shim disconnected" id=a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8 namespace=k8s.io Apr 13 20:14:59.807477 containerd[1528]: time="2026-04-13T20:14:59.806977850Z" level=warning msg="cleaning up after shim disconnected" id=a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8 namespace=k8s.io Apr 13 20:14:59.807477 containerd[1528]: time="2026-04-13T20:14:59.806984600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:14:59.807173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:15:00.601073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8-rootfs.mount: Deactivated successfully. 
Apr 13 20:15:00.711124 containerd[1528]: time="2026-04-13T20:15:00.711038766Z" level=info msg="CreateContainer within sandbox \"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 13 20:15:00.737110 containerd[1528]: time="2026-04-13T20:15:00.737044117Z" level=info msg="CreateContainer within sandbox \"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9\"" Apr 13 20:15:00.740525 containerd[1528]: time="2026-04-13T20:15:00.738836717Z" level=info msg="StartContainer for \"2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9\"" Apr 13 20:15:00.783640 systemd[1]: Started cri-containerd-2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9.scope - libcontainer container 2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9. Apr 13 20:15:00.812794 containerd[1528]: time="2026-04-13T20:15:00.812742007Z" level=info msg="StartContainer for \"2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9\" returns successfully" Apr 13 20:15:00.818292 systemd[1]: cri-containerd-2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9.scope: Deactivated successfully. Apr 13 20:15:00.834869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9-rootfs.mount: Deactivated successfully. 
Apr 13 20:15:00.840051 containerd[1528]: time="2026-04-13T20:15:00.839849687Z" level=info msg="shim disconnected" id=2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9 namespace=k8s.io Apr 13 20:15:00.840051 containerd[1528]: time="2026-04-13T20:15:00.839911937Z" level=warning msg="cleaning up after shim disconnected" id=2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9 namespace=k8s.io Apr 13 20:15:00.840051 containerd[1528]: time="2026-04-13T20:15:00.839921597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:15:01.710874 containerd[1528]: time="2026-04-13T20:15:01.710734006Z" level=info msg="CreateContainer within sandbox \"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 13 20:15:01.727032 containerd[1528]: time="2026-04-13T20:15:01.726988812Z" level=info msg="CreateContainer within sandbox \"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5\"" Apr 13 20:15:01.727948 containerd[1528]: time="2026-04-13T20:15:01.727915863Z" level=info msg="StartContainer for \"aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5\"" Apr 13 20:15:01.756743 systemd[1]: Started cri-containerd-aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5.scope - libcontainer container aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5. Apr 13 20:15:01.781510 systemd[1]: cri-containerd-aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5.scope: Deactivated successfully. 
Apr 13 20:15:01.782022 containerd[1528]: time="2026-04-13T20:15:01.781935856Z" level=info msg="StartContainer for \"aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5\" returns successfully" Apr 13 20:15:01.803241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5-rootfs.mount: Deactivated successfully. Apr 13 20:15:01.812940 containerd[1528]: time="2026-04-13T20:15:01.812877219Z" level=info msg="shim disconnected" id=aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5 namespace=k8s.io Apr 13 20:15:01.813144 containerd[1528]: time="2026-04-13T20:15:01.812951929Z" level=warning msg="cleaning up after shim disconnected" id=aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5 namespace=k8s.io Apr 13 20:15:01.813144 containerd[1528]: time="2026-04-13T20:15:01.812962169Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:15:02.718042 containerd[1528]: time="2026-04-13T20:15:02.717900643Z" level=info msg="CreateContainer within sandbox \"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 13 20:15:02.739035 containerd[1528]: time="2026-04-13T20:15:02.738978672Z" level=info msg="CreateContainer within sandbox \"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\"" Apr 13 20:15:02.740722 containerd[1528]: time="2026-04-13T20:15:02.739972523Z" level=info msg="StartContainer for \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\"" Apr 13 20:15:02.772559 systemd[1]: Started cri-containerd-0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9.scope - libcontainer container 0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9. 
Apr 13 20:15:02.798240 containerd[1528]: time="2026-04-13T20:15:02.797892340Z" level=info msg="StartContainer for \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\" returns successfully" Apr 13 20:15:02.912452 kubelet[2596]: I0413 20:15:02.911323 2596 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 13 20:15:02.943400 systemd[1]: Created slice kubepods-burstable-pod961b9186_9ae5_4e1e_9162_f6df3101d66e.slice - libcontainer container kubepods-burstable-pod961b9186_9ae5_4e1e_9162_f6df3101d66e.slice. Apr 13 20:15:02.950111 systemd[1]: Created slice kubepods-burstable-poda45e4895_f8c1_4f05_b195_28ee49b4a7b4.slice - libcontainer container kubepods-burstable-poda45e4895_f8c1_4f05_b195_28ee49b4a7b4.slice. Apr 13 20:15:02.955471 kubelet[2596]: I0413 20:15:02.955306 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/961b9186-9ae5-4e1e-9162-f6df3101d66e-config-volume\") pod \"coredns-66bc5c9577-6hzg5\" (UID: \"961b9186-9ae5-4e1e-9162-f6df3101d66e\") " pod="kube-system/coredns-66bc5c9577-6hzg5" Apr 13 20:15:02.955471 kubelet[2596]: I0413 20:15:02.955331 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw5pn\" (UniqueName: \"kubernetes.io/projected/a45e4895-f8c1-4f05-b195-28ee49b4a7b4-kube-api-access-kw5pn\") pod \"coredns-66bc5c9577-dg8wp\" (UID: \"a45e4895-f8c1-4f05-b195-28ee49b4a7b4\") " pod="kube-system/coredns-66bc5c9577-dg8wp" Apr 13 20:15:02.955471 kubelet[2596]: I0413 20:15:02.955344 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a45e4895-f8c1-4f05-b195-28ee49b4a7b4-config-volume\") pod \"coredns-66bc5c9577-dg8wp\" (UID: \"a45e4895-f8c1-4f05-b195-28ee49b4a7b4\") " pod="kube-system/coredns-66bc5c9577-dg8wp" Apr 13 20:15:02.955471 
kubelet[2596]: I0413 20:15:02.955355 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92pb6\" (UniqueName: \"kubernetes.io/projected/961b9186-9ae5-4e1e-9162-f6df3101d66e-kube-api-access-92pb6\") pod \"coredns-66bc5c9577-6hzg5\" (UID: \"961b9186-9ae5-4e1e-9162-f6df3101d66e\") " pod="kube-system/coredns-66bc5c9577-6hzg5" Apr 13 20:15:03.249264 containerd[1528]: time="2026-04-13T20:15:03.249214290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6hzg5,Uid:961b9186-9ae5-4e1e-9162-f6df3101d66e,Namespace:kube-system,Attempt:0,}" Apr 13 20:15:03.255661 containerd[1528]: time="2026-04-13T20:15:03.255538633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dg8wp,Uid:a45e4895-f8c1-4f05-b195-28ee49b4a7b4,Namespace:kube-system,Attempt:0,}" Apr 13 20:15:03.739473 kubelet[2596]: I0413 20:15:03.738764 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nhq4z" podStartSLOduration=7.1151556 podStartE2EDuration="13.738750733s" podCreationTimestamp="2026-04-13 20:14:50 +0000 UTC" firstStartedPulling="2026-04-13 20:14:51.966896916 +0000 UTC m=+8.428366321" lastFinishedPulling="2026-04-13 20:14:58.590492049 +0000 UTC m=+15.051961454" observedRunningTime="2026-04-13 20:15:03.736993603 +0000 UTC m=+20.198463018" watchObservedRunningTime="2026-04-13 20:15:03.738750733 +0000 UTC m=+20.200220138" Apr 13 20:15:04.914690 systemd-networkd[1409]: cilium_host: Link UP Apr 13 20:15:04.914832 systemd-networkd[1409]: cilium_net: Link UP Apr 13 20:15:04.914991 systemd-networkd[1409]: cilium_net: Gained carrier Apr 13 20:15:04.915145 systemd-networkd[1409]: cilium_host: Gained carrier Apr 13 20:15:05.056498 systemd-networkd[1409]: cilium_vxlan: Link UP Apr 13 20:15:05.056511 systemd-networkd[1409]: cilium_vxlan: Gained carrier Apr 13 20:15:05.081586 systemd-networkd[1409]: cilium_host: Gained IPv6LL Apr 13 20:15:05.097615 
systemd-networkd[1409]: cilium_net: Gained IPv6LL Apr 13 20:15:05.238546 kernel: NET: Registered PF_ALG protocol family Apr 13 20:15:05.850223 systemd-networkd[1409]: lxc_health: Link UP Apr 13 20:15:05.854540 systemd-networkd[1409]: lxc_health: Gained carrier Apr 13 20:15:06.282527 systemd-networkd[1409]: lxc8a3a6a8968f4: Link UP Apr 13 20:15:06.288688 kernel: eth0: renamed from tmp933e0 Apr 13 20:15:06.295459 systemd-networkd[1409]: lxc8a3a6a8968f4: Gained carrier Apr 13 20:15:06.306597 systemd-networkd[1409]: lxc5ff8de7c9040: Link UP Apr 13 20:15:06.312542 kernel: eth0: renamed from tmp71c0a Apr 13 20:15:06.322205 systemd-networkd[1409]: lxc5ff8de7c9040: Gained carrier Apr 13 20:15:06.537552 systemd-networkd[1409]: cilium_vxlan: Gained IPv6LL Apr 13 20:15:07.177722 systemd-networkd[1409]: lxc_health: Gained IPv6LL Apr 13 20:15:07.881799 systemd-networkd[1409]: lxc5ff8de7c9040: Gained IPv6LL Apr 13 20:15:08.267534 systemd-networkd[1409]: lxc8a3a6a8968f4: Gained IPv6LL Apr 13 20:15:08.814197 containerd[1528]: time="2026-04-13T20:15:08.813645020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:15:08.816476 containerd[1528]: time="2026-04-13T20:15:08.814165681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:15:08.816476 containerd[1528]: time="2026-04-13T20:15:08.814289741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:15:08.816476 containerd[1528]: time="2026-04-13T20:15:08.814387511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:15:08.837832 systemd[1]: Started cri-containerd-71c0aaf2da40d8cb600f35033a2893d0a0d090cb2f12f7eea87a773f98081cac.scope - libcontainer container 71c0aaf2da40d8cb600f35033a2893d0a0d090cb2f12f7eea87a773f98081cac. Apr 13 20:15:08.860674 containerd[1528]: time="2026-04-13T20:15:08.860598787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:15:08.860850 containerd[1528]: time="2026-04-13T20:15:08.860688587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:15:08.860850 containerd[1528]: time="2026-04-13T20:15:08.860700217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:15:08.860850 containerd[1528]: time="2026-04-13T20:15:08.860768537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:15:08.888502 containerd[1528]: time="2026-04-13T20:15:08.888474173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dg8wp,Uid:a45e4895-f8c1-4f05-b195-28ee49b4a7b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"71c0aaf2da40d8cb600f35033a2893d0a0d090cb2f12f7eea87a773f98081cac\"" Apr 13 20:15:08.892033 systemd[1]: Started cri-containerd-933e0164806f01e02ae97aa863b817d239387c39d2350bb862a945d8be4573b9.scope - libcontainer container 933e0164806f01e02ae97aa863b817d239387c39d2350bb862a945d8be4573b9. 
Apr 13 20:15:08.895433 containerd[1528]: time="2026-04-13T20:15:08.895392477Z" level=info msg="CreateContainer within sandbox \"71c0aaf2da40d8cb600f35033a2893d0a0d090cb2f12f7eea87a773f98081cac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:15:08.911463 containerd[1528]: time="2026-04-13T20:15:08.911377516Z" level=info msg="CreateContainer within sandbox \"71c0aaf2da40d8cb600f35033a2893d0a0d090cb2f12f7eea87a773f98081cac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e830475788edeac73fe6ee5a09016bcfea439630d2b40b9a00c27aa5be504d22\"" Apr 13 20:15:08.914250 containerd[1528]: time="2026-04-13T20:15:08.912960248Z" level=info msg="StartContainer for \"e830475788edeac73fe6ee5a09016bcfea439630d2b40b9a00c27aa5be504d22\"" Apr 13 20:15:08.953259 containerd[1528]: time="2026-04-13T20:15:08.952596220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6hzg5,Uid:961b9186-9ae5-4e1e-9162-f6df3101d66e,Namespace:kube-system,Attempt:0,} returns sandbox id \"933e0164806f01e02ae97aa863b817d239387c39d2350bb862a945d8be4573b9\"" Apr 13 20:15:08.954558 systemd[1]: Started cri-containerd-e830475788edeac73fe6ee5a09016bcfea439630d2b40b9a00c27aa5be504d22.scope - libcontainer container e830475788edeac73fe6ee5a09016bcfea439630d2b40b9a00c27aa5be504d22. 
Apr 13 20:15:08.958858 containerd[1528]: time="2026-04-13T20:15:08.958690294Z" level=info msg="CreateContainer within sandbox \"933e0164806f01e02ae97aa863b817d239387c39d2350bb862a945d8be4573b9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:15:08.974699 containerd[1528]: time="2026-04-13T20:15:08.974401823Z" level=info msg="CreateContainer within sandbox \"933e0164806f01e02ae97aa863b817d239387c39d2350bb862a945d8be4573b9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5a26bbca86532051467fa63584a60831acf9e95525626ebb95b330a84149fab\"" Apr 13 20:15:08.977468 containerd[1528]: time="2026-04-13T20:15:08.977448505Z" level=info msg="StartContainer for \"e5a26bbca86532051467fa63584a60831acf9e95525626ebb95b330a84149fab\"" Apr 13 20:15:08.997146 containerd[1528]: time="2026-04-13T20:15:08.997067746Z" level=info msg="StartContainer for \"e830475788edeac73fe6ee5a09016bcfea439630d2b40b9a00c27aa5be504d22\" returns successfully" Apr 13 20:15:09.015535 systemd[1]: Started cri-containerd-e5a26bbca86532051467fa63584a60831acf9e95525626ebb95b330a84149fab.scope - libcontainer container e5a26bbca86532051467fa63584a60831acf9e95525626ebb95b330a84149fab. 
Apr 13 20:15:09.048123 containerd[1528]: time="2026-04-13T20:15:09.048086465Z" level=info msg="StartContainer for \"e5a26bbca86532051467fa63584a60831acf9e95525626ebb95b330a84149fab\" returns successfully" Apr 13 20:15:09.748069 kubelet[2596]: I0413 20:15:09.747464 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dg8wp" podStartSLOduration=18.747405689 podStartE2EDuration="18.747405689s" podCreationTimestamp="2026-04-13 20:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:15:09.747127779 +0000 UTC m=+26.208597194" watchObservedRunningTime="2026-04-13 20:15:09.747405689 +0000 UTC m=+26.208875124" Apr 13 20:15:09.776532 kubelet[2596]: I0413 20:15:09.776331 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6hzg5" podStartSLOduration=18.776312696 podStartE2EDuration="18.776312696s" podCreationTimestamp="2026-04-13 20:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:15:09.761719617 +0000 UTC m=+26.223189052" watchObservedRunningTime="2026-04-13 20:15:09.776312696 +0000 UTC m=+26.237782131" Apr 13 20:15:09.818365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870255574.mount: Deactivated successfully. Apr 13 20:16:08.144829 systemd[1]: Started sshd@9-204.168.245.167:22-20.229.252.112:57240.service - OpenSSH per-connection server daemon (20.229.252.112:57240). Apr 13 20:16:08.369095 sshd[4004]: Accepted publickey for core from 20.229.252.112 port 57240 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:08.370539 sshd[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:08.377517 systemd-logind[1506]: New session 10 of user core. 
Apr 13 20:16:08.384698 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 20:16:08.598738 sshd[4004]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:08.602944 systemd[1]: sshd@9-204.168.245.167:22-20.229.252.112:57240.service: Deactivated successfully. Apr 13 20:16:08.604683 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 20:16:08.605231 systemd-logind[1506]: Session 10 logged out. Waiting for processes to exit. Apr 13 20:16:08.606082 systemd-logind[1506]: Removed session 10. Apr 13 20:16:13.649911 systemd[1]: Started sshd@10-204.168.245.167:22-20.229.252.112:57248.service - OpenSSH per-connection server daemon (20.229.252.112:57248). Apr 13 20:16:13.869685 sshd[4019]: Accepted publickey for core from 20.229.252.112 port 57248 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:13.871158 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:13.877505 systemd-logind[1506]: New session 11 of user core. Apr 13 20:16:13.879553 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 20:16:14.089748 sshd[4019]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:14.093951 systemd-logind[1506]: Session 11 logged out. Waiting for processes to exit. Apr 13 20:16:14.094848 systemd[1]: sshd@10-204.168.245.167:22-20.229.252.112:57248.service: Deactivated successfully. Apr 13 20:16:14.097220 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 20:16:14.098754 systemd-logind[1506]: Removed session 11. Apr 13 20:16:19.136799 systemd[1]: Started sshd@11-204.168.245.167:22-20.229.252.112:44074.service - OpenSSH per-connection server daemon (20.229.252.112:44074). 
Apr 13 20:16:19.340655 sshd[4034]: Accepted publickey for core from 20.229.252.112 port 44074 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:19.343402 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:19.352394 systemd-logind[1506]: New session 12 of user core. Apr 13 20:16:19.357688 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 20:16:19.589131 sshd[4034]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:19.594243 systemd[1]: sshd@11-204.168.245.167:22-20.229.252.112:44074.service: Deactivated successfully. Apr 13 20:16:19.597818 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 20:16:19.600520 systemd-logind[1506]: Session 12 logged out. Waiting for processes to exit. Apr 13 20:16:19.601829 systemd-logind[1506]: Removed session 12. Apr 13 20:16:24.642853 systemd[1]: Started sshd@12-204.168.245.167:22-20.229.252.112:44076.service - OpenSSH per-connection server daemon (20.229.252.112:44076). Apr 13 20:16:24.874455 sshd[4050]: Accepted publickey for core from 20.229.252.112 port 44076 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:24.876625 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:24.883654 systemd-logind[1506]: New session 13 of user core. Apr 13 20:16:24.889578 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 20:16:25.092630 sshd[4050]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:25.095092 systemd[1]: sshd@12-204.168.245.167:22-20.229.252.112:44076.service: Deactivated successfully. Apr 13 20:16:25.096944 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 20:16:25.098086 systemd-logind[1506]: Session 13 logged out. Waiting for processes to exit. Apr 13 20:16:25.099285 systemd-logind[1506]: Removed session 13. 
Apr 13 20:16:25.145285 systemd[1]: Started sshd@13-204.168.245.167:22-20.229.252.112:49930.service - OpenSSH per-connection server daemon (20.229.252.112:49930). Apr 13 20:16:25.368591 sshd[4064]: Accepted publickey for core from 20.229.252.112 port 49930 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:25.370656 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:25.378513 systemd-logind[1506]: New session 14 of user core. Apr 13 20:16:25.386631 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 20:16:25.667128 sshd[4064]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:25.671570 systemd[1]: sshd@13-204.168.245.167:22-20.229.252.112:49930.service: Deactivated successfully. Apr 13 20:16:25.673940 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 20:16:25.675692 systemd-logind[1506]: Session 14 logged out. Waiting for processes to exit. Apr 13 20:16:25.676870 systemd-logind[1506]: Removed session 14. Apr 13 20:16:25.703308 systemd[1]: Started sshd@14-204.168.245.167:22-20.229.252.112:49934.service - OpenSSH per-connection server daemon (20.229.252.112:49934). Apr 13 20:16:25.909193 sshd[4075]: Accepted publickey for core from 20.229.252.112 port 49934 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:25.912512 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:25.921160 systemd-logind[1506]: New session 15 of user core. Apr 13 20:16:25.926711 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 20:16:26.170711 sshd[4075]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:26.178006 systemd[1]: sshd@14-204.168.245.167:22-20.229.252.112:49934.service: Deactivated successfully. Apr 13 20:16:26.182066 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 20:16:26.183534 systemd-logind[1506]: Session 15 logged out. 
Waiting for processes to exit. Apr 13 20:16:26.186127 systemd-logind[1506]: Removed session 15. Apr 13 20:16:31.220814 systemd[1]: Started sshd@15-204.168.245.167:22-20.229.252.112:49946.service - OpenSSH per-connection server daemon (20.229.252.112:49946). Apr 13 20:16:31.433539 sshd[4088]: Accepted publickey for core from 20.229.252.112 port 49946 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:31.436376 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:31.444479 systemd-logind[1506]: New session 16 of user core. Apr 13 20:16:31.452567 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 20:16:31.636112 sshd[4088]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:31.641854 systemd[1]: sshd@15-204.168.245.167:22-20.229.252.112:49946.service: Deactivated successfully. Apr 13 20:16:31.646266 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 20:16:31.647276 systemd-logind[1506]: Session 16 logged out. Waiting for processes to exit. Apr 13 20:16:31.648919 systemd-logind[1506]: Removed session 16. Apr 13 20:16:31.678320 systemd[1]: Started sshd@16-204.168.245.167:22-20.229.252.112:49960.service - OpenSSH per-connection server daemon (20.229.252.112:49960). Apr 13 20:16:31.894546 sshd[4101]: Accepted publickey for core from 20.229.252.112 port 49960 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:31.896041 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:31.905519 systemd-logind[1506]: New session 17 of user core. Apr 13 20:16:31.917686 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 20:16:32.183891 sshd[4101]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:32.189129 systemd[1]: sshd@16-204.168.245.167:22-20.229.252.112:49960.service: Deactivated successfully. 
Apr 13 20:16:32.193141 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 20:16:32.196142 systemd-logind[1506]: Session 17 logged out. Waiting for processes to exit. Apr 13 20:16:32.198261 systemd-logind[1506]: Removed session 17. Apr 13 20:16:32.228870 systemd[1]: Started sshd@17-204.168.245.167:22-20.229.252.112:49976.service - OpenSSH per-connection server daemon (20.229.252.112:49976). Apr 13 20:16:32.451642 sshd[4111]: Accepted publickey for core from 20.229.252.112 port 49976 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:32.453559 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:32.460746 systemd-logind[1506]: New session 18 of user core. Apr 13 20:16:32.477441 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 20:16:33.232765 sshd[4111]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:33.235477 systemd[1]: sshd@17-204.168.245.167:22-20.229.252.112:49976.service: Deactivated successfully. Apr 13 20:16:33.237353 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 20:16:33.238642 systemd-logind[1506]: Session 18 logged out. Waiting for processes to exit. Apr 13 20:16:33.239858 systemd-logind[1506]: Removed session 18. Apr 13 20:16:33.270176 systemd[1]: Started sshd@18-204.168.245.167:22-20.229.252.112:49984.service - OpenSSH per-connection server daemon (20.229.252.112:49984). Apr 13 20:16:33.476789 sshd[4127]: Accepted publickey for core from 20.229.252.112 port 49984 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:33.479579 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:33.486632 systemd-logind[1506]: New session 19 of user core. Apr 13 20:16:33.491987 systemd[1]: Started session-19.scope - Session 19 of User core. 
Apr 13 20:16:33.796989 sshd[4127]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:33.800453 systemd[1]: sshd@18-204.168.245.167:22-20.229.252.112:49984.service: Deactivated successfully. Apr 13 20:16:33.802604 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 20:16:33.803287 systemd-logind[1506]: Session 19 logged out. Waiting for processes to exit. Apr 13 20:16:33.804208 systemd-logind[1506]: Removed session 19. Apr 13 20:16:33.838468 systemd[1]: Started sshd@19-204.168.245.167:22-20.229.252.112:49996.service - OpenSSH per-connection server daemon (20.229.252.112:49996). Apr 13 20:16:34.054983 sshd[4138]: Accepted publickey for core from 20.229.252.112 port 49996 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:34.057987 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:34.067225 systemd-logind[1506]: New session 20 of user core. Apr 13 20:16:34.072688 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 13 20:16:34.296031 sshd[4138]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:34.299489 systemd[1]: sshd@19-204.168.245.167:22-20.229.252.112:49996.service: Deactivated successfully. Apr 13 20:16:34.301760 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 20:16:34.302443 systemd-logind[1506]: Session 20 logged out. Waiting for processes to exit. Apr 13 20:16:34.303386 systemd-logind[1506]: Removed session 20. Apr 13 20:16:39.341698 systemd[1]: Started sshd@20-204.168.245.167:22-20.229.252.112:38812.service - OpenSSH per-connection server daemon (20.229.252.112:38812). Apr 13 20:16:39.549157 sshd[4155]: Accepted publickey for core from 20.229.252.112 port 38812 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:39.550832 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:39.557792 systemd-logind[1506]: New session 21 of user core. 
Apr 13 20:16:39.564651 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 13 20:16:39.778754 sshd[4155]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:39.781787 systemd[1]: sshd@20-204.168.245.167:22-20.229.252.112:38812.service: Deactivated successfully. Apr 13 20:16:39.783244 systemd[1]: session-21.scope: Deactivated successfully. Apr 13 20:16:39.784470 systemd-logind[1506]: Session 21 logged out. Waiting for processes to exit. Apr 13 20:16:39.785237 systemd-logind[1506]: Removed session 21. Apr 13 20:16:44.822325 systemd[1]: Started sshd@21-204.168.245.167:22-20.229.252.112:38822.service - OpenSSH per-connection server daemon (20.229.252.112:38822). Apr 13 20:16:45.045775 sshd[4171]: Accepted publickey for core from 20.229.252.112 port 38822 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:45.047022 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:45.052314 systemd-logind[1506]: New session 22 of user core. Apr 13 20:16:45.064582 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 13 20:16:45.255307 sshd[4171]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:45.259233 systemd-logind[1506]: Session 22 logged out. Waiting for processes to exit. Apr 13 20:16:45.259469 systemd[1]: sshd@21-204.168.245.167:22-20.229.252.112:38822.service: Deactivated successfully. Apr 13 20:16:45.261321 systemd[1]: session-22.scope: Deactivated successfully. Apr 13 20:16:45.262272 systemd-logind[1506]: Removed session 22. Apr 13 20:16:45.302807 systemd[1]: Started sshd@22-204.168.245.167:22-20.229.252.112:37162.service - OpenSSH per-connection server daemon (20.229.252.112:37162). 
Apr 13 20:16:45.529355 sshd[4184]: Accepted publickey for core from 20.229.252.112 port 37162 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:16:45.532408 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:45.539998 systemd-logind[1506]: New session 23 of user core. Apr 13 20:16:45.551664 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 13 20:16:47.018472 containerd[1528]: time="2026-04-13T20:16:47.016748839Z" level=info msg="StopContainer for \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\" with timeout 30 (s)" Apr 13 20:16:47.019821 containerd[1528]: time="2026-04-13T20:16:47.019718264Z" level=info msg="Stop container \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\" with signal terminated" Apr 13 20:16:47.020970 systemd[1]: run-containerd-runc-k8s.io-0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9-runc.nWhhz7.mount: Deactivated successfully. Apr 13 20:16:47.028801 containerd[1528]: time="2026-04-13T20:16:47.028776884Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 20:16:47.039298 systemd[1]: cri-containerd-9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e.scope: Deactivated successfully. 
Apr 13 20:16:47.041887 containerd[1528]: time="2026-04-13T20:16:47.041865828Z" level=info msg="StopContainer for \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\" with timeout 2 (s)" Apr 13 20:16:47.042177 containerd[1528]: time="2026-04-13T20:16:47.042158707Z" level=info msg="Stop container \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\" with signal terminated" Apr 13 20:16:47.055173 systemd-networkd[1409]: lxc_health: Link DOWN Apr 13 20:16:47.055972 systemd-networkd[1409]: lxc_health: Lost carrier Apr 13 20:16:47.072398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e-rootfs.mount: Deactivated successfully. Apr 13 20:16:47.075453 systemd[1]: cri-containerd-0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9.scope: Deactivated successfully. Apr 13 20:16:47.076040 systemd[1]: cri-containerd-0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9.scope: Consumed 5.464s CPU time. Apr 13 20:16:47.087146 containerd[1528]: time="2026-04-13T20:16:47.086995986Z" level=info msg="shim disconnected" id=9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e namespace=k8s.io Apr 13 20:16:47.087146 containerd[1528]: time="2026-04-13T20:16:47.087036586Z" level=warning msg="cleaning up after shim disconnected" id=9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e namespace=k8s.io Apr 13 20:16:47.087146 containerd[1528]: time="2026-04-13T20:16:47.087043956Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:16:47.096693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9-rootfs.mount: Deactivated successfully. 
Apr 13 20:16:47.104900 containerd[1528]: time="2026-04-13T20:16:47.104812990Z" level=info msg="StopContainer for \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\" returns successfully" Apr 13 20:16:47.105436 containerd[1528]: time="2026-04-13T20:16:47.105201189Z" level=info msg="shim disconnected" id=0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9 namespace=k8s.io Apr 13 20:16:47.105642 containerd[1528]: time="2026-04-13T20:16:47.105232779Z" level=warning msg="cleaning up after shim disconnected" id=0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9 namespace=k8s.io Apr 13 20:16:47.105642 containerd[1528]: time="2026-04-13T20:16:47.105513058Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:16:47.105642 containerd[1528]: time="2026-04-13T20:16:47.105584998Z" level=info msg="StopPodSandbox for \"6e0177077c8bec93078b83fdce1922ef6003706c2a97c326418ca1c60c60cc64\"" Apr 13 20:16:47.105642 containerd[1528]: time="2026-04-13T20:16:47.105610568Z" level=info msg="Container to stop \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 20:16:47.108021 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e0177077c8bec93078b83fdce1922ef6003706c2a97c326418ca1c60c60cc64-shm.mount: Deactivated successfully. Apr 13 20:16:47.115021 systemd[1]: cri-containerd-6e0177077c8bec93078b83fdce1922ef6003706c2a97c326418ca1c60c60cc64.scope: Deactivated successfully. 
Apr 13 20:16:47.123865 containerd[1528]: time="2026-04-13T20:16:47.123830080Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:16:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 20:16:47.126077 containerd[1528]: time="2026-04-13T20:16:47.126052806Z" level=info msg="StopContainer for \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\" returns successfully" Apr 13 20:16:47.126462 containerd[1528]: time="2026-04-13T20:16:47.126400206Z" level=info msg="StopPodSandbox for \"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\"" Apr 13 20:16:47.126537 containerd[1528]: time="2026-04-13T20:16:47.126518956Z" level=info msg="Container to stop \"b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 20:16:47.126537 containerd[1528]: time="2026-04-13T20:16:47.126532476Z" level=info msg="Container to stop \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 20:16:47.126589 containerd[1528]: time="2026-04-13T20:16:47.126540366Z" level=info msg="Container to stop \"a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 20:16:47.126589 containerd[1528]: time="2026-04-13T20:16:47.126547236Z" level=info msg="Container to stop \"2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 20:16:47.126589 containerd[1528]: time="2026-04-13T20:16:47.126554056Z" level=info msg="Container to stop \"aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 20:16:47.132319 
systemd[1]: cri-containerd-c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5.scope: Deactivated successfully. Apr 13 20:16:47.145297 containerd[1528]: time="2026-04-13T20:16:47.145251057Z" level=info msg="shim disconnected" id=6e0177077c8bec93078b83fdce1922ef6003706c2a97c326418ca1c60c60cc64 namespace=k8s.io Apr 13 20:16:47.145297 containerd[1528]: time="2026-04-13T20:16:47.145293267Z" level=warning msg="cleaning up after shim disconnected" id=6e0177077c8bec93078b83fdce1922ef6003706c2a97c326418ca1c60c60cc64 namespace=k8s.io Apr 13 20:16:47.145297 containerd[1528]: time="2026-04-13T20:16:47.145300417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:16:47.158632 containerd[1528]: time="2026-04-13T20:16:47.158584740Z" level=info msg="shim disconnected" id=c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5 namespace=k8s.io Apr 13 20:16:47.158632 containerd[1528]: time="2026-04-13T20:16:47.158623479Z" level=warning msg="cleaning up after shim disconnected" id=c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5 namespace=k8s.io Apr 13 20:16:47.158632 containerd[1528]: time="2026-04-13T20:16:47.158630179Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:16:47.160953 containerd[1528]: time="2026-04-13T20:16:47.160831215Z" level=info msg="TearDown network for sandbox \"6e0177077c8bec93078b83fdce1922ef6003706c2a97c326418ca1c60c60cc64\" successfully" Apr 13 20:16:47.160953 containerd[1528]: time="2026-04-13T20:16:47.160852775Z" level=info msg="StopPodSandbox for \"6e0177077c8bec93078b83fdce1922ef6003706c2a97c326418ca1c60c60cc64\" returns successfully" Apr 13 20:16:47.178210 containerd[1528]: time="2026-04-13T20:16:47.178165620Z" level=info msg="TearDown network for sandbox \"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\" successfully" Apr 13 20:16:47.178210 containerd[1528]: time="2026-04-13T20:16:47.178190600Z" level=info msg="StopPodSandbox for 
\"c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5\" returns successfully" Apr 13 20:16:47.245861 kubelet[2596]: I0413 20:16:47.245819 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-etc-cni-netd\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.246257 kubelet[2596]: I0413 20:16:47.245922 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2cc02328-06f0-4a98-b1ae-24ff4a044a72-hubble-tls\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.246257 kubelet[2596]: I0413 20:16:47.245983 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cilium-config-path\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.246257 kubelet[2596]: I0413 20:16:47.246015 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cilium-run\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.246257 kubelet[2596]: I0413 20:16:47.246043 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cilium-cgroup\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.246257 kubelet[2596]: I0413 20:16:47.246064 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-host-proc-sys-net\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.246257 kubelet[2596]: I0413 20:16:47.246085 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-hostproc\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.246381 kubelet[2596]: I0413 20:16:47.246106 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cni-path\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.246381 kubelet[2596]: I0413 20:16:47.246131 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-lib-modules\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.246381 kubelet[2596]: I0413 20:16:47.246147 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-bpf-maps\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.246381 kubelet[2596]: I0413 20:16:47.246166 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-xtables-lock\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.246381 kubelet[2596]: I0413 20:16:47.246185 2596 reconciler_common.go:163] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/652331c7-727f-4c5b-910d-5fecfac339c4-cilium-config-path\") pod \"652331c7-727f-4c5b-910d-5fecfac339c4\" (UID: \"652331c7-727f-4c5b-910d-5fecfac339c4\") " Apr 13 20:16:47.246381 kubelet[2596]: I0413 20:16:47.246205 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2cc02328-06f0-4a98-b1ae-24ff4a044a72-clustermesh-secrets\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.246582 kubelet[2596]: I0413 20:16:47.246224 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2lng\" (UniqueName: \"kubernetes.io/projected/652331c7-727f-4c5b-910d-5fecfac339c4-kube-api-access-s2lng\") pod \"652331c7-727f-4c5b-910d-5fecfac339c4\" (UID: \"652331c7-727f-4c5b-910d-5fecfac339c4\") " Apr 13 20:16:47.246582 kubelet[2596]: I0413 20:16:47.246243 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-host-proc-sys-kernel\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.246582 kubelet[2596]: I0413 20:16:47.246264 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77gf5\" (UniqueName: \"kubernetes.io/projected/2cc02328-06f0-4a98-b1ae-24ff4a044a72-kube-api-access-77gf5\") pod \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\" (UID: \"2cc02328-06f0-4a98-b1ae-24ff4a044a72\") " Apr 13 20:16:47.248455 kubelet[2596]: I0413 20:16:47.246659 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod 
"2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:16:47.248455 kubelet[2596]: I0413 20:16:47.246696 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cni-path" (OuterVolumeSpecName: "cni-path") pod "2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:16:47.248707 kubelet[2596]: I0413 20:16:47.248666 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:16:47.248762 kubelet[2596]: I0413 20:16:47.248738 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:16:47.248802 kubelet[2596]: I0413 20:16:47.248761 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:16:47.250586 kubelet[2596]: I0413 20:16:47.250557 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:16:47.250646 kubelet[2596]: I0413 20:16:47.250599 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:16:47.250667 kubelet[2596]: I0413 20:16:47.250654 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:16:47.250704 kubelet[2596]: I0413 20:16:47.250675 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-hostproc" (OuterVolumeSpecName: "hostproc") pod "2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:16:47.251119 kubelet[2596]: I0413 20:16:47.251106 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc02328-06f0-4a98-b1ae-24ff4a044a72-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 13 20:16:47.251591 kubelet[2596]: I0413 20:16:47.251563 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:16:47.254112 kubelet[2596]: I0413 20:16:47.254044 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 13 20:16:47.256474 kubelet[2596]: I0413 20:16:47.256455 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc02328-06f0-4a98-b1ae-24ff4a044a72-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 13 20:16:47.257056 kubelet[2596]: I0413 20:16:47.257030 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652331c7-727f-4c5b-910d-5fecfac339c4-kube-api-access-s2lng" (OuterVolumeSpecName: "kube-api-access-s2lng") pod "652331c7-727f-4c5b-910d-5fecfac339c4" (UID: "652331c7-727f-4c5b-910d-5fecfac339c4"). InnerVolumeSpecName "kube-api-access-s2lng". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 13 20:16:47.257160 kubelet[2596]: I0413 20:16:47.257149 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc02328-06f0-4a98-b1ae-24ff4a044a72-kube-api-access-77gf5" (OuterVolumeSpecName: "kube-api-access-77gf5") pod "2cc02328-06f0-4a98-b1ae-24ff4a044a72" (UID: "2cc02328-06f0-4a98-b1ae-24ff4a044a72"). InnerVolumeSpecName "kube-api-access-77gf5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 13 20:16:47.258955 kubelet[2596]: I0413 20:16:47.258934 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/652331c7-727f-4c5b-910d-5fecfac339c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "652331c7-727f-4c5b-910d-5fecfac339c4" (UID: "652331c7-727f-4c5b-910d-5fecfac339c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 13 20:16:47.347185 kubelet[2596]: I0413 20:16:47.346797 2596 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cilium-cgroup\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347185 kubelet[2596]: I0413 20:16:47.346841 2596 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-host-proc-sys-net\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347185 kubelet[2596]: I0413 20:16:47.346862 2596 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-hostproc\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347185 kubelet[2596]: I0413 20:16:47.346876 2596 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cni-path\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347185 kubelet[2596]: I0413 20:16:47.346890 2596 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-lib-modules\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347185 kubelet[2596]: I0413 20:16:47.346904 2596 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-bpf-maps\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347185 kubelet[2596]: I0413 20:16:47.346917 2596 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-xtables-lock\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347185 kubelet[2596]: I0413 20:16:47.346931 2596 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/652331c7-727f-4c5b-910d-5fecfac339c4-cilium-config-path\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347747 kubelet[2596]: I0413 20:16:47.346945 2596 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2cc02328-06f0-4a98-b1ae-24ff4a044a72-clustermesh-secrets\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347747 kubelet[2596]: I0413 20:16:47.346959 2596 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s2lng\" (UniqueName: \"kubernetes.io/projected/652331c7-727f-4c5b-910d-5fecfac339c4-kube-api-access-s2lng\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347747 kubelet[2596]: I0413 20:16:47.346974 2596 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-host-proc-sys-kernel\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347747 kubelet[2596]: I0413 20:16:47.346987 2596 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-77gf5\" (UniqueName: \"kubernetes.io/projected/2cc02328-06f0-4a98-b1ae-24ff4a044a72-kube-api-access-77gf5\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347747 kubelet[2596]: I0413 20:16:47.347001 2596 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-etc-cni-netd\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347747 kubelet[2596]: I0413 20:16:47.347015 2596 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2cc02328-06f0-4a98-b1ae-24ff4a044a72-hubble-tls\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347747 kubelet[2596]: I0413 20:16:47.347029 2596 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cilium-config-path\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.347747 kubelet[2596]: I0413 20:16:47.347045 2596 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2cc02328-06f0-4a98-b1ae-24ff4a044a72-cilium-run\") on node \"ci-4081-3-7-7-b4460b9a5e\" DevicePath \"\""
Apr 13 20:16:47.649215 systemd[1]: Removed slice kubepods-besteffort-pod652331c7_727f_4c5b_910d_5fecfac339c4.slice - libcontainer container kubepods-besteffort-pod652331c7_727f_4c5b_910d_5fecfac339c4.slice.
Apr 13 20:16:47.650846 systemd[1]: Removed slice kubepods-burstable-pod2cc02328_06f0_4a98_b1ae_24ff4a044a72.slice - libcontainer container kubepods-burstable-pod2cc02328_06f0_4a98_b1ae_24ff4a044a72.slice.
Apr 13 20:16:47.650912 systemd[1]: kubepods-burstable-pod2cc02328_06f0_4a98_b1ae_24ff4a044a72.slice: Consumed 5.534s CPU time.
Apr 13 20:16:47.971910 kubelet[2596]: I0413 20:16:47.971882 2596 scope.go:117] "RemoveContainer" containerID="0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9"
Apr 13 20:16:47.975743 containerd[1528]: time="2026-04-13T20:16:47.975547243Z" level=info msg="RemoveContainer for \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\""
Apr 13 20:16:47.984758 containerd[1528]: time="2026-04-13T20:16:47.984563624Z" level=info msg="RemoveContainer for \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\" returns successfully"
Apr 13 20:16:47.984813 kubelet[2596]: I0413 20:16:47.984706 2596 scope.go:117] "RemoveContainer" containerID="aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5"
Apr 13 20:16:47.985960 containerd[1528]: time="2026-04-13T20:16:47.985785052Z" level=info msg="RemoveContainer for \"aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5\""
Apr 13 20:16:47.988964 containerd[1528]: time="2026-04-13T20:16:47.988925696Z" level=info msg="RemoveContainer for \"aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5\" returns successfully"
Apr 13 20:16:47.989137 kubelet[2596]: I0413 20:16:47.989024 2596 scope.go:117] "RemoveContainer" containerID="2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9"
Apr 13 20:16:47.991625 containerd[1528]: time="2026-04-13T20:16:47.990629032Z" level=info msg="RemoveContainer for \"2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9\""
Apr 13 20:16:47.995982 containerd[1528]: time="2026-04-13T20:16:47.995918852Z" level=info msg="RemoveContainer for \"2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9\" returns successfully"
Apr 13 20:16:47.996146 kubelet[2596]: I0413 20:16:47.996090 2596 scope.go:117] "RemoveContainer" containerID="a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8"
Apr 13 20:16:47.997246 containerd[1528]: time="2026-04-13T20:16:47.997068839Z" level=info msg="RemoveContainer for \"a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8\""
Apr 13 20:16:48.000788 containerd[1528]: time="2026-04-13T20:16:48.000726442Z" level=info msg="RemoveContainer for \"a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8\" returns successfully"
Apr 13 20:16:48.001015 kubelet[2596]: I0413 20:16:48.000909 2596 scope.go:117] "RemoveContainer" containerID="b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e"
Apr 13 20:16:48.003302 containerd[1528]: time="2026-04-13T20:16:48.002881288Z" level=info msg="RemoveContainer for \"b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e\""
Apr 13 20:16:48.006309 containerd[1528]: time="2026-04-13T20:16:48.006273761Z" level=info msg="RemoveContainer for \"b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e\" returns successfully"
Apr 13 20:16:48.006476 kubelet[2596]: I0413 20:16:48.006409 2596 scope.go:117] "RemoveContainer" containerID="0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9"
Apr 13 20:16:48.008313 containerd[1528]: time="2026-04-13T20:16:48.007838808Z" level=error msg="ContainerStatus for \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\": not found"
Apr 13 20:16:48.008482 kubelet[2596]: E0413 20:16:48.008090 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\": not found" containerID="0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9"
Apr 13 20:16:48.008482 kubelet[2596]: I0413 20:16:48.008138 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9"} err="failed to get container status \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a2ba7026c5365e57a63614985851f62a2d6ede666a0a54dc0a2d4ba135719e9\": not found"
Apr 13 20:16:48.008482 kubelet[2596]: I0413 20:16:48.008199 2596 scope.go:117] "RemoveContainer" containerID="aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5"
Apr 13 20:16:48.008891 containerd[1528]: time="2026-04-13T20:16:48.008829096Z" level=error msg="ContainerStatus for \"aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5\": not found"
Apr 13 20:16:48.009038 kubelet[2596]: E0413 20:16:48.008989 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5\": not found" containerID="aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5"
Apr 13 20:16:48.009038 kubelet[2596]: I0413 20:16:48.009017 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5"} err="failed to get container status \"aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa4e9f65038c9202a008e720c860ca24525958a48c43803f3d6895dc68b1fbe5\": not found"
Apr 13 20:16:48.009038 kubelet[2596]: I0413 20:16:48.009030 2596 scope.go:117] "RemoveContainer" containerID="2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9"
Apr 13 20:16:48.009585 containerd[1528]: time="2026-04-13T20:16:48.009518094Z" level=error msg="ContainerStatus for \"2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9\": not found"
Apr 13 20:16:48.009847 kubelet[2596]: E0413 20:16:48.009804 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9\": not found" containerID="2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9"
Apr 13 20:16:48.009895 kubelet[2596]: I0413 20:16:48.009863 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9"} err="failed to get container status \"2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"2426e3177f8524bd4e10b7c419baa2aa8663c60e085803f608c7b05edf25c8a9\": not found"
Apr 13 20:16:48.009915 kubelet[2596]: I0413 20:16:48.009902 2596 scope.go:117] "RemoveContainer" containerID="a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8"
Apr 13 20:16:48.011606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5-rootfs.mount: Deactivated successfully.
Apr 13 20:16:48.011868 containerd[1528]: time="2026-04-13T20:16:48.010108263Z" level=error msg="ContainerStatus for \"a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8\": not found"
Apr 13 20:16:48.011900 kubelet[2596]: E0413 20:16:48.011803 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8\": not found" containerID="a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8"
Apr 13 20:16:48.011900 kubelet[2596]: I0413 20:16:48.011817 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8"} err="failed to get container status \"a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8\": rpc error: code = NotFound desc = an error occurred when try to find container \"a683f1cf67672371460ce975945efb0fcb668b6b1db01a0e9afdad3cf1d1cba8\": not found"
Apr 13 20:16:48.011900 kubelet[2596]: I0413 20:16:48.011827 2596 scope.go:117] "RemoveContainer" containerID="b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e"
Apr 13 20:16:48.012136 containerd[1528]: time="2026-04-13T20:16:48.012015029Z" level=error msg="ContainerStatus for \"b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e\": not found"
Apr 13 20:16:48.012172 kubelet[2596]: E0413 20:16:48.012084 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e\": not found" containerID="b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e"
Apr 13 20:16:48.012172 kubelet[2596]: I0413 20:16:48.012095 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e"} err="failed to get container status \"b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b95a039b04a2ae285c24ab6ce23c572b9db3ff7a46391b788423913ab24ec33e\": not found"
Apr 13 20:16:48.012172 kubelet[2596]: I0413 20:16:48.012104 2596 scope.go:117] "RemoveContainer" containerID="9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e"
Apr 13 20:16:48.012795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1167982a45b719c46c06fa75243dcb96fb8647500f4e392cef60ca14260e6b5-shm.mount: Deactivated successfully.
Apr 13 20:16:48.012968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e0177077c8bec93078b83fdce1922ef6003706c2a97c326418ca1c60c60cc64-rootfs.mount: Deactivated successfully.
Apr 13 20:16:48.013095 systemd[1]: var-lib-kubelet-pods-2cc02328\x2d06f0\x2d4a98\x2db1ae\x2d24ff4a044a72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d77gf5.mount: Deactivated successfully.
Apr 13 20:16:48.013205 systemd[1]: var-lib-kubelet-pods-652331c7\x2d727f\x2d4c5b\x2d910d\x2d5fecfac339c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds2lng.mount: Deactivated successfully.
Apr 13 20:16:48.013278 systemd[1]: var-lib-kubelet-pods-2cc02328\x2d06f0\x2d4a98\x2db1ae\x2d24ff4a044a72-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 13 20:16:48.013456 systemd[1]: var-lib-kubelet-pods-2cc02328\x2d06f0\x2d4a98\x2db1ae\x2d24ff4a044a72-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 13 20:16:48.013766 containerd[1528]: time="2026-04-13T20:16:48.013521807Z" level=info msg="RemoveContainer for \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\""
Apr 13 20:16:48.017623 containerd[1528]: time="2026-04-13T20:16:48.017580088Z" level=info msg="RemoveContainer for \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\" returns successfully"
Apr 13 20:16:48.017743 kubelet[2596]: I0413 20:16:48.017719 2596 scope.go:117] "RemoveContainer" containerID="9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e"
Apr 13 20:16:48.018069 containerd[1528]: time="2026-04-13T20:16:48.018012217Z" level=error msg="ContainerStatus for \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\": not found"
Apr 13 20:16:48.018232 kubelet[2596]: E0413 20:16:48.018205 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\": not found" containerID="9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e"
Apr 13 20:16:48.018261 kubelet[2596]: I0413 20:16:48.018236 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e"} err="failed to get container status \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\": rpc error: code = NotFound desc = an error occurred when try to find container \"9470dbda77934b1b1e8eb9e2fc2e9598f131c7d7d03f2b57fbc08aa37d0d703e\": not found"
Apr 13 20:16:48.707044 kubelet[2596]: E0413 20:16:48.706979 2596 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 20:16:48.970935 sshd[4184]: pam_unix(sshd:session): session closed for user core
Apr 13 20:16:48.986138 systemd[1]: sshd@22-204.168.245.167:22-20.229.252.112:37162.service: Deactivated successfully.
Apr 13 20:16:48.993025 systemd[1]: session-23.scope: Deactivated successfully.
Apr 13 20:16:48.994632 systemd-logind[1506]: Session 23 logged out. Waiting for processes to exit.
Apr 13 20:16:48.998008 systemd-logind[1506]: Removed session 23.
Apr 13 20:16:49.015775 systemd[1]: Started sshd@23-204.168.245.167:22-20.229.252.112:37178.service - OpenSSH per-connection server daemon (20.229.252.112:37178).
Apr 13 20:16:49.236105 sshd[4343]: Accepted publickey for core from 20.229.252.112 port 37178 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:16:49.237656 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:16:49.243258 systemd-logind[1506]: New session 24 of user core.
Apr 13 20:16:49.252674 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 13 20:16:49.645921 kubelet[2596]: I0413 20:16:49.645817 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cc02328-06f0-4a98-b1ae-24ff4a044a72" path="/var/lib/kubelet/pods/2cc02328-06f0-4a98-b1ae-24ff4a044a72/volumes"
Apr 13 20:16:49.647771 kubelet[2596]: I0413 20:16:49.647739 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="652331c7-727f-4c5b-910d-5fecfac339c4" path="/var/lib/kubelet/pods/652331c7-727f-4c5b-910d-5fecfac339c4/volumes"
Apr 13 20:16:49.830174 systemd[1]: Created slice kubepods-burstable-pod1e47f75f_80f0_4fd7_a74c_b222fd755834.slice - libcontainer container kubepods-burstable-pod1e47f75f_80f0_4fd7_a74c_b222fd755834.slice.
Apr 13 20:16:49.832909 sshd[4343]: pam_unix(sshd:session): session closed for user core
Apr 13 20:16:49.839527 systemd-logind[1506]: Session 24 logged out. Waiting for processes to exit.
Apr 13 20:16:49.840237 systemd[1]: sshd@23-204.168.245.167:22-20.229.252.112:37178.service: Deactivated successfully.
Apr 13 20:16:49.844134 systemd[1]: session-24.scope: Deactivated successfully.
Apr 13 20:16:49.846822 systemd-logind[1506]: Removed session 24.
Apr 13 20:16:49.861807 kubelet[2596]: I0413 20:16:49.861752 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e47f75f-80f0-4fd7-a74c-b222fd755834-hostproc\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.861807 kubelet[2596]: I0413 20:16:49.861784 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e47f75f-80f0-4fd7-a74c-b222fd755834-cilium-config-path\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.861807 kubelet[2596]: I0413 20:16:49.861797 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1e47f75f-80f0-4fd7-a74c-b222fd755834-cilium-ipsec-secrets\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.861807 kubelet[2596]: I0413 20:16:49.861811 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbnqg\" (UniqueName: \"kubernetes.io/projected/1e47f75f-80f0-4fd7-a74c-b222fd755834-kube-api-access-nbnqg\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.863640 kubelet[2596]: I0413 20:16:49.861825 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e47f75f-80f0-4fd7-a74c-b222fd755834-cni-path\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.863640 kubelet[2596]: I0413 20:16:49.861835 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e47f75f-80f0-4fd7-a74c-b222fd755834-host-proc-sys-net\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.863640 kubelet[2596]: I0413 20:16:49.861844 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e47f75f-80f0-4fd7-a74c-b222fd755834-lib-modules\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.863640 kubelet[2596]: I0413 20:16:49.861855 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e47f75f-80f0-4fd7-a74c-b222fd755834-etc-cni-netd\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.863640 kubelet[2596]: I0413 20:16:49.861864 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e47f75f-80f0-4fd7-a74c-b222fd755834-xtables-lock\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.863640 kubelet[2596]: I0413 20:16:49.861875 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e47f75f-80f0-4fd7-a74c-b222fd755834-hubble-tls\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.863744 kubelet[2596]: I0413 20:16:49.861883 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e47f75f-80f0-4fd7-a74c-b222fd755834-bpf-maps\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.863744 kubelet[2596]: I0413 20:16:49.861892 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e47f75f-80f0-4fd7-a74c-b222fd755834-cilium-cgroup\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.863744 kubelet[2596]: I0413 20:16:49.861901 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e47f75f-80f0-4fd7-a74c-b222fd755834-cilium-run\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.863744 kubelet[2596]: I0413 20:16:49.861912 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e47f75f-80f0-4fd7-a74c-b222fd755834-host-proc-sys-kernel\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.863744 kubelet[2596]: I0413 20:16:49.861925 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e47f75f-80f0-4fd7-a74c-b222fd755834-clustermesh-secrets\") pod \"cilium-hssxs\" (UID: \"1e47f75f-80f0-4fd7-a74c-b222fd755834\") " pod="kube-system/cilium-hssxs"
Apr 13 20:16:49.867143 systemd[1]: Started sshd@24-204.168.245.167:22-20.229.252.112:37186.service - OpenSSH per-connection server daemon (20.229.252.112:37186).
Apr 13 20:16:50.071943 sshd[4354]: Accepted publickey for core from 20.229.252.112 port 37186 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:16:50.073842 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:16:50.077548 systemd-logind[1506]: New session 25 of user core.
Apr 13 20:16:50.084513 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 13 20:16:50.139096 containerd[1528]: time="2026-04-13T20:16:50.138955446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hssxs,Uid:1e47f75f-80f0-4fd7-a74c-b222fd755834,Namespace:kube-system,Attempt:0,}"
Apr 13 20:16:50.180861 containerd[1528]: time="2026-04-13T20:16:50.179610159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:16:50.180861 containerd[1528]: time="2026-04-13T20:16:50.179682769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:16:50.180861 containerd[1528]: time="2026-04-13T20:16:50.179718558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:16:50.181701 containerd[1528]: time="2026-04-13T20:16:50.181300175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:16:50.217534 systemd[1]: Started cri-containerd-1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db.scope - libcontainer container 1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db.
Apr 13 20:16:50.234624 containerd[1528]: time="2026-04-13T20:16:50.234447155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hssxs,Uid:1e47f75f-80f0-4fd7-a74c-b222fd755834,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db\""
Apr 13 20:16:50.238888 containerd[1528]: time="2026-04-13T20:16:50.238828916Z" level=info msg="CreateContainer within sandbox \"1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 13 20:16:50.240576 sshd[4354]: pam_unix(sshd:session): session closed for user core
Apr 13 20:16:50.243790 systemd-logind[1506]: Session 25 logged out. Waiting for processes to exit.
Apr 13 20:16:50.244399 systemd[1]: sshd@24-204.168.245.167:22-20.229.252.112:37186.service: Deactivated successfully.
Apr 13 20:16:50.246545 systemd[1]: session-25.scope: Deactivated successfully.
Apr 13 20:16:50.247397 systemd-logind[1506]: Removed session 25.
Apr 13 20:16:50.249981 containerd[1528]: time="2026-04-13T20:16:50.249936876Z" level=info msg="CreateContainer within sandbox \"1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4fb1c3eb94b1e3e4b7f712c1ed6cf0a98e2c57512164712d38263a1f7fedd61c\""
Apr 13 20:16:50.250934 containerd[1528]: time="2026-04-13T20:16:50.250378135Z" level=info msg="StartContainer for \"4fb1c3eb94b1e3e4b7f712c1ed6cf0a98e2c57512164712d38263a1f7fedd61c\""
Apr 13 20:16:50.275535 systemd[1]: Started cri-containerd-4fb1c3eb94b1e3e4b7f712c1ed6cf0a98e2c57512164712d38263a1f7fedd61c.scope - libcontainer container 4fb1c3eb94b1e3e4b7f712c1ed6cf0a98e2c57512164712d38263a1f7fedd61c.
Apr 13 20:16:50.282499 systemd[1]: Started sshd@25-204.168.245.167:22-20.229.252.112:37200.service - OpenSSH per-connection server daemon (20.229.252.112:37200).
Apr 13 20:16:50.301650 containerd[1528]: time="2026-04-13T20:16:50.301523898Z" level=info msg="StartContainer for \"4fb1c3eb94b1e3e4b7f712c1ed6cf0a98e2c57512164712d38263a1f7fedd61c\" returns successfully"
Apr 13 20:16:50.309142 systemd[1]: cri-containerd-4fb1c3eb94b1e3e4b7f712c1ed6cf0a98e2c57512164712d38263a1f7fedd61c.scope: Deactivated successfully.
Apr 13 20:16:50.333900 containerd[1528]: time="2026-04-13T20:16:50.333488597Z" level=info msg="shim disconnected" id=4fb1c3eb94b1e3e4b7f712c1ed6cf0a98e2c57512164712d38263a1f7fedd61c namespace=k8s.io
Apr 13 20:16:50.333900 containerd[1528]: time="2026-04-13T20:16:50.333531287Z" level=warning msg="cleaning up after shim disconnected" id=4fb1c3eb94b1e3e4b7f712c1ed6cf0a98e2c57512164712d38263a1f7fedd61c namespace=k8s.io
Apr 13 20:16:50.333900 containerd[1528]: time="2026-04-13T20:16:50.333537847Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:16:50.486767 sshd[4432]: Accepted publickey for core from 20.229.252.112 port 37200 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:16:50.489543 sshd[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:16:50.497527 systemd-logind[1506]: New session 26 of user core.
Apr 13 20:16:50.501647 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 13 20:16:50.976040 systemd[1]: run-containerd-runc-k8s.io-1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db-runc.c9uhX9.mount: Deactivated successfully.
Apr 13 20:16:51.016100 containerd[1528]: time="2026-04-13T20:16:51.016034283Z" level=info msg="CreateContainer within sandbox \"1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 13 20:16:51.040807 containerd[1528]: time="2026-04-13T20:16:51.040743187Z" level=info msg="CreateContainer within sandbox \"1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cc3dbc71742c4d937e94302b93df5582f1ba73b2e510f1cbe8a9f45e3491fbbe\""
Apr 13 20:16:51.042913 containerd[1528]: time="2026-04-13T20:16:51.042859933Z" level=info msg="StartContainer for \"cc3dbc71742c4d937e94302b93df5582f1ba73b2e510f1cbe8a9f45e3491fbbe\""
Apr 13 20:16:51.088534 systemd[1]: Started cri-containerd-cc3dbc71742c4d937e94302b93df5582f1ba73b2e510f1cbe8a9f45e3491fbbe.scope - libcontainer container cc3dbc71742c4d937e94302b93df5582f1ba73b2e510f1cbe8a9f45e3491fbbe.
Apr 13 20:16:51.110453 containerd[1528]: time="2026-04-13T20:16:51.110395908Z" level=info msg="StartContainer for \"cc3dbc71742c4d937e94302b93df5582f1ba73b2e510f1cbe8a9f45e3491fbbe\" returns successfully"
Apr 13 20:16:51.116551 systemd[1]: cri-containerd-cc3dbc71742c4d937e94302b93df5582f1ba73b2e510f1cbe8a9f45e3491fbbe.scope: Deactivated successfully.
Apr 13 20:16:51.141349 containerd[1528]: time="2026-04-13T20:16:51.140673153Z" level=info msg="shim disconnected" id=cc3dbc71742c4d937e94302b93df5582f1ba73b2e510f1cbe8a9f45e3491fbbe namespace=k8s.io
Apr 13 20:16:51.141349 containerd[1528]: time="2026-04-13T20:16:51.140725873Z" level=warning msg="cleaning up after shim disconnected" id=cc3dbc71742c4d937e94302b93df5582f1ba73b2e510f1cbe8a9f45e3491fbbe namespace=k8s.io
Apr 13 20:16:51.141349 containerd[1528]: time="2026-04-13T20:16:51.140735393Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:16:51.974676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc3dbc71742c4d937e94302b93df5582f1ba73b2e510f1cbe8a9f45e3491fbbe-rootfs.mount: Deactivated successfully.
Apr 13 20:16:52.009498 containerd[1528]: time="2026-04-13T20:16:52.009407425Z" level=info msg="CreateContainer within sandbox \"1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 13 20:16:52.033697 containerd[1528]: time="2026-04-13T20:16:52.033631242Z" level=info msg="CreateContainer within sandbox \"1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dee37eb43534ebd6c3382a5605699dd761dd008533868fa1a826d1b9b038b07b\""
Apr 13 20:16:52.034490 containerd[1528]: time="2026-04-13T20:16:52.034380280Z" level=info msg="StartContainer for \"dee37eb43534ebd6c3382a5605699dd761dd008533868fa1a826d1b9b038b07b\""
Apr 13 20:16:52.074552 systemd[1]: Started cri-containerd-dee37eb43534ebd6c3382a5605699dd761dd008533868fa1a826d1b9b038b07b.scope - libcontainer container dee37eb43534ebd6c3382a5605699dd761dd008533868fa1a826d1b9b038b07b.
Apr 13 20:16:52.104096 containerd[1528]: time="2026-04-13T20:16:52.104021194Z" level=info msg="StartContainer for \"dee37eb43534ebd6c3382a5605699dd761dd008533868fa1a826d1b9b038b07b\" returns successfully"
Apr 13 20:16:52.106394 systemd[1]: cri-containerd-dee37eb43534ebd6c3382a5605699dd761dd008533868fa1a826d1b9b038b07b.scope: Deactivated successfully.
Apr 13 20:16:52.131556 containerd[1528]: time="2026-04-13T20:16:52.131501544Z" level=info msg="shim disconnected" id=dee37eb43534ebd6c3382a5605699dd761dd008533868fa1a826d1b9b038b07b namespace=k8s.io
Apr 13 20:16:52.131556 containerd[1528]: time="2026-04-13T20:16:52.131543724Z" level=warning msg="cleaning up after shim disconnected" id=dee37eb43534ebd6c3382a5605699dd761dd008533868fa1a826d1b9b038b07b namespace=k8s.io
Apr 13 20:16:52.131556 containerd[1528]: time="2026-04-13T20:16:52.131550664Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:16:52.974106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dee37eb43534ebd6c3382a5605699dd761dd008533868fa1a826d1b9b038b07b-rootfs.mount: Deactivated successfully.
Apr 13 20:16:53.021524 containerd[1528]: time="2026-04-13T20:16:53.021442557Z" level=info msg="CreateContainer within sandbox \"1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 13 20:16:53.048064 containerd[1528]: time="2026-04-13T20:16:53.047988671Z" level=info msg="CreateContainer within sandbox \"1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eb5d9dad6fe1684040b78f830a014fa425e81369994a5faa4763aa566816a673\""
Apr 13 20:16:53.049095 containerd[1528]: time="2026-04-13T20:16:53.049039238Z" level=info msg="StartContainer for \"eb5d9dad6fe1684040b78f830a014fa425e81369994a5faa4763aa566816a673\""
Apr 13 20:16:53.090568 systemd[1]: Started cri-containerd-eb5d9dad6fe1684040b78f830a014fa425e81369994a5faa4763aa566816a673.scope - libcontainer container eb5d9dad6fe1684040b78f830a014fa425e81369994a5faa4763aa566816a673.
Apr 13 20:16:53.113539 systemd[1]: cri-containerd-eb5d9dad6fe1684040b78f830a014fa425e81369994a5faa4763aa566816a673.scope: Deactivated successfully.
Apr 13 20:16:53.114883 containerd[1528]: time="2026-04-13T20:16:53.114656312Z" level=info msg="StartContainer for \"eb5d9dad6fe1684040b78f830a014fa425e81369994a5faa4763aa566816a673\" returns successfully"
Apr 13 20:16:53.143886 containerd[1528]: time="2026-04-13T20:16:53.143827541Z" level=info msg="shim disconnected" id=eb5d9dad6fe1684040b78f830a014fa425e81369994a5faa4763aa566816a673 namespace=k8s.io
Apr 13 20:16:53.143886 containerd[1528]: time="2026-04-13T20:16:53.143875690Z" level=warning msg="cleaning up after shim disconnected" id=eb5d9dad6fe1684040b78f830a014fa425e81369994a5faa4763aa566816a673 namespace=k8s.io
Apr 13 20:16:53.143886 containerd[1528]: time="2026-04-13T20:16:53.143882900Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:16:53.708548 kubelet[2596]: E0413 20:16:53.708473 2596 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 20:16:53.976231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb5d9dad6fe1684040b78f830a014fa425e81369994a5faa4763aa566816a673-rootfs.mount: Deactivated successfully.
Apr 13 20:16:54.030163 containerd[1528]: time="2026-04-13T20:16:54.029902858Z" level=info msg="CreateContainer within sandbox \"1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 13 20:16:54.073647 containerd[1528]: time="2026-04-13T20:16:54.073593232Z" level=info msg="CreateContainer within sandbox \"1dae6347a99e5c67ff47dfe9fbd1a5c42f25edc3a4e919fab28bc59fc86324db\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"25b844dc521239e336f64c3626bee4541eb6689aca0f84cce2f332347e5f4bba\""
Apr 13 20:16:54.074769 containerd[1528]: time="2026-04-13T20:16:54.074597330Z" level=info msg="StartContainer for \"25b844dc521239e336f64c3626bee4541eb6689aca0f84cce2f332347e5f4bba\""
Apr 13 20:16:54.103545 systemd[1]: Started cri-containerd-25b844dc521239e336f64c3626bee4541eb6689aca0f84cce2f332347e5f4bba.scope - libcontainer container 25b844dc521239e336f64c3626bee4541eb6689aca0f84cce2f332347e5f4bba.
Apr 13 20:16:54.131376 containerd[1528]: time="2026-04-13T20:16:54.131335172Z" level=info msg="StartContainer for \"25b844dc521239e336f64c3626bee4541eb6689aca0f84cce2f332347e5f4bba\" returns successfully"
Apr 13 20:16:54.504449 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 13 20:16:55.035012 kubelet[2596]: I0413 20:16:55.034897 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hssxs" podStartSLOduration=6.034883715 podStartE2EDuration="6.034883715s" podCreationTimestamp="2026-04-13 20:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:16:55.034287026 +0000 UTC m=+131.495756441" watchObservedRunningTime="2026-04-13 20:16:55.034883715 +0000 UTC m=+131.496353130"
Apr 13 20:16:56.829667 systemd[1]: run-containerd-runc-k8s.io-25b844dc521239e336f64c3626bee4541eb6689aca0f84cce2f332347e5f4bba-runc.C8ulrE.mount: Deactivated successfully.
Apr 13 20:16:57.013282 kubelet[2596]: I0413 20:16:57.013230 2596 setters.go:543] "Node became not ready" node="ci-4081-3-7-7-b4460b9a5e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-13T20:16:57Z","lastTransitionTime":"2026-04-13T20:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 13 20:16:57.190736 systemd-networkd[1409]: lxc_health: Link UP
Apr 13 20:16:57.201001 systemd-networkd[1409]: lxc_health: Gained carrier
Apr 13 20:16:58.218633 systemd-networkd[1409]: lxc_health: Gained IPv6LL
Apr 13 20:16:58.964763 systemd[1]: run-containerd-runc-k8s.io-25b844dc521239e336f64c3626bee4541eb6689aca0f84cce2f332347e5f4bba-runc.owb05S.mount: Deactivated successfully.
Apr 13 20:17:03.227249 systemd[1]: run-containerd-runc-k8s.io-25b844dc521239e336f64c3626bee4541eb6689aca0f84cce2f332347e5f4bba-runc.dMOI0a.mount: Deactivated successfully.
Apr 13 20:17:03.312467 sshd[4432]: pam_unix(sshd:session): session closed for user core
Apr 13 20:17:03.317950 systemd[1]: sshd@25-204.168.245.167:22-20.229.252.112:37200.service: Deactivated successfully.
Apr 13 20:17:03.321216 systemd[1]: session-26.scope: Deactivated successfully.
Apr 13 20:17:03.322149 systemd-logind[1506]: Session 26 logged out. Waiting for processes to exit.
Apr 13 20:17:03.323511 systemd-logind[1506]: Removed session 26.