Jan 24 00:39:05.161846 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026 Jan 24 00:39:05.161863 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:39:05.161872 kernel: BIOS-provided physical RAM map: Jan 24 00:39:05.161877 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 24 00:39:05.161881 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable Jan 24 00:39:05.161885 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved Jan 24 00:39:05.161890 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable Jan 24 00:39:05.161895 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved Jan 24 00:39:05.161899 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20 Jan 24 00:39:05.161904 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved Jan 24 00:39:05.161908 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 24 00:39:05.161915 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 24 00:39:05.161919 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable Jan 24 00:39:05.161924 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved Jan 24 00:39:05.161929 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 24 00:39:05.161934 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 24 00:39:05.161941 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 24 00:39:05.161945 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable Jan 24 00:39:05.161950 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 24 00:39:05.161955 kernel: NX (Execute Disable) protection: active Jan 24 00:39:05.161959 kernel: APIC: Static calls initialized Jan 24 00:39:05.161964 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jan 24 00:39:05.161969 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e84f198 Jan 24 00:39:05.161973 kernel: efi: Remove mem137: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 24 00:39:05.161978 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 24 00:39:05.161983 kernel: SMBIOS 3.0.0 present. 
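Note: the BIOS-e820 lines above are the firmware's physical memory map; everything the kernel may use as RAM is tagged "usable", and summing those ranges lands one page above the 4091168K total the kernel reports later in its "Memory:" line (the first page is reserved by the "update [mem 0x00000000-0x00000fff] usable ==> reserved" entry below). A minimal Python sketch of that bookkeeping, assuming the log text has been saved to a file (the file name is illustrative):

    import re

    # Matches entries like: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
    E820 = re.compile(r"BIOS-e820: \[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

    def usable_bytes(log_text: str) -> int:
        total = 0
        for start, end, kind in E820.findall(log_text):
            if kind == "usable":
                total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
        return total

    # usable_bytes(open("boot.log").read()) comes out roughly 4 KiB above
    # 4091168 * 1024, before the kernel carves out its own reservations.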
Jan 24 00:39:05.161988 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Jan 24 00:39:05.161992 kernel: Hypervisor detected: KVM Jan 24 00:39:05.161999 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:39:05.162004 kernel: kvm-clock: using sched offset of 12384341060 cycles Jan 24 00:39:05.162009 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:39:05.162014 kernel: tsc: Detected 2399.998 MHz processor Jan 24 00:39:05.162019 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:39:05.162024 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:39:05.162028 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000 Jan 24 00:39:05.162033 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 24 00:39:05.162038 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:39:05.162045 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000 Jan 24 00:39:05.162050 kernel: Using GB pages for direct mapping Jan 24 00:39:05.162054 kernel: Secure boot disabled Jan 24 00:39:05.162062 kernel: ACPI: Early table checksum verification disabled Jan 24 00:39:05.162067 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS ) Jan 24 00:39:05.162072 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 24 00:39:05.162077 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:39:05.162085 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:39:05.162090 kernel: ACPI: FACS 0x000000007FBDD000 000040 Jan 24 00:39:05.162095 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:39:05.162100 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:39:05.162105 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:39:05.162110 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:39:05.162116 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 24 00:39:05.162123 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3] Jan 24 00:39:05.162128 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442] Jan 24 00:39:05.162133 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f] Jan 24 00:39:05.162138 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f] Jan 24 00:39:05.162143 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037] Jan 24 00:39:05.162148 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b] Jan 24 00:39:05.162153 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027] Jan 24 00:39:05.162158 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037] Jan 24 00:39:05.162163 kernel: No NUMA configuration found Jan 24 00:39:05.162170 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff] Jan 24 00:39:05.162175 kernel: NODE_DATA(0) allocated [mem 0x179ff8000-0x179ffdfff] Jan 24 00:39:05.162180 kernel: Zone ranges: Jan 24 00:39:05.162185 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:39:05.162190 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 24 00:39:05.162196 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff] Jan 24 
00:39:05.162201 kernel: Movable zone start for each node Jan 24 00:39:05.162206 kernel: Early memory node ranges Jan 24 00:39:05.162211 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 24 00:39:05.162216 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff] Jan 24 00:39:05.162223 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff] Jan 24 00:39:05.162228 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff] Jan 24 00:39:05.162233 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff] Jan 24 00:39:05.162238 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff] Jan 24 00:39:05.162243 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:39:05.162248 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 24 00:39:05.162253 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 24 00:39:05.162258 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 24 00:39:05.162263 kernel: On node 0, zone Normal: 132 pages in unavailable ranges Jan 24 00:39:05.162270 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 24 00:39:05.162275 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 00:39:05.162280 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:39:05.162285 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:39:05.162290 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 00:39:05.162295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:39:05.162300 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:39:05.162305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:39:05.162310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:39:05.162318 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:39:05.162347 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:39:05.162352 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 24 00:39:05.162357 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:39:05.162362 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 24 00:39:05.162367 kernel: Booting paravirtualized kernel on KVM Jan 24 00:39:05.162372 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:39:05.162377 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:39:05.162382 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 24 00:39:05.162390 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 24 00:39:05.162395 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:39:05.162400 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 24 00:39:05.162406 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:39:05.162411 kernel: random: crng init done Jan 24 00:39:05.162416 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:39:05.162421 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 
24 00:39:05.162426 kernel: Fallback order for Node 0: 0 Jan 24 00:39:05.162431 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632 Jan 24 00:39:05.162439 kernel: Policy zone: Normal Jan 24 00:39:05.162444 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:39:05.162449 kernel: software IO TLB: area num 2. Jan 24 00:39:05.162454 kernel: Memory: 3819388K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 271576K reserved, 0K cma-reserved) Jan 24 00:39:05.162459 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:39:05.162464 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:39:05.162469 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:39:05.162474 kernel: Dynamic Preempt: voluntary Jan 24 00:39:05.162479 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:39:05.162487 kernel: rcu: RCU event tracing is enabled. Jan 24 00:39:05.162492 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:39:05.162498 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:39:05.162509 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:39:05.162517 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:39:05.162522 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:39:05.162527 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:39:05.162533 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 24 00:39:05.162538 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:39:05.162543 kernel: Console: colour dummy device 80x25 Jan 24 00:39:05.162548 kernel: printk: console [tty0] enabled Jan 24 00:39:05.162554 kernel: printk: console [ttyS0] enabled Jan 24 00:39:05.162561 kernel: ACPI: Core revision 20230628 Jan 24 00:39:05.162566 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 24 00:39:05.162572 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:39:05.162577 kernel: x2apic enabled Jan 24 00:39:05.162582 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:39:05.162590 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 24 00:39:05.162595 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 24 00:39:05.162600 kernel: Calibrating delay loop (skipped) preset value.. 
4799.99 BogoMIPS (lpj=2399998) Jan 24 00:39:05.162605 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 00:39:05.162611 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 24 00:39:05.162616 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 24 00:39:05.162621 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:39:05.162626 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 24 00:39:05.162634 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 24 00:39:05.162639 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 24 00:39:05.162644 kernel: active return thunk: srso_alias_return_thunk Jan 24 00:39:05.162649 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET Jan 24 00:39:05.162655 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 24 00:39:05.162660 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:39:05.162665 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:39:05.162670 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:39:05.162675 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:39:05.162683 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 24 00:39:05.162688 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 24 00:39:05.162693 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 24 00:39:05.162699 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 24 00:39:05.162704 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:39:05.162709 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 24 00:39:05.162714 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 24 00:39:05.162719 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 24 00:39:05.162724 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Jan 24 00:39:05.162732 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. Jan 24 00:39:05.162737 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:39:05.162742 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:39:05.162748 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:39:05.162753 kernel: landlock: Up and running. Jan 24 00:39:05.162758 kernel: SELinux: Initializing. Jan 24 00:39:05.162763 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:39:05.162768 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:39:05.162774 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0) Jan 24 00:39:05.162790 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:39:05.162795 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:39:05.162800 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:39:05.162805 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 24 00:39:05.162810 kernel: ... version: 0 Jan 24 00:39:05.162816 kernel: ... bit width: 48 Jan 24 00:39:05.162821 kernel: ... 
generic registers: 6 Jan 24 00:39:05.162826 kernel: ... value mask: 0000ffffffffffff Jan 24 00:39:05.162831 kernel: ... max period: 00007fffffffffff Jan 24 00:39:05.162839 kernel: ... fixed-purpose events: 0 Jan 24 00:39:05.162844 kernel: ... event mask: 000000000000003f Jan 24 00:39:05.162849 kernel: signal: max sigframe size: 3376 Jan 24 00:39:05.162854 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:39:05.162860 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:39:05.162865 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:39:05.162870 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:39:05.162876 kernel: .... node #0, CPUs: #1 Jan 24 00:39:05.162881 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:39:05.162888 kernel: smpboot: Max logical packages: 1 Jan 24 00:39:05.162893 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS) Jan 24 00:39:05.162899 kernel: devtmpfs: initialized Jan 24 00:39:05.162904 kernel: x86/mm: Memory block size: 128MB Jan 24 00:39:05.162909 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes) Jan 24 00:39:05.162914 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:39:05.162919 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:39:05.162924 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:39:05.162930 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:39:05.162937 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:39:05.162943 kernel: audit: type=2000 audit(1769215143.520:1): state=initialized audit_enabled=0 res=1 Jan 24 00:39:05.162948 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:39:05.162953 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:39:05.162958 kernel: cpuidle: using governor menu Jan 24 00:39:05.162963 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:39:05.162968 kernel: dca service started, version 1.12.1 Jan 24 00:39:05.162974 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 24 00:39:05.162979 kernel: PCI: Using configuration type 1 for base access Jan 24 00:39:05.162986 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
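Note: the MMCONFIG window registered above ([mem 0xe0000000-0xefffffff], base 0xe0000000) is ECAM: each PCIe function gets a 4 KiB slice of configuration space at a fixed offset from the base, computed from its bus/device/function numbers. A quick sketch of the standard offset formula; the example device, the AHCI controller at 0000:00:1f.2, appears later in this log:

    ECAM_BASE = 0xE0000000  # "(base 0xe0000000)" in the MMCONFIG line above

    def ecam_addr(bus: int, dev: int, fn: int) -> int:
        # Standard ECAM layout: one 4 KiB config window per function
        return ECAM_BASE + (bus << 20) + (dev << 15) + (fn << 12)

    assert ecam_addr(0, 0x1F, 2) == 0xE00FA000  # 0000:00:1f.2 (SATA/AHCI)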
Jan 24 00:39:05.162992 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:39:05.162997 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:39:05.163002 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:39:05.163007 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:39:05.163012 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:39:05.163017 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:39:05.163022 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:39:05.163028 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:39:05.163035 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:39:05.163040 kernel: ACPI: Interpreter enabled Jan 24 00:39:05.163045 kernel: ACPI: PM: (supports S0 S5) Jan 24 00:39:05.163051 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:39:05.163059 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:39:05.163064 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:39:05.163070 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 00:39:05.163075 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:39:05.163229 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:39:05.163350 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 00:39:05.163461 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 00:39:05.163468 kernel: PCI host bridge to bus 0000:00 Jan 24 00:39:05.163569 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 00:39:05.163657 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 00:39:05.163745 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:39:05.163847 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window] Jan 24 00:39:05.163934 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 24 00:39:05.164020 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window] Jan 24 00:39:05.164107 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:39:05.164217 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 24 00:39:05.164337 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Jan 24 00:39:05.164435 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref] Jan 24 00:39:05.164535 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref] Jan 24 00:39:05.164630 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff] Jan 24 00:39:05.164727 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 24 00:39:05.164833 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 24 00:39:05.164928 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:39:05.165031 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 24 00:39:05.165127 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff] Jan 24 00:39:05.165233 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 24 00:39:05.165348 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff] Jan 24 00:39:05.165449 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 24 00:39:05.165545 kernel: pci 0000:00:02.2: reg 0x10: [mem 
0x81387000-0x81387fff] Jan 24 00:39:05.165647 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 24 00:39:05.165745 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff] Jan 24 00:39:05.165856 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 24 00:39:05.165952 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff] Jan 24 00:39:05.166054 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 24 00:39:05.166149 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff] Jan 24 00:39:05.166251 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 24 00:39:05.166356 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff] Jan 24 00:39:05.166479 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 24 00:39:05.166575 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff] Jan 24 00:39:05.166676 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 24 00:39:05.166773 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff] Jan 24 00:39:05.166884 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 24 00:39:05.166979 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 00:39:05.167084 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 24 00:39:05.167178 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f] Jan 24 00:39:05.167273 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff] Jan 24 00:39:05.167390 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 24 00:39:05.167485 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f] Jan 24 00:39:05.167592 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 24 00:39:05.167696 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff] Jan 24 00:39:05.167803 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref] Jan 24 00:39:05.167902 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 24 00:39:05.167998 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 24 00:39:05.168093 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff] Jan 24 00:39:05.168188 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref] Jan 24 00:39:05.168295 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 24 00:39:05.168409 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit] Jan 24 00:39:05.168505 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 24 00:39:05.168601 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff] Jan 24 00:39:05.168707 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 24 00:39:05.168815 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff] Jan 24 00:39:05.168914 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref] Jan 24 00:39:05.169010 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 24 00:39:05.169108 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff] Jan 24 00:39:05.169203 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref] Jan 24 00:39:05.169310 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 24 00:39:05.169435 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref] Jan 24 00:39:05.169532 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 24 00:39:05.169627 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref] Jan 24 00:39:05.169736 
kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 24 00:39:05.169849 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff] Jan 24 00:39:05.169948 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref] Jan 24 00:39:05.170044 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 24 00:39:05.170139 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff] Jan 24 00:39:05.170234 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref] Jan 24 00:39:05.170413 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 24 00:39:05.170516 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff] Jan 24 00:39:05.170618 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref] Jan 24 00:39:05.170714 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 24 00:39:05.170818 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff] Jan 24 00:39:05.170913 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref] Jan 24 00:39:05.170920 kernel: acpiphp: Slot [0] registered Jan 24 00:39:05.171027 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 24 00:39:05.171127 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff] Jan 24 00:39:05.171226 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 24 00:39:05.171338 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 24 00:39:05.171447 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 24 00:39:05.171543 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff] Jan 24 00:39:05.171638 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref] Jan 24 00:39:05.171645 kernel: acpiphp: Slot [0-2] registered Jan 24 00:39:05.171740 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 24 00:39:05.171843 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff] Jan 24 00:39:05.171937 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref] Jan 24 00:39:05.171946 kernel: acpiphp: Slot [0-3] registered Jan 24 00:39:05.172042 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 24 00:39:05.172136 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff] Jan 24 00:39:05.172229 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref] Jan 24 00:39:05.172236 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:39:05.172241 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:39:05.172247 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:39:05.172252 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:39:05.172260 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 00:39:05.172265 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 00:39:05.172271 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 00:39:05.172276 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 00:39:05.172282 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 24 00:39:05.172287 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 00:39:05.172292 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 00:39:05.172297 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 00:39:05.172303 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 00:39:05.172311 kernel: ACPI: PCI: 
Interrupt link GSIF configured for IRQ 21 Jan 24 00:39:05.172316 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 00:39:05.172397 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 00:39:05.172403 kernel: iommu: Default domain type: Translated Jan 24 00:39:05.172408 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:39:05.172413 kernel: efivars: Registered efivars operations Jan 24 00:39:05.172419 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:39:05.172424 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:39:05.172430 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff] Jan 24 00:39:05.172438 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Jan 24 00:39:05.172443 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff] Jan 24 00:39:05.172449 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff] Jan 24 00:39:05.172549 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 00:39:05.172646 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 24 00:39:05.172740 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:39:05.172747 kernel: vgaarb: loaded Jan 24 00:39:05.172753 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 24 00:39:05.172758 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 24 00:39:05.172766 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:39:05.172772 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:39:05.172785 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:39:05.172790 kernel: pnp: PnP ACPI init Jan 24 00:39:05.172895 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved Jan 24 00:39:05.172902 kernel: pnp: PnP ACPI: found 5 devices Jan 24 00:39:05.172908 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:39:05.172913 kernel: NET: Registered PF_INET protocol family Jan 24 00:39:05.172934 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:39:05.172942 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 24 00:39:05.172948 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:39:05.172954 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:39:05.172959 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 24 00:39:05.172965 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 24 00:39:05.172970 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:39:05.172976 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:39:05.172981 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:39:05.172989 kernel: NET: Registered PF_XDP protocol family Jan 24 00:39:05.173092 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window Jan 24 00:39:05.173192 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window Jan 24 00:39:05.173288 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 24 00:39:05.175429 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 24 00:39:05.175541 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 
24 00:39:05.175640 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Jan 24 00:39:05.175740 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Jan 24 00:39:05.175847 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Jan 24 00:39:05.175949 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref] Jan 24 00:39:05.176045 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 24 00:39:05.176144 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff] Jan 24 00:39:05.176239 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref] Jan 24 00:39:05.176347 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 24 00:39:05.176444 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff] Jan 24 00:39:05.176542 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 24 00:39:05.176638 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff] Jan 24 00:39:05.176734 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref] Jan 24 00:39:05.176839 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 24 00:39:05.176934 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref] Jan 24 00:39:05.177035 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 24 00:39:05.177130 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff] Jan 24 00:39:05.177224 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref] Jan 24 00:39:05.179342 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 24 00:39:05.179470 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff] Jan 24 00:39:05.179570 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref] Jan 24 00:39:05.179675 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref] Jan 24 00:39:05.179772 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 24 00:39:05.179884 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Jan 24 00:39:05.179979 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff] Jan 24 00:39:05.180075 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref] Jan 24 00:39:05.180171 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 24 00:39:05.180266 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Jan 24 00:39:05.180404 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff] Jan 24 00:39:05.180500 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref] Jan 24 00:39:05.180595 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 24 00:39:05.180693 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Jan 24 00:39:05.180798 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff] Jan 24 00:39:05.180892 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref] Jan 24 00:39:05.180988 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:39:05.181077 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:39:05.181170 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:39:05.181259 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 24 00:39:05.183524 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 24 00:39:05.183622 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Jan 24 00:39:05.183724 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] 
Jan 24 00:39:05.183826 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Jan 24 00:39:05.183924 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Jan 24 00:39:05.184028 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Jan 24 00:39:05.184119 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Jan 24 00:39:05.184221 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Jan 24 00:39:05.184328 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Jan 24 00:39:05.184422 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Jan 24 00:39:05.184520 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Jan 24 00:39:05.184615 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Jan 24 00:39:05.184717 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Jan 24 00:39:05.184817 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Jan 24 00:39:05.184909 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Jan 24 00:39:05.185006 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Jan 24 00:39:05.185109 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Jan 24 00:39:05.185201 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Jan 24 00:39:05.185302 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Jan 24 00:39:05.188192 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Jan 24 00:39:05.188294 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Jan 24 00:39:05.188302 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 00:39:05.188308 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:39:05.188314 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 00:39:05.188333 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Jan 24 00:39:05.188339 kernel: Initialise system trusted keyrings Jan 24 00:39:05.188350 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 00:39:05.188355 kernel: Key type asymmetric registered Jan 24 00:39:05.188361 kernel: Asymmetric key parser 'x509' registered Jan 24 00:39:05.188366 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:39:05.188371 kernel: io scheduler mq-deadline registered Jan 24 00:39:05.188377 kernel: io scheduler kyber registered Jan 24 00:39:05.188383 kernel: io scheduler bfq registered Jan 24 00:39:05.188488 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 24 00:39:05.188589 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 24 00:39:05.188689 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 24 00:39:05.188794 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 24 00:39:05.188893 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 24 00:39:05.188990 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 24 00:39:05.189091 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 24 00:39:05.189188 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 24 00:39:05.189285 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 24 00:39:05.189391 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 24 00:39:05.189492 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 24 00:39:05.189586 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 24 00:39:05.189682 kernel: 
pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 24 00:39:05.189788 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 24 00:39:05.189884 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 24 00:39:05.189980 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 24 00:39:05.189987 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 00:39:05.190083 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 24 00:39:05.190182 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 24 00:39:05.190188 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:39:05.190194 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 24 00:39:05.190200 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:39:05.190206 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:39:05.190211 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:39:05.190217 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:39:05.190223 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:39:05.190229 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:39:05.190361 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 24 00:39:05.190456 kernel: rtc_cmos 00:03: registered as rtc0 Jan 24 00:39:05.190547 kernel: rtc_cmos 00:03: setting system clock to 2026-01-24T00:39:04 UTC (1769215144) Jan 24 00:39:05.190639 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 24 00:39:05.190645 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 24 00:39:05.190655 kernel: efifb: probing for efifb Jan 24 00:39:05.190661 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Jan 24 00:39:05.190666 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 24 00:39:05.190674 kernel: efifb: scrolling: redraw Jan 24 00:39:05.190680 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 24 00:39:05.190685 kernel: Console: switching to colour frame buffer device 160x50 Jan 24 00:39:05.190691 kernel: fb0: EFI VGA frame buffer device Jan 24 00:39:05.190696 kernel: pstore: Using crash dump compression: deflate Jan 24 00:39:05.190702 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:39:05.190708 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:39:05.190713 kernel: Segment Routing with IPv6 Jan 24 00:39:05.190719 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:39:05.190727 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:39:05.190732 kernel: Key type dns_resolver registered Jan 24 00:39:05.190738 kernel: IPI shorthand broadcast: enabled Jan 24 00:39:05.190744 kernel: sched_clock: Marking stable (1359011051, 188969381)->(1573913396, -25932964) Jan 24 00:39:05.190749 kernel: registered taskstats version 1 Jan 24 00:39:05.190755 kernel: Loading compiled-in X.509 certificates Jan 24 00:39:05.190761 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:39:05.190766 kernel: Key type .fscrypt registered Jan 24 00:39:05.190771 kernel: Key type fscrypt-provisioning registered Jan 24 00:39:05.190787 kernel: ima: No TPM chip found, activating TPM-bypass! 
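Note: in the resource assignment above, the "can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window" messages followed by "BAR 6: assigned [mem 0x81280000-0x812fffff pref]" show the allocator moving an expansion-ROM BAR from its power-on address, which lies outside every window the upstream bridge forwards, into the bridge's [mem 0x81200000-0x812fffff] window. The check is plain interval containment; the ranges below are copied from the log:

    def fits(window: tuple[int, int], bar: tuple[int, int]) -> bool:
        # Both ranges are inclusive, exactly as the kernel prints them
        return window[0] <= bar[0] and bar[1] <= window[1]

    bridge_win   = (0x81200000, 0x812FFFFF)  # 0000:00:02.0 window for [bus 01]
    rom_power_on = (0xFFF80000, 0xFFFFFFFF)  # "can't claim BAR 6"
    rom_assigned = (0x81280000, 0x812FFFFF)  # "BAR 6: assigned"

    assert not fits(bridge_win, rom_power_on)
    assert fits(bridge_win, rom_assigned)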
Jan 24 00:39:05.190793 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:39:05.190798 kernel: ima: No architecture policies found Jan 24 00:39:05.190804 kernel: clk: Disabling unused clocks Jan 24 00:39:05.190809 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:39:05.190815 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:39:05.190821 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:39:05.190826 kernel: Run /init as init process Jan 24 00:39:05.190832 kernel: with arguments: Jan 24 00:39:05.190841 kernel: /init Jan 24 00:39:05.190846 kernel: with environment: Jan 24 00:39:05.190852 kernel: HOME=/ Jan 24 00:39:05.190858 kernel: TERM=linux Jan 24 00:39:05.190866 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:39:05.190874 systemd[1]: Detected virtualization kvm. Jan 24 00:39:05.190880 systemd[1]: Detected architecture x86-64. Jan 24 00:39:05.190888 systemd[1]: Running in initrd. Jan 24 00:39:05.190894 systemd[1]: No hostname configured, using default hostname. Jan 24 00:39:05.190900 systemd[1]: Hostname set to . Jan 24 00:39:05.190906 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:39:05.190912 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:39:05.190918 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:39:05.190924 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:39:05.190930 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:39:05.190939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:39:05.190945 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:39:05.190951 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:39:05.190958 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:39:05.190964 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:39:05.190970 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:39:05.190976 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:39:05.190984 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:39:05.190990 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:39:05.190995 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:39:05.191001 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:39:05.191007 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:39:05.191013 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:39:05.191019 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:39:05.191027 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
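Note: the \x2d sequences in the device unit names above are systemd's path escaping (what systemd-escape --path produces): the leading "/" is dropped, remaining "/" become "-", and a literal "-" inside a path component is hex-escaped so the mapping stays reversible. A simplified sketch that covers only the characters appearing in this log, not systemd's full rule set:

    def escape_path(path: str) -> str:
        # Simplified systemd path escaping; the real rules also special-case
        # leading dots and the empty path.
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in "_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    assert escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device" == \
        "dev-disk-by\\x2dlabel-EFI\\x2dSYSTEM.device"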
Jan 24 00:39:05.191035 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:39:05.191041 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:39:05.191047 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:39:05.191053 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:39:05.191059 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:39:05.191065 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:39:05.191071 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:39:05.191076 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:39:05.191082 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:39:05.191090 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:39:05.191117 systemd-journald[187]: Collecting audit messages is disabled. Jan 24 00:39:05.191131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:39:05.191137 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:39:05.191146 systemd-journald[187]: Journal started Jan 24 00:39:05.191160 systemd-journald[187]: Runtime Journal (/run/log/journal/7b8275398aff442dbc9e4de0d8d60ed9) is 8.0M, max 76.3M, 68.3M free. Jan 24 00:39:05.194878 systemd-modules-load[188]: Inserted module 'overlay' Jan 24 00:39:05.203378 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:39:05.205752 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:39:05.208598 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:39:05.217722 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:39:05.221338 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:39:05.225368 kernel: Bridge firewalling registered Jan 24 00:39:05.224924 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:39:05.226164 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:39:05.226353 systemd-modules-load[188]: Inserted module 'br_netfilter' Jan 24 00:39:05.228211 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:39:05.238478 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:39:05.240435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:39:05.241026 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:39:05.248602 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:39:05.251365 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:39:05.259133 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:39:05.266486 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:39:05.267091 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:39:05.268654 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 24 00:39:05.272451 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:39:05.274933 dracut-cmdline[221]: dracut-dracut-053 Jan 24 00:39:05.278545 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:39:05.299287 systemd-resolved[228]: Positive Trust Anchors: Jan 24 00:39:05.299302 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:39:05.299348 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:39:05.303385 systemd-resolved[228]: Defaulting to hostname 'linux'. Jan 24 00:39:05.304385 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:39:05.305414 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:39:05.338357 kernel: SCSI subsystem initialized Jan 24 00:39:05.346342 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:39:05.362348 kernel: iscsi: registered transport (tcp) Jan 24 00:39:05.380577 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:39:05.380683 kernel: QLogic iSCSI HBA Driver Jan 24 00:39:05.435048 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:39:05.440475 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:39:05.492495 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:39:05.492586 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:39:05.495383 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:39:05.560395 kernel: raid6: avx512x4 gen() 19501 MB/s Jan 24 00:39:05.579377 kernel: raid6: avx512x2 gen() 21445 MB/s Jan 24 00:39:05.598397 kernel: raid6: avx512x1 gen() 23359 MB/s Jan 24 00:39:05.616369 kernel: raid6: avx2x4 gen() 46146 MB/s Jan 24 00:39:05.634368 kernel: raid6: avx2x2 gen() 55561 MB/s Jan 24 00:39:05.653132 kernel: raid6: avx2x1 gen() 43633 MB/s Jan 24 00:39:05.653207 kernel: raid6: using algorithm avx2x2 gen() 55561 MB/s Jan 24 00:39:05.672351 kernel: raid6: .... xor() 37119 MB/s, rmw enabled Jan 24 00:39:05.672380 kernel: raid6: using avx512x2 recovery algorithm Jan 24 00:39:05.689380 kernel: xor: automatically using best checksumming function avx Jan 24 00:39:05.847385 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:39:05.865426 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:39:05.873671 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
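Note: dracut and Flatcar's first-boot logic both key off the kernel command-line parameters echoed above (mount.usr, verity.usrhash, flatcar.oem.id, ...). Parsing them amounts to whitespace splitting with a cut at the first "=", so root=LABEL=ROOT keeps its second "=" in the value, and the duplicated rootflags=rw / mount.usrflags=ro tokens that dracut prepends simply resolve to the same value. A hedged sketch; the real kernel parser additionally honors quoting, which this ignores:

    def parse_cmdline(cmdline: str) -> dict[str, str]:
        params = {}
        for token in cmdline.split():
            key, _, value = token.partition("=")  # split at the first '=' only
            params[key] = value                   # bare flags get value ""
        return params

    p = parse_cmdline(
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
        "flatcar.first_boot=detected flatcar.oem.id=hetzner"
    )
    assert p["root"] == "LABEL=ROOT" and p["flatcar.oem.id"] == "hetzner"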
Jan 24 00:39:05.886115 systemd-udevd[407]: Using default interface naming scheme 'v255'. Jan 24 00:39:05.890283 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:39:05.899599 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:39:05.918144 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Jan 24 00:39:05.958286 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:39:05.966537 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:39:06.092775 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:39:06.102644 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:39:06.122534 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:39:06.130559 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:39:06.132948 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:39:06.133906 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:39:06.144715 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:39:06.174004 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:39:06.209394 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:39:06.221363 kernel: scsi host0: Virtio SCSI HBA Jan 24 00:39:06.239357 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 24 00:39:06.239948 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:39:06.240770 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:39:06.241289 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:39:06.243386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:39:06.243479 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:39:06.243833 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:39:06.260578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:39:06.267978 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:39:06.268562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:39:06.278353 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:39:06.278297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:39:06.288344 kernel: libata version 3.00 loaded. Jan 24 00:39:06.293341 kernel: AES CTR mode by8 optimization enabled Jan 24 00:39:06.302542 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:39:06.312427 kernel: ACPI: bus type USB registered Jan 24 00:39:06.312656 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 24 00:39:06.316344 kernel: usbcore: registered new interface driver usbfs Jan 24 00:39:06.318350 kernel: usbcore: registered new interface driver hub Jan 24 00:39:06.318391 kernel: usbcore: registered new device driver usb Jan 24 00:39:06.348443 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 24 00:39:06.353545 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 24 00:39:06.353773 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 24 00:39:06.355350 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:39:06.355569 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 24 00:39:06.359942 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 24 00:39:06.360151 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:39:06.363349 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 24 00:39:06.363847 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:39:06.368468 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 00:39:06.368689 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:39:06.368818 kernel: hub 1-0:1.0: USB hub found Jan 24 00:39:06.371059 kernel: hub 1-0:1.0: 4 ports detected Jan 24 00:39:06.374339 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 24 00:39:06.377337 kernel: sd 0:0:0:0: Power-on or device reset occurred Jan 24 00:39:06.381478 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Jan 24 00:39:06.381657 kernel: hub 2-0:1.0: USB hub found Jan 24 00:39:06.381800 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 24 00:39:06.381925 kernel: hub 2-0:1.0: 4 ports detected Jan 24 00:39:06.382040 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 24 00:39:06.384251 kernel: scsi host1: ahci Jan 24 00:39:06.384467 kernel: scsi host2: ahci Jan 24 00:39:06.386343 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 24 00:39:06.386522 kernel: scsi host3: ahci Jan 24 00:39:06.391343 kernel: scsi host4: ahci Jan 24 00:39:06.391521 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:39:06.391531 kernel: scsi host5: ahci Jan 24 00:39:06.391652 kernel: GPT:17805311 != 160006143 Jan 24 00:39:06.393350 kernel: scsi host6: ahci Jan 24 00:39:06.393386 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:39:06.398362 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 51 Jan 24 00:39:06.398413 kernel: GPT:17805311 != 160006143 Jan 24 00:39:06.398423 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 51 Jan 24 00:39:06.398431 kernel: GPT: Use GNU Parted to correct GPT errors. 
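Note: the "GPT:17805311 != 160006143" complaint above is the usual image-on-a-bigger-disk situation: the backup GPT header sits at the last LBA of the original ~9 GB disk image rather than at the end of this 81.9 GB virtual disk, so the kernel warns and carries on. The numbers check out at 512-byte sectors, and the disk-uuid step below rewrites both headers ("Primary Header is updated. ... Secondary Header is updated."), which is why the warning does not recur on the later rescans:

    SECTOR = 512

    image_last_lba = 17_805_311    # where the backup GPT header actually is
    disk_last_lba  = 160_006_143   # the disk's real last LBA

    image_bytes = (image_last_lba + 1) * SECTOR
    disk_bytes  = (disk_last_lba + 1) * SECTOR

    print(f"{image_bytes/1e9:.1f} GB image on a {disk_bytes/1e9:.1f} GB disk")
    # -> "9.1 GB image on a 81.9 GB disk", matching the earlier
    #    "[sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB)" line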
Jan 24 00:39:06.398438 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 51 Jan 24 00:39:06.398446 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:39:06.398464 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 51 Jan 24 00:39:06.398472 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 24 00:39:06.398674 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 51 Jan 24 00:39:06.414393 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 51 Jan 24 00:39:06.439351 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (463) Jan 24 00:39:06.446946 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 24 00:39:06.451343 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (452) Jan 24 00:39:06.452710 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 24 00:39:06.457033 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 24 00:39:06.462214 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 24 00:39:06.463067 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 24 00:39:06.468738 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:39:06.476433 disk-uuid[582]: Primary Header is updated. Jan 24 00:39:06.476433 disk-uuid[582]: Secondary Entries is updated. Jan 24 00:39:06.476433 disk-uuid[582]: Secondary Header is updated. Jan 24 00:39:06.484351 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:39:06.490352 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:39:06.617346 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 24 00:39:06.731359 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 24 00:39:06.731456 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:39:06.737428 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 00:39:06.737534 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 00:39:06.743349 kernel: ata1.00: applying bridge limits Jan 24 00:39:06.751654 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:39:06.751722 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:39:06.756379 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:39:06.764378 kernel: ata1.00: configured for UDMA/100 Jan 24 00:39:06.771416 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 00:39:06.782377 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 24 00:39:06.803169 kernel: usbcore: registered new interface driver usbhid Jan 24 00:39:06.803273 kernel: usbhid: USB HID core driver Jan 24 00:39:06.821938 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 24 00:39:06.822016 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 24 00:39:06.837041 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 00:39:06.837674 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:39:06.852574 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jan 24 00:39:07.494404 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:39:07.496692 
disk-uuid[583]: The operation has completed successfully. Jan 24 00:39:07.563007 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:39:07.563106 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:39:07.573452 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:39:07.581936 sh[603]: Success Jan 24 00:39:07.600416 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 24 00:39:07.670066 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:39:07.671398 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:39:07.682969 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:39:07.700429 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:39:07.700502 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:39:07.705457 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:39:07.711398 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:39:07.715954 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:39:07.731360 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 24 00:39:07.734659 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:39:07.736593 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:39:07.741595 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:39:07.746633 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:39:07.772419 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:39:07.781619 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:39:07.781675 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:39:07.794855 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:39:07.794917 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:39:07.825853 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:39:07.825413 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:39:07.837581 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:39:07.843647 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:39:07.951241 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:39:07.957577 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
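verity-setup.service above assembles /dev/mapper/usr so that every read from the /usr partition is checked against a sha256 Merkle tree (accelerated here by the "sha256-ni" implementation) whose trusted root hash is supplied on the kernel command line. A toy illustration of the idea only, not the real dm-verity on-disk format:

```python
# Toy illustration of the dm-verity idea (NOT the real on-disk format):
# hash every data block, fold the block hashes pairwise into a tree, and
# compare the resulting root against a trusted root hash. A mismatch on
# any read means the backing data was modified.
import hashlib

BLOCK = 4096

def block_hashes(data: bytes):
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def root_hash(data: bytes) -> bytes:
    level = block_hashes(data)
    while len(level) > 1:  # fold pairs upward until one root remains
        level = [hashlib.sha256(level[i] + (level[i + 1] if i + 1 < len(level) else b""))
                 .digest() for i in range(0, len(level), 2)]
    return level[0]

image = b"\x00" * (4 * BLOCK)
trusted = root_hash(image)
assert root_hash(image) == trusted        # clean image verifies
tampered = b"\x01" + image[1:]
assert root_hash(tampered) != trusted     # any flipped byte is detected
```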
Jan 24 00:39:07.958440 ignition[709]: Ignition 2.19.0 Jan 24 00:39:07.958452 ignition[709]: Stage: fetch-offline Jan 24 00:39:07.958520 ignition[709]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:39:07.958530 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:39:07.958694 ignition[709]: parsed url from cmdline: "" Jan 24 00:39:07.958698 ignition[709]: no config URL provided Jan 24 00:39:07.958711 ignition[709]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:39:07.958719 ignition[709]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:39:07.958725 ignition[709]: failed to fetch config: resource requires networking Jan 24 00:39:07.959074 ignition[709]: Ignition finished successfully Jan 24 00:39:07.966653 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:39:07.984548 systemd-networkd[788]: lo: Link UP Jan 24 00:39:07.984559 systemd-networkd[788]: lo: Gained carrier Jan 24 00:39:07.987437 systemd-networkd[788]: Enumeration completed Jan 24 00:39:07.987632 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:39:07.988184 systemd[1]: Reached target network.target - Network. Jan 24 00:39:07.988610 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:39:07.988615 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:39:07.989498 systemd-networkd[788]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:39:07.989502 systemd-networkd[788]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:39:07.990066 systemd-networkd[788]: eth0: Link UP Jan 24 00:39:07.990070 systemd-networkd[788]: eth0: Gained carrier Jan 24 00:39:07.990077 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:39:07.994549 systemd-networkd[788]: eth1: Link UP Jan 24 00:39:07.994896 systemd-networkd[788]: eth1: Gained carrier Jan 24 00:39:07.994903 systemd-networkd[788]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:39:07.995490 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 24 00:39:08.007712 ignition[791]: Ignition 2.19.0 Jan 24 00:39:08.007726 ignition[791]: Stage: fetch Jan 24 00:39:08.007896 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:39:08.007908 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:39:08.008006 ignition[791]: parsed url from cmdline: "" Jan 24 00:39:08.008010 ignition[791]: no config URL provided Jan 24 00:39:08.008016 ignition[791]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:39:08.008026 ignition[791]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:39:08.008043 ignition[791]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 24 00:39:08.008185 ignition[791]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 24 00:39:08.032408 systemd-networkd[788]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 24 00:39:08.053404 systemd-networkd[788]: eth0: DHCPv4 address 157.180.47.226/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 24 00:39:08.208457 ignition[791]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 24 00:39:08.212816 ignition[791]: GET result: OK Jan 24 00:39:08.212893 ignition[791]: parsing config with SHA512: f86b67a510157bb19f68b822d00d950dc1038b5a723dffd3fc1c9f1f58120c6b1d63973d3707c6eb1caa16ae93132235b9fb24ac6fd450f918d299c87c5d58a0 Jan 24 00:39:08.216109 unknown[791]: fetched base config from "system" Jan 24 00:39:08.216596 ignition[791]: fetch: fetch complete Jan 24 00:39:08.216120 unknown[791]: fetched base config from "system" Jan 24 00:39:08.216603 ignition[791]: fetch: fetch passed Jan 24 00:39:08.216127 unknown[791]: fetched user config from "hetzner" Jan 24 00:39:08.216655 ignition[791]: Ignition finished successfully Jan 24 00:39:08.221619 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:39:08.228666 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:39:08.248407 ignition[798]: Ignition 2.19.0 Jan 24 00:39:08.248421 ignition[798]: Stage: kargs Jan 24 00:39:08.248697 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:39:08.253088 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:39:08.248712 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:39:08.249762 ignition[798]: kargs: kargs passed Jan 24 00:39:08.249845 ignition[798]: Ignition finished successfully Jan 24 00:39:08.261763 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:39:08.279255 ignition[804]: Ignition 2.19.0 Jan 24 00:39:08.279266 ignition[804]: Stage: disks Jan 24 00:39:08.283085 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:39:08.279426 ignition[804]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:39:08.283725 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:39:08.279436 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:39:08.284707 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:39:08.279967 ignition[804]: disks: disks passed Jan 24 00:39:08.285825 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:39:08.280006 ignition[804]: Ignition finished successfully Jan 24 00:39:08.286995 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:39:08.288278 systemd[1]: Reached target basic.target - Basic System. 
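The fetch stage above shows the retry behaviour clearly: attempt #1 against the Hetzner metadata endpoint fails with "network is unreachable", DHCP then configures eth0/eth1, and attempt #2 succeeds, after which Ignition logs the config's SHA512. A minimal sketch of that loop; the URL and the hashing are taken from the log, while the retry pacing is an assumption:

```python
# Sketch of the fetch behaviour visible above: retry the Hetzner userdata
# endpoint until networking is up, then report the config's SHA512 as
# Ignition does. Retry count/delay are illustrative, not Ignition's.
import hashlib
import time
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(retries=5, delay=2.0) -> bytes:
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=5) as resp:
                return resp.read()
        except OSError as exc:  # e.g. "network is unreachable" before DHCP
            print(f"GET {USERDATA_URL}: attempt #{attempt} failed: {exc}")
            time.sleep(delay)
    raise RuntimeError("failed to fetch config: resource requires networking")

if __name__ == "__main__":
    data = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
```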
Jan 24 00:39:08.294532 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:39:08.315074 systemd-fsck[812]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 24 00:39:08.320135 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:39:08.326646 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:39:08.445365 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:39:08.445546 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:39:08.446477 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:39:08.452435 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:39:08.459547 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:39:08.465544 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 24 00:39:08.466697 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:39:08.466731 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:39:08.469729 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:39:08.476347 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (820) Jan 24 00:39:08.476845 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:39:08.480356 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:39:08.485158 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:39:08.485207 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:39:08.503361 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:39:08.504185 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:39:08.514404 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:39:08.544824 initrd-setup-root[847]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:39:08.552202 initrd-setup-root[854]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:39:08.560347 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:39:08.565748 coreos-metadata[822]: Jan 24 00:39:08.565 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 24 00:39:08.567244 coreos-metadata[822]: Jan 24 00:39:08.566 INFO Fetch successful Jan 24 00:39:08.567244 coreos-metadata[822]: Jan 24 00:39:08.566 INFO wrote hostname ci-4081-3-6-n-a6966cf543 to /sysroot/etc/hostname Jan 24 00:39:08.567999 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:39:08.570832 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:39:08.675721 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:39:08.681421 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:39:08.683484 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:39:08.698136 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:39:08.703715 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:39:08.717433 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
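flatcar-metadata-hostname.service, finished above, performs exactly the two steps the coreos-metadata lines record: fetch the hostname from the Hetzner metadata service and write it into the target root. A sketch under those assumptions; the helper is illustrative, not the real agent:

```python
# Sketch of the metadata hostname agent logged above: fetch the hostname
# from the Hetzner metadata endpoint and write it under the target root.
# URL and destination path are taken from the log lines.
import urllib.request

META = "http://169.254.169.254/hetzner/v1/metadata/hostname"

def write_hostname(sysroot="/sysroot"):
    with urllib.request.urlopen(META, timeout=5) as resp:
        hostname = resp.read().decode().strip()
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to {sysroot}/etc/hostname")

if __name__ == "__main__":
    write_hostname()
```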
Jan 24 00:39:08.733062 ignition[937]: INFO : Ignition 2.19.0 Jan 24 00:39:08.733062 ignition[937]: INFO : Stage: mount Jan 24 00:39:08.734102 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:39:08.734102 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:39:08.735457 ignition[937]: INFO : mount: mount passed Jan 24 00:39:08.735457 ignition[937]: INFO : Ignition finished successfully Jan 24 00:39:08.736653 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:39:08.744492 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:39:08.752598 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:39:08.769256 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948) Jan 24 00:39:08.769341 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:39:08.769360 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:39:08.771564 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:39:08.779503 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:39:08.779568 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:39:08.784964 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:39:08.813886 ignition[965]: INFO : Ignition 2.19.0 Jan 24 00:39:08.813886 ignition[965]: INFO : Stage: files Jan 24 00:39:08.814833 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:39:08.814833 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:39:08.815480 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:39:08.816588 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:39:08.817012 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:39:08.820866 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:39:08.821212 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:39:08.821611 unknown[965]: wrote ssh authorized keys file for user: core Jan 24 00:39:08.822148 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:39:08.824136 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:39:08.824748 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:39:09.144158 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:39:09.172971 systemd-networkd[788]: eth1: Gained IPv6LL Jan 24 00:39:09.173615 systemd-networkd[788]: eth0: Gained IPv6LL Jan 24 00:39:09.450110 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:39:09.450110 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 24 00:39:09.452817 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 24 00:39:09.563485 
ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 24 00:39:09.658887 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 24 00:39:09.659533 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:39:09.659533 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:39:09.659533 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:39:09.659533 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:39:09.661557 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:39:09.661557 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:39:09.661557 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:39:09.661557 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:39:09.661557 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:39:09.661557 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:39:09.661557 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:39:09.661557 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:39:09.661557 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:39:09.661557 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:39:09.818563 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 24 00:39:10.146004 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:39:10.146004 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 24 00:39:10.148905 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:39:10.148905 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:39:10.148905 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 24 00:39:10.148905 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 24 
00:39:10.148905 ignition[965]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:39:10.148905 ignition[965]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:39:10.148905 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 24 00:39:10.148905 ignition[965]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:39:10.148905 ignition[965]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:39:10.148905 ignition[965]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:39:10.148905 ignition[965]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:39:10.148905 ignition[965]: INFO : files: files passed Jan 24 00:39:10.148905 ignition[965]: INFO : Ignition finished successfully Jan 24 00:39:10.152900 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:39:10.164752 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:39:10.170544 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:39:10.174571 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:39:10.175590 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:39:10.206164 initrd-setup-root-after-ignition[998]: grep: Jan 24 00:39:10.208234 initrd-setup-root-after-ignition[994]: grep: Jan 24 00:39:10.208234 initrd-setup-root-after-ignition[998]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:39:10.210232 initrd-setup-root-after-ignition[994]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:39:10.210232 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:39:10.210610 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:39:10.213194 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:39:10.224641 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:39:10.284294 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:39:10.284512 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:39:10.286767 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:39:10.288287 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:39:10.290092 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:39:10.296569 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:39:10.318997 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:39:10.327666 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:39:10.355315 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
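The files stage that finished above downloaded remote artifacts (the helm and cilium tarballs, the kubernetes sysext image), wrote static files and a systemd drop-in under /sysroot, and set prepare-helm.service to enabled. A condensed sketch of the pattern; write_file and the unit body are illustrative stand-ins, while the paths and URL come from the log:

```python
# Condensed sketch of the Ignition "files" stage recorded above: fetch
# remote artifacts, write them under /sysroot, and drop unit files into
# /sysroot/etc/systemd/system. Not Ignition's real engine.
import os
import urllib.request

SYSROOT = "/sysroot"

def write_file(path: str, url: str | None = None, data: bytes = b""):
    dest = SYSROOT + path
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    if url is not None:  # remote artifact, e.g. the helm tarball
        with urllib.request.urlopen(url) as resp:
            data = resp.read()
    with open(dest, "wb") as f:
        f.write(data)

write_file("/opt/helm-v3.17.0-linux-amd64.tar.gz",
           url="https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz")
write_file("/etc/systemd/system/prepare-helm.service",
           data=b"[Unit]\n# unit body supplied by the Ignition config\n")
```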
Jan 24 00:39:10.356491 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:39:10.357711 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:39:10.359398 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:39:10.359603 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:39:10.361759 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:39:10.363396 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:39:10.364907 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:39:10.366457 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:39:10.368036 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:39:10.369610 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:39:10.371399 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:39:10.373033 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:39:10.374628 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:39:10.376200 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:39:10.377770 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:39:10.377986 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:39:10.380192 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:39:10.381962 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:39:10.383491 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:39:10.384125 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:39:10.385826 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:39:10.386015 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:39:10.388271 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:39:10.388515 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:39:10.389965 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:39:10.390135 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:39:10.391573 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 24 00:39:10.391740 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:39:10.402721 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:39:10.403690 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:39:10.403932 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:39:10.410684 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:39:10.412877 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:39:10.413263 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:39:10.415688 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:39:10.417581 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:39:10.427209 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 24 00:39:10.427472 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:39:10.430995 ignition[1018]: INFO : Ignition 2.19.0 Jan 24 00:39:10.430995 ignition[1018]: INFO : Stage: umount Jan 24 00:39:10.430995 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:39:10.430995 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:39:10.430995 ignition[1018]: INFO : umount: umount passed Jan 24 00:39:10.430995 ignition[1018]: INFO : Ignition finished successfully Jan 24 00:39:10.435777 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:39:10.436007 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:39:10.438285 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:39:10.440312 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:39:10.443686 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:39:10.443776 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:39:10.444535 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:39:10.444603 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:39:10.447452 systemd[1]: Stopped target network.target - Network. Jan 24 00:39:10.448624 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:39:10.448706 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:39:10.449414 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:39:10.450019 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:39:10.456412 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:39:10.457082 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:39:10.457694 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:39:10.460420 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:39:10.460522 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:39:10.462597 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:39:10.462678 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:39:10.463364 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:39:10.463445 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:39:10.464719 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:39:10.464790 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:39:10.466390 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:39:10.467948 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:39:10.471532 systemd-networkd[788]: eth1: DHCPv6 lease lost Jan 24 00:39:10.474270 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:39:10.475596 systemd-networkd[788]: eth0: DHCPv6 lease lost Jan 24 00:39:10.479258 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:39:10.480210 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:39:10.484166 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:39:10.485191 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:39:10.487232 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 24 00:39:10.487425 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:39:10.490589 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:39:10.490678 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:39:10.492142 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:39:10.492221 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:39:10.500490 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:39:10.501160 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:39:10.501258 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:39:10.502025 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:39:10.502097 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:39:10.505027 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:39:10.505111 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:39:10.506515 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:39:10.506595 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:39:10.510476 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:39:10.527200 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:39:10.527461 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:39:10.546621 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:39:10.546954 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:39:10.548364 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:39:10.548449 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:39:10.549666 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:39:10.549746 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:39:10.551159 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:39:10.551245 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:39:10.553713 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:39:10.553793 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:39:10.555992 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:39:10.556068 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:39:10.564540 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:39:10.566125 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:39:10.566932 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:39:10.568787 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:39:10.568890 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:39:10.578740 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:39:10.579702 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:39:10.581393 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jan 24 00:39:10.587541 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:39:10.611718 systemd[1]: Switching root. Jan 24 00:39:10.651128 systemd-journald[187]: Journal stopped Jan 24 00:39:12.269450 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Jan 24 00:39:12.269509 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:39:12.269522 kernel: SELinux: policy capability open_perms=1 Jan 24 00:39:12.269538 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:39:12.269546 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:39:12.269557 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:39:12.269569 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:39:12.269580 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:39:12.269738 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:39:12.269748 kernel: audit: type=1403 audit(1769215150.912:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:39:12.269758 systemd[1]: Successfully loaded SELinux policy in 70.049ms. Jan 24 00:39:12.269773 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.966ms. Jan 24 00:39:12.269783 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:39:12.269792 systemd[1]: Detected virtualization kvm. Jan 24 00:39:12.269801 systemd[1]: Detected architecture x86-64. Jan 24 00:39:12.269820 systemd[1]: Detected first boot. Jan 24 00:39:12.269828 systemd[1]: Hostname set to <ci-4081-3-6-n-a6966cf543>. Jan 24 00:39:12.269837 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:39:12.269846 zram_generator::config[1061]: No configuration found. Jan 24 00:39:12.269856 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:39:12.269865 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:39:12.269874 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:39:12.269884 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:39:12.269895 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:39:12.269905 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:39:12.269913 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:39:12.269922 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:39:12.269934 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:39:12.269943 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:39:12.269952 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:39:12.269960 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:39:12.269971 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:39:12.269980 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:39:12.269989 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
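Once the real root is up, the journal reports phase timings inline ("loaded SELinux policy in 70.049ms", "Relabeled ... in 18.966ms"). A small hypothetical helper for pulling such timings out of a journal dump like this one:

```python
# Hypothetical log-analysis helper: extract the "... in Xms" timings that
# systemd prints, such as the SELinux policy load and relabel times above.
import re

PATTERN = re.compile(r"(?P<what>[A-Z][^.]*?) in (?P<ms>\d+(?:\.\d+)?)ms")

def timings(journal_text: str):
    return [(m.group("what"), float(m.group("ms")))
            for m in PATTERN.finditer(journal_text)]

sample = ("Successfully loaded SELinux policy in 70.049ms. "
          "Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.966ms.")
print(timings(sample))
# [('Successfully loaded SELinux policy', 70.049),
#  ('Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup', 18.966)]
```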
Jan 24 00:39:12.270002 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:39:12.270011 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:39:12.270023 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:39:12.270032 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:39:12.270040 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:39:12.270049 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:39:12.270060 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:39:12.270069 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:39:12.270078 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:39:12.270087 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:39:12.270096 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:39:12.270108 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:39:12.270119 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:39:12.270128 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:39:12.270136 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:39:12.270145 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:39:12.270154 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:39:12.270163 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:39:12.270171 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:39:12.270180 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:39:12.270189 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:39:12.270198 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:39:12.270209 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:39:12.270217 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:39:12.270226 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:39:12.270235 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:39:12.270247 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:39:12.270255 systemd[1]: Reached target machines.target - Containers. Jan 24 00:39:12.270264 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:39:12.270273 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:39:12.270285 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:39:12.270293 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:39:12.270302 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:39:12.270311 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
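Several units above are skipped because of unmet condition checks, e.g. ConditionVirtualization=xen on proc-xen.mount and ConditionPathExists=/var/lib/machines.raw on var-lib-machines.mount. systemd evaluates these natively; the sketch below only illustrates the semantics (the "!" negation prefix and the virtualization comparison), with "kvm" taken from the "Detected virtualization kvm" line above:

```python
# Illustrative evaluation of the systemd Condition* checks the log keeps
# citing. Real systemd does this internally; this is a sketch of the
# semantics only.
import os

def condition_path_exists(arg: str) -> bool:
    negate = arg.startswith("!")     # "!" inverts the check, as in systemd
    path = arg.lstrip("!")
    ok = os.path.exists(path)
    return (not ok) if negate else ok

def condition_virtualization(arg: str, detected: str = "kvm") -> bool:
    # The journal says "Detected virtualization kvm", so xen-only units skip.
    return arg == detected

print(condition_virtualization("xen"))                 # False -> unit skipped
print(condition_path_exists("/var/lib/machines.raw"))  # False here -> skipped
```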
Jan 24 00:39:12.270330 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:39:12.270339 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:39:12.270350 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:39:12.270361 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:39:12.270370 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:39:12.270379 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:39:12.270388 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:39:12.270396 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:39:12.270405 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:39:12.270414 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:39:12.270423 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:39:12.270431 kernel: fuse: init (API version 7.39) Jan 24 00:39:12.270442 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:39:12.270451 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:39:12.270459 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 00:39:12.270468 systemd[1]: Stopped verity-setup.service. Jan 24 00:39:12.270477 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:39:12.270486 kernel: ACPI: bus type drm_connector registered Jan 24 00:39:12.270494 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:39:12.270503 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:39:12.270512 kernel: loop: module loaded Jan 24 00:39:12.270523 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:39:12.270532 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:39:12.270540 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:39:12.270549 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:39:12.270558 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:39:12.270586 systemd-journald[1148]: Collecting audit messages is disabled. Jan 24 00:39:12.270603 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:39:12.270616 systemd-journald[1148]: Journal started Jan 24 00:39:12.270632 systemd-journald[1148]: Runtime Journal (/run/log/journal/7b8275398aff442dbc9e4de0d8d60ed9) is 8.0M, max 76.3M, 68.3M free. Jan 24 00:39:11.919528 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:39:11.947772 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 24 00:39:11.948727 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:39:12.274400 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:39:12.275233 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:39:12.275493 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:39:12.276219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 24 00:39:12.276434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:39:12.277123 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:39:12.277339 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:39:12.277998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:39:12.278175 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:39:12.279105 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:39:12.279283 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:39:12.279953 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:39:12.280131 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:39:12.280789 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:39:12.281446 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:39:12.282078 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:39:12.291405 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:39:12.305414 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:39:12.310432 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:39:12.311220 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:39:12.311249 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:39:12.312901 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:39:12.317474 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:39:12.324473 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:39:12.324965 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:39:12.326276 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:39:12.328441 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:39:12.328822 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:39:12.330476 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:39:12.330870 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:39:12.333449 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:39:12.335150 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:39:12.338447 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:39:12.340171 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:39:12.341211 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:39:12.352237 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 24 00:39:12.360432 kernel: loop0: detected capacity change from 0 to 8 Jan 24 00:39:12.370341 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:39:12.375623 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:39:12.376528 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:39:12.385453 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:39:12.389422 systemd-journald[1148]: Time spent on flushing to /var/log/journal/7b8275398aff442dbc9e4de0d8d60ed9 is 55.999ms for 1189 entries. Jan 24 00:39:12.389422 systemd-journald[1148]: System Journal (/var/log/journal/7b8275398aff442dbc9e4de0d8d60ed9) is 8.0M, max 584.8M, 576.8M free. Jan 24 00:39:12.465725 systemd-journald[1148]: Received client request to flush runtime journal. Jan 24 00:39:12.465757 kernel: loop1: detected capacity change from 0 to 142488 Jan 24 00:39:12.465769 kernel: loop2: detected capacity change from 0 to 224512 Jan 24 00:39:12.437111 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:39:12.447182 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:39:12.447771 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:39:12.457400 udevadm[1195]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 24 00:39:12.467554 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:39:12.476110 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:39:12.483997 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:39:12.485160 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:39:12.487093 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:39:12.509349 kernel: loop3: detected capacity change from 0 to 140768 Jan 24 00:39:12.521843 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Jan 24 00:39:12.523107 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Jan 24 00:39:12.530767 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:39:12.551381 kernel: loop4: detected capacity change from 0 to 8 Jan 24 00:39:12.555344 kernel: loop5: detected capacity change from 0 to 142488 Jan 24 00:39:12.575348 kernel: loop6: detected capacity change from 0 to 224512 Jan 24 00:39:12.597353 kernel: loop7: detected capacity change from 0 to 140768 Jan 24 00:39:12.613869 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 24 00:39:12.614446 (sd-merge)[1207]: Merged extensions into '/usr'. Jan 24 00:39:12.619342 systemd[1]: Reloading requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:39:12.619353 systemd[1]: Reloading... Jan 24 00:39:12.692350 zram_generator::config[1233]: No configuration found. Jan 24 00:39:12.820124 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:39:12.857468 ldconfig[1176]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
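The (sd-merge) lines above are systemd-sysext merging the four extension images into /usr: conceptually, each image contributes a /usr tree that is stacked as an overlayfs lowerdir over the base /usr, and the result stays read-only because no upperdir is given. A sketch of that stacking; the /run/extensions mount points are an assumption, and only the extension names come from the log:

```python
# Sketch of the systemd-sysext merge step logged by (sd-merge): stack each
# extension's /usr tree as an overlayfs lowerdir on top of the base /usr.
extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-hetzner"]

# Hypothetical hierarchy: each image attached under /run/extensions/<name>.
lowerdirs = [f"/run/extensions/{name}/usr" for name in reversed(extensions)]
lowerdirs.append("/usr")  # the base /usr is the bottom layer

mount_cmd = ("mount -t overlay overlay "
             f"-o lowerdir={':'.join(lowerdirs)} /usr")
print(mount_cmd)  # read-only merged view; no upperdir, so /usr stays immutable
```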
Jan 24 00:39:12.865393 systemd[1]: Reloading finished in 245 ms. Jan 24 00:39:12.903600 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:39:12.904839 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:39:12.906059 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:39:12.920585 systemd[1]: Starting ensure-sysext.service... Jan 24 00:39:12.923722 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:39:12.934623 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:39:12.942411 systemd[1]: Reloading requested from client PID 1277 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:39:12.942432 systemd[1]: Reloading... Jan 24 00:39:12.963273 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:39:12.963936 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:39:12.964824 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:39:12.965079 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Jan 24 00:39:12.965183 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Jan 24 00:39:12.967973 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:39:12.968050 systemd-tmpfiles[1278]: Skipping /boot Jan 24 00:39:12.982124 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:39:12.982240 systemd-tmpfiles[1278]: Skipping /boot Jan 24 00:39:12.989966 systemd-udevd[1279]: Using default interface naming scheme 'v255'. Jan 24 00:39:13.033350 zram_generator::config[1306]: No configuration found. Jan 24 00:39:13.191995 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:39:13.203338 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1327) Jan 24 00:39:13.222707 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 00:39:13.250154 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:39:13.250833 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 24 00:39:13.251565 systemd[1]: Reloading finished in 308 ms. Jan 24 00:39:13.272492 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:39:13.268034 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:39:13.268846 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 24 00:39:13.276347 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:39:13.312579 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 24 00:39:13.312862 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:39:13.313000 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 00:39:13.313155 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:39:13.313274 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 24 00:39:13.313290 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 24 00:39:13.314388 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 24 00:39:13.317946 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:39:13.320472 kernel: Console: switching to colour dummy device 80x25 Jan 24 00:39:13.323355 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:39:13.325789 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:39:13.328567 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:39:13.328752 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:39:13.332530 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:39:13.334652 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:39:13.337672 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:39:13.337926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:39:13.341539 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:39:13.345161 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:39:13.348609 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:39:13.352689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:39:13.362955 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 24 00:39:13.359558 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:39:13.359655 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:39:13.365992 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:39:13.367499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:39:13.371525 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:39:13.371698 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 24 00:39:13.373113 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 24 00:39:13.373138 kernel: [drm] features: -context_init Jan 24 00:39:13.371784 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:39:13.372277 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:39:13.372437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:39:13.384142 kernel: [drm] number of scanouts: 1 Jan 24 00:39:13.384250 kernel: [drm] number of cap sets: 0 Jan 24 00:39:13.376412 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:39:13.386385 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 24 00:39:13.393569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:39:13.410397 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 24 00:39:13.410488 kernel: Console: switching to colour frame buffer device 160x50 Jan 24 00:39:13.422055 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 24 00:39:13.425004 systemd[1]: Finished ensure-sysext.service. Jan 24 00:39:13.426306 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:39:13.426935 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:39:13.436688 augenrules[1418]: No rules Jan 24 00:39:13.447384 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:39:13.455640 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:39:13.458365 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:39:13.458743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:39:13.459263 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:39:13.460654 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:39:13.461016 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:39:13.461144 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:39:13.472492 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:39:13.476794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:39:13.477249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:39:13.487618 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:39:13.488372 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:39:13.500987 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:39:13.508614 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:39:13.511574 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:39:13.525045 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:39:13.550534 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jan 24 00:39:13.551913 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:39:13.560590 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:39:13.561893 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:39:13.586898 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:39:13.615673 systemd-resolved[1404]: Positive Trust Anchors: Jan 24 00:39:13.615688 systemd-resolved[1404]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:39:13.615712 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:39:13.621876 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:39:13.622279 systemd-resolved[1404]: Using system hostname 'ci-4081-3-6-n-a6966cf543'. Jan 24 00:39:13.623438 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:39:13.631415 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:39:13.632038 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:39:13.633705 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:39:13.634138 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:39:13.638205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:39:13.643887 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:39:13.645837 systemd-networkd[1403]: lo: Link UP Jan 24 00:39:13.646091 systemd-networkd[1403]: lo: Gained carrier Jan 24 00:39:13.646582 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:39:13.647042 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:39:13.648964 systemd-networkd[1403]: Enumeration completed Jan 24 00:39:13.649712 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:39:13.649794 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:39:13.651065 systemd-networkd[1403]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:39:13.651128 systemd-networkd[1403]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:39:13.651129 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 24 00:39:13.651736 systemd-networkd[1403]: eth0: Link UP Jan 24 00:39:13.652206 systemd-networkd[1403]: eth0: Gained carrier Jan 24 00:39:13.652256 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:39:13.654181 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:39:13.654216 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:39:13.654625 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:39:13.655177 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:39:13.655645 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:39:13.656005 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:39:13.659372 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:39:13.659721 systemd-networkd[1403]: eth1: Link UP Jan 24 00:39:13.659730 systemd-networkd[1403]: eth1: Gained carrier Jan 24 00:39:13.659749 systemd-networkd[1403]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:39:13.660607 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:39:13.664625 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:39:13.673482 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:39:13.674661 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:39:13.675286 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:39:13.677845 systemd[1]: Reached target network.target - Network. Jan 24 00:39:13.678232 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:39:13.678614 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:39:13.679012 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:39:13.679035 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:39:13.686416 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:39:13.688471 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:39:13.691504 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:39:13.696430 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:39:13.700590 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:39:13.702758 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:39:13.702891 systemd-networkd[1403]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 24 00:39:13.705862 jq[1464]: false Jan 24 00:39:13.706162 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. Jan 24 00:39:13.708482 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:39:13.717495 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:39:13.722510 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Jan 24 00:39:13.727387 systemd-networkd[1403]: eth0: DHCPv4 address 157.180.47.226/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 24 00:39:13.728258 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:39:13.733738 extend-filesystems[1465]: Found loop4 Jan 24 00:39:13.737452 extend-filesystems[1465]: Found loop5 Jan 24 00:39:13.742416 extend-filesystems[1465]: Found loop6 Jan 24 00:39:13.742416 extend-filesystems[1465]: Found loop7 Jan 24 00:39:13.742416 extend-filesystems[1465]: Found sda Jan 24 00:39:13.742416 extend-filesystems[1465]: Found sda1 Jan 24 00:39:13.742416 extend-filesystems[1465]: Found sda2 Jan 24 00:39:13.742416 extend-filesystems[1465]: Found sda3 Jan 24 00:39:13.742416 extend-filesystems[1465]: Found usr Jan 24 00:39:13.742416 extend-filesystems[1465]: Found sda4 Jan 24 00:39:13.742416 extend-filesystems[1465]: Found sda6 Jan 24 00:39:13.742416 extend-filesystems[1465]: Found sda7 Jan 24 00:39:13.742416 extend-filesystems[1465]: Found sda9 Jan 24 00:39:13.742416 extend-filesystems[1465]: Checking size of /dev/sda9 Jan 24 00:39:13.737749 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:39:13.744488 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. Jan 24 00:39:13.756537 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:39:13.787070 dbus-daemon[1463]: [system] SELinux support is enabled Jan 24 00:39:13.772475 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:39:13.789782 coreos-metadata[1462]: Jan 24 00:39:13.781 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 24 00:39:13.789782 coreos-metadata[1462]: Jan 24 00:39:13.782 INFO Fetch successful Jan 24 00:39:13.789782 coreos-metadata[1462]: Jan 24 00:39:13.782 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 24 00:39:13.789782 coreos-metadata[1462]: Jan 24 00:39:13.784 INFO Fetch successful Jan 24 00:39:13.773880 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:39:13.774291 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:39:13.798559 jq[1487]: true Jan 24 00:39:13.783005 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:39:13.794438 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:39:13.797658 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:39:13.800288 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:39:13.811521 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:39:13.812278 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:39:13.812615 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:39:13.813062 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:39:13.825883 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:39:13.826080 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 24 00:39:13.835918 extend-filesystems[1465]: Resized partition /dev/sda9 Jan 24 00:39:13.846700 extend-filesystems[1504]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:39:13.849043 update_engine[1484]: I20260124 00:39:13.841439 1484 main.cc:92] Flatcar Update Engine starting Jan 24 00:39:13.849043 update_engine[1484]: I20260124 00:39:13.846621 1484 update_check_scheduler.cc:74] Next update check in 11m19s Jan 24 00:39:13.858687 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Jan 24 00:39:13.862412 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:39:13.862458 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:39:13.862909 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:39:13.862928 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:39:13.866139 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:39:13.878387 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:39:13.884067 systemd-logind[1476]: New seat seat0. Jan 24 00:39:13.887409 systemd-logind[1476]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 00:39:13.887427 systemd-logind[1476]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:39:13.888754 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:39:13.888956 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:39:13.897015 jq[1496]: true Jan 24 00:39:13.908719 tar[1495]: linux-amd64/LICENSE Jan 24 00:39:13.912662 tar[1495]: linux-amd64/helm Jan 24 00:39:13.943780 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1310) Jan 24 00:39:13.991462 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:39:13.995593 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:39:14.065213 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:39:14.080265 bash[1543]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:39:14.080877 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:39:14.095603 systemd[1]: Starting sshkeys.service... Jan 24 00:39:14.104628 containerd[1508]: time="2026-01-24T00:39:14.103607149Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:39:14.115068 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:39:14.126610 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 00:39:14.161221 containerd[1508]: time="2026-01-24T00:39:14.161162533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 24 00:39:14.165160 containerd[1508]: time="2026-01-24T00:39:14.164850185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:39:14.165160 containerd[1508]: time="2026-01-24T00:39:14.164878575Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:39:14.165160 containerd[1508]: time="2026-01-24T00:39:14.164891305Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:39:14.165160 containerd[1508]: time="2026-01-24T00:39:14.165032655Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:39:14.165160 containerd[1508]: time="2026-01-24T00:39:14.165044345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:39:14.165160 containerd[1508]: time="2026-01-24T00:39:14.165094945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:39:14.165160 containerd[1508]: time="2026-01-24T00:39:14.165102465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:39:14.166265 containerd[1508]: time="2026-01-24T00:39:14.165555185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:39:14.166265 containerd[1508]: time="2026-01-24T00:39:14.165571055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:39:14.166265 containerd[1508]: time="2026-01-24T00:39:14.165582215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:39:14.166265 containerd[1508]: time="2026-01-24T00:39:14.165589435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:39:14.166265 containerd[1508]: time="2026-01-24T00:39:14.165667455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:39:14.166265 containerd[1508]: time="2026-01-24T00:39:14.165861255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:39:14.166560 containerd[1508]: time="2026-01-24T00:39:14.166545926Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:39:14.166760 containerd[1508]: time="2026-01-24T00:39:14.166751216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 24 00:39:14.166883 containerd[1508]: time="2026-01-24T00:39:14.166872006Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:39:14.167266 containerd[1508]: time="2026-01-24T00:39:14.167164936Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:39:14.170900 coreos-metadata[1548]: Jan 24 00:39:14.170 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 24 00:39:14.175962 coreos-metadata[1548]: Jan 24 00:39:14.172 INFO Fetch successful Jan 24 00:39:14.186396 unknown[1548]: wrote ssh authorized keys file for user: core Jan 24 00:39:14.188529 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Jan 24 00:39:14.212586 containerd[1508]: time="2026-01-24T00:39:14.212555635Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:39:14.212645 containerd[1508]: time="2026-01-24T00:39:14.212608955Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:39:14.212662 extend-filesystems[1504]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 24 00:39:14.212662 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 24 00:39:14.212662 extend-filesystems[1504]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. Jan 24 00:39:14.214905 extend-filesystems[1465]: Resized filesystem in /dev/sda9 Jan 24 00:39:14.214905 extend-filesystems[1465]: Found sr0 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.213414145Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.213462765Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.213475225Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.214551396Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.216201816Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.216589727Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.216603577Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.216613677Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.216630077Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.216640277Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.216663157Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.216673697Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.216684987Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:39:14.221338 containerd[1508]: time="2026-01-24T00:39:14.216696127Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:39:14.215699 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.216705807Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.216716007Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.216788107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.216801747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.216811217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.216831907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.216840747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.216981977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.216991567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.217001277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.217011037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.217023107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.217031827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221610 containerd[1508]: time="2026-01-24T00:39:14.217040347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.215904 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 24 00:39:14.221867 containerd[1508]: time="2026-01-24T00:39:14.217095877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221867 containerd[1508]: time="2026-01-24T00:39:14.217115927Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:39:14.221867 containerd[1508]: time="2026-01-24T00:39:14.217140047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221867 containerd[1508]: time="2026-01-24T00:39:14.217148707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221867 containerd[1508]: time="2026-01-24T00:39:14.217368327Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:39:14.221867 containerd[1508]: time="2026-01-24T00:39:14.217430617Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:39:14.221867 containerd[1508]: time="2026-01-24T00:39:14.217445947Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:39:14.221867 containerd[1508]: time="2026-01-24T00:39:14.217453887Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:39:14.221867 containerd[1508]: time="2026-01-24T00:39:14.217463297Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:39:14.221867 containerd[1508]: time="2026-01-24T00:39:14.217917577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:39:14.221867 containerd[1508]: time="2026-01-24T00:39:14.217931237Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:39:14.221867 containerd[1508]: time="2026-01-24T00:39:14.217941577Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:39:14.221867 containerd[1508]: time="2026-01-24T00:39:14.217954607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 24 00:39:14.222029 containerd[1508]: time="2026-01-24T00:39:14.218185957Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:39:14.222029 containerd[1508]: time="2026-01-24T00:39:14.218251347Z" level=info msg="Connect containerd service" Jan 24 00:39:14.222029 containerd[1508]: time="2026-01-24T00:39:14.218280327Z" level=info msg="using legacy CRI server" Jan 24 00:39:14.222029 containerd[1508]: time="2026-01-24T00:39:14.218285327Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:39:14.222029 containerd[1508]: time="2026-01-24T00:39:14.218381457Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:39:14.226382 containerd[1508]: time="2026-01-24T00:39:14.225073650Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:39:14.226382 
containerd[1508]: time="2026-01-24T00:39:14.225857630Z" level=info msg="Start subscribing containerd event" Jan 24 00:39:14.226382 containerd[1508]: time="2026-01-24T00:39:14.225892160Z" level=info msg="Start recovering state" Jan 24 00:39:14.226382 containerd[1508]: time="2026-01-24T00:39:14.226020410Z" level=info msg="Start event monitor" Jan 24 00:39:14.226382 containerd[1508]: time="2026-01-24T00:39:14.226228781Z" level=info msg="Start snapshots syncer" Jan 24 00:39:14.226382 containerd[1508]: time="2026-01-24T00:39:14.226239661Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:39:14.226382 containerd[1508]: time="2026-01-24T00:39:14.226249571Z" level=info msg="Start streaming server" Jan 24 00:39:14.227525 containerd[1508]: time="2026-01-24T00:39:14.227507281Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:39:14.227852 containerd[1508]: time="2026-01-24T00:39:14.227699891Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:39:14.228149 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:39:14.228961 containerd[1508]: time="2026-01-24T00:39:14.228941442Z" level=info msg="containerd successfully booted in 0.127463s" Jan 24 00:39:14.232023 update-ssh-keys[1554]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:39:14.233167 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:39:14.236547 systemd[1]: Finished sshkeys.service. Jan 24 00:39:14.256157 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:39:14.276459 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:39:14.286667 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:39:14.295956 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:39:14.297354 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:39:14.306567 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:39:14.318639 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:39:14.325697 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:39:14.332649 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:39:14.335169 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:39:14.523442 tar[1495]: linux-amd64/README.md Jan 24 00:39:14.534096 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:39:14.932651 systemd-networkd[1403]: eth0: Gained IPv6LL Jan 24 00:39:14.933548 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. Jan 24 00:39:14.936885 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:39:14.938887 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:39:14.948633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:39:14.953554 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:39:14.992610 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:39:15.188954 systemd-networkd[1403]: eth1: Gained IPv6LL Jan 24 00:39:15.189698 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. Jan 24 00:39:15.637172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:39:15.638310 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:39:15.641693 systemd[1]: Startup finished in 1.558s (kernel) + 6.063s (initrd) + 4.797s (userspace) = 12.419s. Jan 24 00:39:15.642886 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:39:16.219228 kubelet[1596]: E0124 00:39:16.219140 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:39:16.225698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:39:16.226162 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:39:19.089437 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:39:19.097896 systemd[1]: Started sshd@0-157.180.47.226:22-20.161.92.111:45322.service - OpenSSH per-connection server daemon (20.161.92.111:45322). Jan 24 00:39:19.875800 sshd[1608]: Accepted publickey for core from 20.161.92.111 port 45322 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:39:19.879475 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:19.895788 systemd-logind[1476]: New session 1 of user core. Jan 24 00:39:19.898766 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:39:19.905142 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:39:19.933773 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:39:19.943773 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:39:19.949139 (systemd)[1612]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:39:20.094081 systemd[1612]: Queued start job for default target default.target. Jan 24 00:39:20.104491 systemd[1612]: Created slice app.slice - User Application Slice. Jan 24 00:39:20.104518 systemd[1612]: Reached target paths.target - Paths. Jan 24 00:39:20.104530 systemd[1612]: Reached target timers.target - Timers. Jan 24 00:39:20.105997 systemd[1612]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:39:20.143172 systemd[1612]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:39:20.143654 systemd[1612]: Reached target sockets.target - Sockets. Jan 24 00:39:20.143807 systemd[1612]: Reached target basic.target - Basic System. Jan 24 00:39:20.143996 systemd[1612]: Reached target default.target - Main User Target. Jan 24 00:39:20.144071 systemd[1612]: Startup finished in 182ms. Jan 24 00:39:20.144470 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:39:20.154610 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:39:20.705758 systemd[1]: Started sshd@1-157.180.47.226:22-20.161.92.111:45326.service - OpenSSH per-connection server daemon (20.161.92.111:45326). 
Jan 24 00:39:21.477558 sshd[1623]: Accepted publickey for core from 20.161.92.111 port 45326 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:39:21.480259 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:21.487851 systemd-logind[1476]: New session 2 of user core. Jan 24 00:39:21.498560 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:39:22.014747 sshd[1623]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:22.019519 systemd[1]: sshd@1-157.180.47.226:22-20.161.92.111:45326.service: Deactivated successfully. Jan 24 00:39:22.022617 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:39:22.025020 systemd-logind[1476]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:39:22.026611 systemd-logind[1476]: Removed session 2. Jan 24 00:39:22.152767 systemd[1]: Started sshd@2-157.180.47.226:22-20.161.92.111:45332.service - OpenSSH per-connection server daemon (20.161.92.111:45332). Jan 24 00:39:22.917976 sshd[1630]: Accepted publickey for core from 20.161.92.111 port 45332 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:39:22.920253 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:22.926596 systemd-logind[1476]: New session 3 of user core. Jan 24 00:39:22.933545 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:39:23.447427 sshd[1630]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:23.454049 systemd[1]: sshd@2-157.180.47.226:22-20.161.92.111:45332.service: Deactivated successfully. Jan 24 00:39:23.457501 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:39:23.458769 systemd-logind[1476]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:39:23.460183 systemd-logind[1476]: Removed session 3. Jan 24 00:39:23.587774 systemd[1]: Started sshd@3-157.180.47.226:22-20.161.92.111:34464.service - OpenSSH per-connection server daemon (20.161.92.111:34464). Jan 24 00:39:24.338890 sshd[1637]: Accepted publickey for core from 20.161.92.111 port 34464 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:39:24.341623 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:24.350423 systemd-logind[1476]: New session 4 of user core. Jan 24 00:39:24.356588 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:39:24.876439 sshd[1637]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:24.880084 systemd[1]: sshd@3-157.180.47.226:22-20.161.92.111:34464.service: Deactivated successfully. Jan 24 00:39:24.882219 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:39:24.883715 systemd-logind[1476]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:39:24.885031 systemd-logind[1476]: Removed session 4. Jan 24 00:39:25.011858 systemd[1]: Started sshd@4-157.180.47.226:22-20.161.92.111:34476.service - OpenSSH per-connection server daemon (20.161.92.111:34476). Jan 24 00:39:25.781207 sshd[1644]: Accepted publickey for core from 20.161.92.111 port 34476 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:39:25.783933 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:25.792404 systemd-logind[1476]: New session 5 of user core. Jan 24 00:39:25.799576 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 24 00:39:26.209185 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:39:26.209906 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:39:26.230490 sudo[1647]: pam_unix(sudo:session): session closed for user root Jan 24 00:39:26.242310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:39:26.248601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:39:26.356989 sshd[1644]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:26.366259 systemd-logind[1476]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:39:26.367776 systemd[1]: sshd@4-157.180.47.226:22-20.161.92.111:34476.service: Deactivated successfully. Jan 24 00:39:26.375302 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:39:26.381012 systemd-logind[1476]: Removed session 5. Jan 24 00:39:26.427705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:39:26.431021 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:39:26.463674 kubelet[1659]: E0124 00:39:26.463488 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:39:26.472573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:39:26.472743 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:39:26.491127 systemd[1]: Started sshd@5-157.180.47.226:22-20.161.92.111:34484.service - OpenSSH per-connection server daemon (20.161.92.111:34484). Jan 24 00:39:27.246508 sshd[1667]: Accepted publickey for core from 20.161.92.111 port 34484 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:39:27.249316 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:27.258287 systemd-logind[1476]: New session 6 of user core. Jan 24 00:39:27.264593 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:39:27.661280 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:39:27.661704 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:39:27.667464 sudo[1671]: pam_unix(sudo:session): session closed for user root Jan 24 00:39:27.678715 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:39:27.679434 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:39:27.697724 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:39:27.700982 auditctl[1674]: No rules Jan 24 00:39:27.703060 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:39:27.703517 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:39:27.711318 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:39:27.746357 augenrules[1692]: No rules Jan 24 00:39:27.748459 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 24 00:39:27.750846 sudo[1670]: pam_unix(sudo:session): session closed for user root Jan 24 00:39:27.874491 sshd[1667]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:27.879554 systemd[1]: sshd@5-157.180.47.226:22-20.161.92.111:34484.service: Deactivated successfully. Jan 24 00:39:27.882903 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:39:27.885448 systemd-logind[1476]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:39:27.887685 systemd-logind[1476]: Removed session 6. Jan 24 00:39:28.016240 systemd[1]: Started sshd@6-157.180.47.226:22-20.161.92.111:34498.service - OpenSSH per-connection server daemon (20.161.92.111:34498). Jan 24 00:39:28.777929 sshd[1700]: Accepted publickey for core from 20.161.92.111 port 34498 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:39:28.780707 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:28.789421 systemd-logind[1476]: New session 7 of user core. Jan 24 00:39:28.802619 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:39:29.194781 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:39:29.195675 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:39:29.626729 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:39:29.627434 (dockerd)[1719]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:39:30.069146 dockerd[1719]: time="2026-01-24T00:39:30.069013639Z" level=info msg="Starting up" Jan 24 00:39:30.237586 dockerd[1719]: time="2026-01-24T00:39:30.237099509Z" level=info msg="Loading containers: start." Jan 24 00:39:30.453425 kernel: Initializing XFRM netlink socket Jan 24 00:39:30.502196 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. Jan 24 00:39:30.616735 systemd-networkd[1403]: docker0: Link UP Jan 24 00:39:30.635725 systemd-timesyncd[1425]: Contacted time server 168.119.211.223:123 (2.flatcar.pool.ntp.org). Jan 24 00:39:30.635846 systemd-timesyncd[1425]: Initial clock synchronization to Sat 2026-01-24 00:39:30.920286 UTC. Jan 24 00:39:30.641540 dockerd[1719]: time="2026-01-24T00:39:30.641457987Z" level=info msg="Loading containers: done." Jan 24 00:39:30.673369 dockerd[1719]: time="2026-01-24T00:39:30.673264121Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:39:30.673706 dockerd[1719]: time="2026-01-24T00:39:30.673487031Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:39:30.673780 dockerd[1719]: time="2026-01-24T00:39:30.673720031Z" level=info msg="Daemon has completed initialization" Jan 24 00:39:30.728155 dockerd[1719]: time="2026-01-24T00:39:30.728053353Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:39:30.730227 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 24 00:39:31.985774 containerd[1508]: time="2026-01-24T00:39:31.985689622Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:39:32.666046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount320961787.mount: Deactivated successfully. Jan 24 00:39:33.720376 containerd[1508]: time="2026-01-24T00:39:33.720306964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:39:33.721326 containerd[1508]: time="2026-01-24T00:39:33.721167959Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070747" Jan 24 00:39:33.722302 containerd[1508]: time="2026-01-24T00:39:33.722056238Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:39:33.724239 containerd[1508]: time="2026-01-24T00:39:33.724206928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:39:33.725098 containerd[1508]: time="2026-01-24T00:39:33.725069793Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.739322045s" Jan 24 00:39:33.725145 containerd[1508]: time="2026-01-24T00:39:33.725098793Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:39:33.725837 containerd[1508]: time="2026-01-24T00:39:33.725814871Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:39:35.039831 containerd[1508]: time="2026-01-24T00:39:35.039775009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:39:35.040882 containerd[1508]: time="2026-01-24T00:39:35.040749938Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993376" Jan 24 00:39:35.042933 containerd[1508]: time="2026-01-24T00:39:35.041765732Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:39:35.043951 containerd[1508]: time="2026-01-24T00:39:35.043719671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:39:35.044408 containerd[1508]: time="2026-01-24T00:39:35.044389050Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.31844103s" 
Jan 24 00:39:35.044449 containerd[1508]: time="2026-01-24T00:39:35.044412490Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\""
Jan 24 00:39:35.044739 containerd[1508]: time="2026-01-24T00:39:35.044722755Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 24 00:39:36.186256 containerd[1508]: time="2026-01-24T00:39:36.186210519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:36.187021 containerd[1508]: time="2026-01-24T00:39:36.186990298Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405098"
Jan 24 00:39:36.188341 containerd[1508]: time="2026-01-24T00:39:36.187674076Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:36.189684 containerd[1508]: time="2026-01-24T00:39:36.189656705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:36.190483 containerd[1508]: time="2026-01-24T00:39:36.190369310Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.145625429s"
Jan 24 00:39:36.190483 containerd[1508]: time="2026-01-24T00:39:36.190391620Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\""
Jan 24 00:39:36.190776 containerd[1508]: time="2026-01-24T00:39:36.190759714Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 24 00:39:36.492125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 24 00:39:36.500734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:39:36.629241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:39:36.633033 (kubelet)[1931]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 00:39:36.663145 kubelet[1931]: E0124 00:39:36.663077 1931 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 00:39:36.668906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 00:39:36.669087 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 00:39:37.344511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1433254258.mount: Deactivated successfully.
Jan 24 00:39:37.705913 containerd[1508]: time="2026-01-24T00:39:37.705773389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:37.706744 containerd[1508]: time="2026-01-24T00:39:37.706702863Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161927"
Jan 24 00:39:37.707938 containerd[1508]: time="2026-01-24T00:39:37.707902964Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:37.709754 containerd[1508]: time="2026-01-24T00:39:37.709722796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:37.710211 containerd[1508]: time="2026-01-24T00:39:37.710181238Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.519329289s"
Jan 24 00:39:37.710272 containerd[1508]: time="2026-01-24T00:39:37.710261135Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\""
Jan 24 00:39:37.710882 containerd[1508]: time="2026-01-24T00:39:37.710855984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 24 00:39:38.230727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3807734803.mount: Deactivated successfully.
Jan 24 00:39:38.985344 containerd[1508]: time="2026-01-24T00:39:38.985278990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:38.986541 containerd[1508]: time="2026-01-24T00:39:38.986363925Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335"
Jan 24 00:39:38.987671 containerd[1508]: time="2026-01-24T00:39:38.987403271Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:38.989924 containerd[1508]: time="2026-01-24T00:39:38.989889201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:38.990730 containerd[1508]: time="2026-01-24T00:39:38.990698038Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.279816323s"
Jan 24 00:39:38.990799 containerd[1508]: time="2026-01-24T00:39:38.990787928Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jan 24 00:39:38.991262 containerd[1508]: time="2026-01-24T00:39:38.991136780Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 24 00:39:39.469240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3278931701.mount: Deactivated successfully.
Jan 24 00:39:39.476065 containerd[1508]: time="2026-01-24T00:39:39.475971126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:39.477309 containerd[1508]: time="2026-01-24T00:39:39.476894278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Jan 24 00:39:39.478466 containerd[1508]: time="2026-01-24T00:39:39.478373036Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:39.480873 containerd[1508]: time="2026-01-24T00:39:39.480810192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:39.482748 containerd[1508]: time="2026-01-24T00:39:39.481650168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 490.254819ms"
Jan 24 00:39:39.482748 containerd[1508]: time="2026-01-24T00:39:39.481688532Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 24 00:39:39.483406 containerd[1508]: time="2026-01-24T00:39:39.483373979Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 24 00:39:40.025727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1687638243.mount: Deactivated successfully.
Jan 24 00:39:41.644472 containerd[1508]: time="2026-01-24T00:39:41.644413643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:41.645502 containerd[1508]: time="2026-01-24T00:39:41.645292703Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682132"
Jan 24 00:39:41.647685 containerd[1508]: time="2026-01-24T00:39:41.647388024Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:41.649945 containerd[1508]: time="2026-01-24T00:39:41.649917344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:41.650658 containerd[1508]: time="2026-01-24T00:39:41.650637803Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.16714638s"
Jan 24 00:39:41.650721 containerd[1508]: time="2026-01-24T00:39:41.650709690Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jan 24 00:39:43.949883 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:39:43.960546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:39:43.986442 systemd[1]: Reloading requested from client PID 2085 ('systemctl') (unit session-7.scope)...
Jan 24 00:39:43.986587 systemd[1]: Reloading...
Jan 24 00:39:44.128407 zram_generator::config[2128]: No configuration found.
Jan 24 00:39:44.213807 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:39:44.278501 systemd[1]: Reloading finished in 291 ms.
Jan 24 00:39:44.321963 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 24 00:39:44.322056 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 24 00:39:44.322287 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:39:44.327827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:39:44.458614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:39:44.469759 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 24 00:39:44.504296 kubelet[2177]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:39:44.506366 kubelet[2177]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 24 00:39:44.506366 kubelet[2177]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:39:44.506366 kubelet[2177]: I0124 00:39:44.504797 2177 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 24 00:39:44.898158 kubelet[2177]: I0124 00:39:44.898127 2177 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 24 00:39:44.898696 kubelet[2177]: I0124 00:39:44.898274 2177 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 00:39:44.898696 kubelet[2177]: I0124 00:39:44.898587 2177 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 24 00:39:44.919666 kubelet[2177]: E0124 00:39:44.919604 2177 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://157.180.47.226:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 157.180.47.226:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:39:44.920076 kubelet[2177]: I0124 00:39:44.920060 2177 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 24 00:39:44.929167 kubelet[2177]: E0124 00:39:44.929127 2177 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 24 00:39:44.929223 kubelet[2177]: I0124 00:39:44.929170 2177 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 24 00:39:44.936608 kubelet[2177]: I0124 00:39:44.936563 2177 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 24 00:39:44.939009 kubelet[2177]: I0124 00:39:44.938944 2177 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 00:39:44.939290 kubelet[2177]: I0124 00:39:44.939014 2177 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-a6966cf543","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 24 00:39:44.939363 kubelet[2177]: I0124 00:39:44.939314 2177 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 00:39:44.939402 kubelet[2177]: I0124 00:39:44.939381 2177 container_manager_linux.go:304] "Creating device plugin manager"
Jan 24 00:39:44.939654 kubelet[2177]: I0124 00:39:44.939627 2177 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:39:44.946921 kubelet[2177]: I0124 00:39:44.946896 2177 kubelet.go:446] "Attempting to sync node with API server"
Jan 24 00:39:44.947066 kubelet[2177]: I0124 00:39:44.946957 2177 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 00:39:44.947066 kubelet[2177]: I0124 00:39:44.946989 2177 kubelet.go:352] "Adding apiserver pod source"
Jan 24 00:39:44.947066 kubelet[2177]: I0124 00:39:44.947005 2177 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 00:39:44.951059 kubelet[2177]: W0124 00:39:44.951034 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.47.226:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-a6966cf543&limit=500&resourceVersion=0": dial tcp 157.180.47.226:6443: connect: connection refused
Jan 24 00:39:44.951534 kubelet[2177]: E0124 00:39:44.951127 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://157.180.47.226:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-a6966cf543&limit=500&resourceVersion=0\": dial tcp 157.180.47.226:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:39:44.951534 kubelet[2177]: W0124 00:39:44.951349 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.47.226:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 157.180.47.226:6443: connect: connection refused
Jan 24 00:39:44.951534 kubelet[2177]: E0124 00:39:44.951369 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://157.180.47.226:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.47.226:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:39:44.951699 kubelet[2177]: I0124 00:39:44.951674 2177 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 24 00:39:44.952069 kubelet[2177]: I0124 00:39:44.952060 2177 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 24 00:39:44.952765 kubelet[2177]: W0124 00:39:44.952755 2177 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 24 00:39:44.954463 kubelet[2177]: I0124 00:39:44.954392 2177 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 24 00:39:44.954463 kubelet[2177]: I0124 00:39:44.954416 2177 server.go:1287] "Started kubelet"
Jan 24 00:39:44.963765 kubelet[2177]: I0124 00:39:44.963626 2177 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 24 00:39:44.967776 kubelet[2177]: E0124 00:39:44.966762 2177 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.47.226:6443/api/v1/namespaces/default/events\": dial tcp 157.180.47.226:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-a6966cf543.188d83d6466f294b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-a6966cf543,UID:ci-4081-3-6-n-a6966cf543,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-a6966cf543,},FirstTimestamp:2026-01-24 00:39:44.954403147 +0000 UTC m=+0.480812111,LastTimestamp:2026-01-24 00:39:44.954403147 +0000 UTC m=+0.480812111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-a6966cf543,}"
Jan 24 00:39:44.970546 kubelet[2177]: I0124 00:39:44.970191 2177 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 24 00:39:44.972166 kubelet[2177]: I0124 00:39:44.972155 2177 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 24 00:39:44.972315 kubelet[2177]: E0124 00:39:44.972304 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a6966cf543\" not found"
Jan 24 00:39:44.972732 kubelet[2177]: I0124 00:39:44.972721 2177 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 24 00:39:44.972802 kubelet[2177]: I0124 00:39:44.972795 2177 reconciler.go:26] "Reconciler: start to sync state"
Jan 24 00:39:44.974034 kubelet[2177]: W0124 00:39:44.973754 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.47.226:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.47.226:6443: connect: connection refused
Jan 24 00:39:44.974034 kubelet[2177]: E0124 00:39:44.973794 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://157.180.47.226:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.47.226:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:39:44.974034 kubelet[2177]: E0124 00:39:44.973826 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.47.226:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a6966cf543?timeout=10s\": dial tcp 157.180.47.226:6443: connect: connection refused" interval="200ms"
Jan 24 00:39:44.974520 kubelet[2177]: I0124 00:39:44.974509 2177 factory.go:221] Registration of the systemd container factory successfully
Jan 24 00:39:44.974619 kubelet[2177]: I0124 00:39:44.974608 2177 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 24 00:39:44.975529 kubelet[2177]: I0124 00:39:44.975513 2177 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 24 00:39:44.976402 kubelet[2177]: I0124 00:39:44.976141 2177 server.go:479] "Adding debug handlers to kubelet server"
Jan 24 00:39:44.976831 kubelet[2177]: E0124 00:39:44.976820 2177 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 24 00:39:44.976946 kubelet[2177]: I0124 00:39:44.976938 2177 factory.go:221] Registration of the containerd container factory successfully
Jan 24 00:39:44.979393 kubelet[2177]: I0124 00:39:44.979295 2177 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 24 00:39:44.979726 kubelet[2177]: I0124 00:39:44.979682 2177 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 24 00:39:44.993555 kubelet[2177]: I0124 00:39:44.993532 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 24 00:39:44.994831 kubelet[2177]: I0124 00:39:44.994818 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 24 00:39:44.994903 kubelet[2177]: I0124 00:39:44.994896 2177 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 24 00:39:44.995172 kubelet[2177]: I0124 00:39:44.995163 2177 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 24 00:39:44.995213 kubelet[2177]: I0124 00:39:44.995207 2177 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 24 00:39:44.995288 kubelet[2177]: E0124 00:39:44.995277 2177 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 24 00:39:45.004225 kubelet[2177]: W0124 00:39:45.004191 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.47.226:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.47.226:6443: connect: connection refused
Jan 24 00:39:45.004301 kubelet[2177]: E0124 00:39:45.004291 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.180.47.226:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.47.226:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:39:45.007194 kubelet[2177]: I0124 00:39:45.007183 2177 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 24 00:39:45.007254 kubelet[2177]: I0124 00:39:45.007247 2177 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 24 00:39:45.007291 kubelet[2177]: I0124 00:39:45.007285 2177 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:39:45.009897 kubelet[2177]: I0124 00:39:45.009877 2177 policy_none.go:49] "None policy: Start"
Jan 24 00:39:45.009967 kubelet[2177]: I0124 00:39:45.009960 2177 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 24 00:39:45.010000 kubelet[2177]: I0124 00:39:45.009995 2177 state_mem.go:35] "Initializing new in-memory state store"
Jan 24 00:39:45.014946 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 24 00:39:45.026222 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 24 00:39:45.029386 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 24 00:39:45.041302 kubelet[2177]: I0124 00:39:45.041261 2177 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 24 00:39:45.041606 kubelet[2177]: I0124 00:39:45.041582 2177 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 24 00:39:45.041643 kubelet[2177]: I0124 00:39:45.041608 2177 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 24 00:39:45.043004 kubelet[2177]: I0124 00:39:45.042479 2177 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 24 00:39:45.044654 kubelet[2177]: E0124 00:39:45.044607 2177 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 24 00:39:45.044685 kubelet[2177]: E0124 00:39:45.044655 2177 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-a6966cf543\" not found"
Jan 24 00:39:45.107626 systemd[1]: Created slice kubepods-burstable-podc6366daadf73f8b939916777fa13e28a.slice - libcontainer container kubepods-burstable-podc6366daadf73f8b939916777fa13e28a.slice.
Jan 24 00:39:45.125812 kubelet[2177]: E0124 00:39:45.125776 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a6966cf543\" not found" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.131938 systemd[1]: Created slice kubepods-burstable-podb44fae2bc54f1263d9840361e52890bd.slice - libcontainer container kubepods-burstable-podb44fae2bc54f1263d9840361e52890bd.slice.
Jan 24 00:39:45.134390 kubelet[2177]: E0124 00:39:45.134250 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a6966cf543\" not found" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.139275 systemd[1]: Created slice kubepods-burstable-pod3f698071c0c463875930b7335a05026c.slice - libcontainer container kubepods-burstable-pod3f698071c0c463875930b7335a05026c.slice.
Jan 24 00:39:45.140714 kubelet[2177]: E0124 00:39:45.140683 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a6966cf543\" not found" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.143784 kubelet[2177]: I0124 00:39:45.143692 2177 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.143950 kubelet[2177]: E0124 00:39:45.143924 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.47.226:6443/api/v1/nodes\": dial tcp 157.180.47.226:6443: connect: connection refused" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.174756 kubelet[2177]: E0124 00:39:45.174657 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.47.226:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a6966cf543?timeout=10s\": dial tcp 157.180.47.226:6443: connect: connection refused" interval="400ms"
Jan 24 00:39:45.274497 kubelet[2177]: I0124 00:39:45.274439 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6366daadf73f8b939916777fa13e28a-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-a6966cf543\" (UID: \"c6366daadf73f8b939916777fa13e28a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.274497 kubelet[2177]: I0124 00:39:45.274493 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b44fae2bc54f1263d9840361e52890bd-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-a6966cf543\" (UID: \"b44fae2bc54f1263d9840361e52890bd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.274763 kubelet[2177]: I0124 00:39:45.274529 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b44fae2bc54f1263d9840361e52890bd-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-a6966cf543\" (UID: \"b44fae2bc54f1263d9840361e52890bd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.274763 kubelet[2177]: I0124 00:39:45.274552 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b44fae2bc54f1263d9840361e52890bd-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-a6966cf543\" (UID: \"b44fae2bc54f1263d9840361e52890bd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.274763 kubelet[2177]: I0124 00:39:45.274577 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6366daadf73f8b939916777fa13e28a-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-a6966cf543\" (UID: \"c6366daadf73f8b939916777fa13e28a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.274763 kubelet[2177]: I0124 00:39:45.274602 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6366daadf73f8b939916777fa13e28a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-a6966cf543\" (UID: \"c6366daadf73f8b939916777fa13e28a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.274763 kubelet[2177]: I0124 00:39:45.274653 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b44fae2bc54f1263d9840361e52890bd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-a6966cf543\" (UID: \"b44fae2bc54f1263d9840361e52890bd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.274858 kubelet[2177]: I0124 00:39:45.274703 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b44fae2bc54f1263d9840361e52890bd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-a6966cf543\" (UID: \"b44fae2bc54f1263d9840361e52890bd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.274858 kubelet[2177]: I0124 00:39:45.274729 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f698071c0c463875930b7335a05026c-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-a6966cf543\" (UID: \"3f698071c0c463875930b7335a05026c\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.346185 kubelet[2177]: I0124 00:39:45.346129 2177 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.346355 kubelet[2177]: E0124 00:39:45.346345 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.47.226:6443/api/v1/nodes\": dial tcp 157.180.47.226:6443: connect: connection refused" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.427570 containerd[1508]: time="2026-01-24T00:39:45.427458182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-a6966cf543,Uid:c6366daadf73f8b939916777fa13e28a,Namespace:kube-system,Attempt:0,}"
Jan 24 00:39:45.436031 containerd[1508]: time="2026-01-24T00:39:45.435845360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-a6966cf543,Uid:b44fae2bc54f1263d9840361e52890bd,Namespace:kube-system,Attempt:0,}"
Jan 24 00:39:45.441770 containerd[1508]: time="2026-01-24T00:39:45.441725823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-a6966cf543,Uid:3f698071c0c463875930b7335a05026c,Namespace:kube-system,Attempt:0,}"
Jan 24 00:39:45.575944 kubelet[2177]: E0124 00:39:45.575853 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.47.226:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a6966cf543?timeout=10s\": dial tcp 157.180.47.226:6443: connect: connection refused" interval="800ms"
Jan 24 00:39:45.754285 kubelet[2177]: I0124 00:39:45.753901 2177 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.754603 kubelet[2177]: E0124 00:39:45.754521 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.47.226:6443/api/v1/nodes\": dial tcp 157.180.47.226:6443: connect: connection refused" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:45.895145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949998883.mount: Deactivated successfully.
Jan 24 00:39:45.912375 containerd[1508]: time="2026-01-24T00:39:45.910389640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:39:45.912375 containerd[1508]: time="2026-01-24T00:39:45.911790976Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:39:45.913299 containerd[1508]: time="2026-01-24T00:39:45.913225441Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078"
Jan 24 00:39:45.915194 containerd[1508]: time="2026-01-24T00:39:45.915143952Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:39:45.916924 containerd[1508]: time="2026-01-24T00:39:45.916826996Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 24 00:39:45.919355 containerd[1508]: time="2026-01-24T00:39:45.917365427Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:39:45.919355 containerd[1508]: time="2026-01-24T00:39:45.918790261Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 24 00:39:45.922442 containerd[1508]: time="2026-01-24T00:39:45.922389351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:39:45.924668 containerd[1508]: time="2026-01-24T00:39:45.924621364Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.05837ms"
Jan 24 00:39:45.926352 containerd[1508]: time="2026-01-24T00:39:45.926276808Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 490.388031ms"
Jan 24 00:39:45.928936 containerd[1508]: time="2026-01-24T00:39:45.928820282Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.000858ms"
Jan 24 00:39:46.095115 containerd[1508]: time="2026-01-24T00:39:46.093560120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:39:46.095115 containerd[1508]: time="2026-01-24T00:39:46.093643379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:39:46.095115 containerd[1508]: time="2026-01-24T00:39:46.093662892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:39:46.095115 containerd[1508]: time="2026-01-24T00:39:46.093795014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:39:46.098549 containerd[1508]: time="2026-01-24T00:39:46.098373745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:39:46.098815 containerd[1508]: time="2026-01-24T00:39:46.098747613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:39:46.100582 containerd[1508]: time="2026-01-24T00:39:46.100386764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:39:46.101907 containerd[1508]: time="2026-01-24T00:39:46.100982111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:39:46.102647 containerd[1508]: time="2026-01-24T00:39:46.102196099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:39:46.102647 containerd[1508]: time="2026-01-24T00:39:46.102252438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:39:46.102647 containerd[1508]: time="2026-01-24T00:39:46.102272484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:39:46.102647 containerd[1508]: time="2026-01-24T00:39:46.102422069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:39:46.129710 kubelet[2177]: W0124 00:39:46.129488 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.47.226:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.47.226:6443: connect: connection refused
Jan 24 00:39:46.129710 kubelet[2177]: E0124 00:39:46.129564 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.180.47.226:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.47.226:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:39:46.140530 systemd[1]: Started cri-containerd-c99acbc80a9ce435efa8e7fabfea7942934ff00a90b8d1179fced65259770e6d.scope - libcontainer container c99acbc80a9ce435efa8e7fabfea7942934ff00a90b8d1179fced65259770e6d.
Jan 24 00:39:46.144591 systemd[1]: Started cri-containerd-34a62355e7b94fc37dc948f5061330e92977605fac3fb75b09be976a09997d91.scope - libcontainer container 34a62355e7b94fc37dc948f5061330e92977605fac3fb75b09be976a09997d91.
Jan 24 00:39:46.146403 systemd[1]: Started cri-containerd-7b03548a2244bed01d940629c4e06c60850d59d46052ba24e4980b467cd772bf.scope - libcontainer container 7b03548a2244bed01d940629c4e06c60850d59d46052ba24e4980b467cd772bf.
Jan 24 00:39:46.196892 containerd[1508]: time="2026-01-24T00:39:46.196805485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-a6966cf543,Uid:b44fae2bc54f1263d9840361e52890bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"34a62355e7b94fc37dc948f5061330e92977605fac3fb75b09be976a09997d91\""
Jan 24 00:39:46.202310 containerd[1508]: time="2026-01-24T00:39:46.202200128Z" level=info msg="CreateContainer within sandbox \"34a62355e7b94fc37dc948f5061330e92977605fac3fb75b09be976a09997d91\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 24 00:39:46.206272 containerd[1508]: time="2026-01-24T00:39:46.205622077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-a6966cf543,Uid:c6366daadf73f8b939916777fa13e28a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c99acbc80a9ce435efa8e7fabfea7942934ff00a90b8d1179fced65259770e6d\""
Jan 24 00:39:46.208401 containerd[1508]: time="2026-01-24T00:39:46.208386610Z" level=info msg="CreateContainer within sandbox \"c99acbc80a9ce435efa8e7fabfea7942934ff00a90b8d1179fced65259770e6d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 24 00:39:46.213509 containerd[1508]: time="2026-01-24T00:39:46.213492301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-a6966cf543,Uid:3f698071c0c463875930b7335a05026c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b03548a2244bed01d940629c4e06c60850d59d46052ba24e4980b467cd772bf\""
Jan 24 00:39:46.216067 containerd[1508]: time="2026-01-24T00:39:46.215993485Z" level=info msg="CreateContainer within sandbox \"7b03548a2244bed01d940629c4e06c60850d59d46052ba24e4980b467cd772bf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 24 00:39:46.219678 containerd[1508]: time="2026-01-24T00:39:46.219651361Z" level=info msg="CreateContainer within sandbox \"34a62355e7b94fc37dc948f5061330e92977605fac3fb75b09be976a09997d91\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7feb160ecc5a4cf383be2e749bb7e900dcbea613eddfc2d564447ccce7c976ce\""
Jan 24 00:39:46.220146 containerd[1508]: time="2026-01-24T00:39:46.220124103Z" level=info msg="StartContainer for \"7feb160ecc5a4cf383be2e749bb7e900dcbea613eddfc2d564447ccce7c976ce\""
Jan 24 00:39:46.228016 containerd[1508]: time="2026-01-24T00:39:46.227997302Z" level=info msg="CreateContainer within sandbox \"c99acbc80a9ce435efa8e7fabfea7942934ff00a90b8d1179fced65259770e6d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8d41748920f34b419e231abb26da8e8f76bbe2a64bf3b9d8c285f4490f91178a\""
Jan 24 00:39:46.230110 containerd[1508]: time="2026-01-24T00:39:46.230091931Z" level=info msg="StartContainer for \"8d41748920f34b419e231abb26da8e8f76bbe2a64bf3b9d8c285f4490f91178a\""
Jan 24 00:39:46.231640 kubelet[2177]: W0124 00:39:46.231593 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.47.226:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-a6966cf543&limit=500&resourceVersion=0": dial tcp 157.180.47.226:6443: connect: connection refused
Jan 24 00:39:46.231688 kubelet[2177]: E0124 00:39:46.231646 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://157.180.47.226:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-a6966cf543&limit=500&resourceVersion=0\": dial tcp 157.180.47.226:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:39:46.234186 containerd[1508]: time="2026-01-24T00:39:46.233751736Z" level=info msg="CreateContainer within sandbox \"7b03548a2244bed01d940629c4e06c60850d59d46052ba24e4980b467cd772bf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"505620a2b035aef438b29cb9b0f7cc05bf414a1ea4eea8dae89fef372ae0d5b1\""
Jan 24 00:39:46.234560 containerd[1508]: time="2026-01-24T00:39:46.234541836Z" level=info msg="StartContainer for \"505620a2b035aef438b29cb9b0f7cc05bf414a1ea4eea8dae89fef372ae0d5b1\""
Jan 24 00:39:46.243428 systemd[1]: Started cri-containerd-7feb160ecc5a4cf383be2e749bb7e900dcbea613eddfc2d564447ccce7c976ce.scope - libcontainer container 7feb160ecc5a4cf383be2e749bb7e900dcbea613eddfc2d564447ccce7c976ce.
Jan 24 00:39:46.268434 systemd[1]: Started cri-containerd-8d41748920f34b419e231abb26da8e8f76bbe2a64bf3b9d8c285f4490f91178a.scope - libcontainer container 8d41748920f34b419e231abb26da8e8f76bbe2a64bf3b9d8c285f4490f91178a.
Jan 24 00:39:46.272074 systemd[1]: Started cri-containerd-505620a2b035aef438b29cb9b0f7cc05bf414a1ea4eea8dae89fef372ae0d5b1.scope - libcontainer container 505620a2b035aef438b29cb9b0f7cc05bf414a1ea4eea8dae89fef372ae0d5b1.
Jan 24 00:39:46.286315 containerd[1508]: time="2026-01-24T00:39:46.286283949Z" level=info msg="StartContainer for \"7feb160ecc5a4cf383be2e749bb7e900dcbea613eddfc2d564447ccce7c976ce\" returns successfully"
Jan 24 00:39:46.320194 containerd[1508]: time="2026-01-24T00:39:46.320151113Z" level=info msg="StartContainer for \"8d41748920f34b419e231abb26da8e8f76bbe2a64bf3b9d8c285f4490f91178a\" returns successfully"
Jan 24 00:39:46.353262 kubelet[2177]: W0124 00:39:46.352390 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.47.226:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.47.226:6443: connect: connection refused
Jan 24 00:39:46.353262 kubelet[2177]: E0124 00:39:46.352456 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://157.180.47.226:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.47.226:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:39:46.368507 containerd[1508]: time="2026-01-24T00:39:46.368456096Z" level=info msg="StartContainer for \"505620a2b035aef438b29cb9b0f7cc05bf414a1ea4eea8dae89fef372ae0d5b1\" returns successfully"
Jan 24 00:39:46.376435 kubelet[2177]: E0124 00:39:46.376397 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.47.226:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a6966cf543?timeout=10s\": dial tcp 157.180.47.226:6443: connect: connection refused" interval="1.6s"
Jan 24 00:39:46.558606 kubelet[2177]: I0124 00:39:46.558577 2177 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:47.015284 kubelet[2177]: E0124 00:39:47.015254 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a6966cf543\" not found" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:47.017282 kubelet[2177]: E0124 00:39:47.017042 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a6966cf543\" not found" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:47.019411 kubelet[2177]: E0124 00:39:47.018002 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a6966cf543\" not found" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:48.023720 kubelet[2177]: E0124 00:39:48.023676 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a6966cf543\" not found" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:48.024382 kubelet[2177]: E0124 00:39:48.024177 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a6966cf543\" not found" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:48.086312 kubelet[2177]: E0124 00:39:48.086124 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a6966cf543\" not found" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:48.554440 kubelet[2177]: E0124 00:39:48.554398 2177 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-a6966cf543\" not found" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:48.617509 kubelet[2177]: I0124 00:39:48.617390 2177 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:48.674051 kubelet[2177]: I0124 00:39:48.673629 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:48.683919 kubelet[2177]: E0124 00:39:48.683882 2177 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-a6966cf543\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:48.684138 kubelet[2177]: I0124 00:39:48.684062 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:48.685895 kubelet[2177]: E0124 00:39:48.685755 2177 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-a6966cf543\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:48.685895 kubelet[2177]: I0124 00:39:48.685774 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:48.688115 kubelet[2177]: E0124 00:39:48.688083 2177 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-a6966cf543\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:48.953985 kubelet[2177]: I0124 00:39:48.953738 2177 apiserver.go:52] "Watching apiserver"
Jan 24 00:39:48.973256 kubelet[2177]: I0124 00:39:48.973197 2177 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 24 00:39:49.022564 kubelet[2177]: I0124 00:39:49.022476 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:49.025480 kubelet[2177]: E0124 00:39:49.025400 2177 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-a6966cf543\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-a6966cf543"
Jan 24 00:39:50.954204 systemd[1]: Reloading requested from client PID 2448 ('systemctl') (unit session-7.scope)...
Jan 24 00:39:50.954254 systemd[1]: Reloading...
Jan 24 00:39:51.112433 zram_generator::config[2494]: No configuration found.
Jan 24 00:39:51.201920 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:39:51.273171 systemd[1]: Reloading finished in 318 ms.
Jan 24 00:39:51.325547 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:39:51.352594 systemd[1]: kubelet.service: Deactivated successfully.
Jan 24 00:39:51.352840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:39:51.358793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:39:51.545647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:39:51.546595 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:39:51.617424 kubelet[2539]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:39:51.617424 kubelet[2539]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:39:51.617424 kubelet[2539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:39:51.617884 kubelet[2539]: I0124 00:39:51.617592 2539 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:39:51.626730 kubelet[2539]: I0124 00:39:51.626675 2539 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:39:51.626730 kubelet[2539]: I0124 00:39:51.626715 2539 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:39:51.629427 kubelet[2539]: I0124 00:39:51.627405 2539 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:39:51.630559 kubelet[2539]: I0124 00:39:51.630537 2539 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 00:39:51.636310 kubelet[2539]: I0124 00:39:51.635229 2539 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:39:51.641427 kubelet[2539]: E0124 00:39:51.640783 2539 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:39:51.641427 kubelet[2539]: I0124 00:39:51.640814 2539 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:39:51.645010 kubelet[2539]: I0124 00:39:51.644963 2539 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:39:51.645524 kubelet[2539]: I0124 00:39:51.645465 2539 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:39:51.645771 kubelet[2539]: I0124 00:39:51.645516 2539 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-a6966cf543","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:39:51.645885 kubelet[2539]: I0124 00:39:51.645783 2539 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:39:51.645885 kubelet[2539]: I0124 00:39:51.645800 2539 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:39:51.645935 kubelet[2539]: I0124 00:39:51.645886 2539 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:39:51.646372 kubelet[2539]: I0124 00:39:51.646140 2539 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:39:51.646372 kubelet[2539]: I0124 00:39:51.646181 2539 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:39:51.646372 kubelet[2539]: I0124 00:39:51.646207 2539 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:39:51.646372 kubelet[2539]: I0124 00:39:51.646224 2539 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:39:51.651200 kubelet[2539]: I0124 00:39:51.651178 2539 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:39:51.651725 kubelet[2539]: I0124 00:39:51.651707 2539 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:39:51.652293 kubelet[2539]: I0124 00:39:51.652277 2539 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:39:51.652638 kubelet[2539]: I0124 00:39:51.652626 2539 server.go:1287] "Started kubelet" Jan 24 00:39:51.656818 kubelet[2539]: I0124 00:39:51.655393 2539 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:39:51.660418 kubelet[2539]: I0124 00:39:51.660364 2539 
server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:39:51.666032 kubelet[2539]: I0124 00:39:51.665857 2539 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:39:51.667391 kubelet[2539]: I0124 00:39:51.666783 2539 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:39:51.667391 kubelet[2539]: I0124 00:39:51.666932 2539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:39:51.670368 kubelet[2539]: I0124 00:39:51.669664 2539 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:39:51.671530 kubelet[2539]: I0124 00:39:51.671518 2539 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:39:51.672565 kubelet[2539]: I0124 00:39:51.672531 2539 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:39:51.672738 kubelet[2539]: I0124 00:39:51.672729 2539 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:39:51.675778 kubelet[2539]: I0124 00:39:51.675762 2539 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:39:51.676196 kubelet[2539]: I0124 00:39:51.676180 2539 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:39:51.677772 kubelet[2539]: I0124 00:39:51.677708 2539 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:39:51.681291 kubelet[2539]: E0124 00:39:51.681275 2539 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:39:51.686116 kubelet[2539]: I0124 00:39:51.686065 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:39:51.687265 kubelet[2539]: I0124 00:39:51.687230 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:39:51.687265 kubelet[2539]: I0124 00:39:51.687261 2539 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:39:51.687391 kubelet[2539]: I0124 00:39:51.687289 2539 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
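The ratelimit.go entry above wires the podresources endpoint to a token bucket: 100 requests per second sustained, with at most 10 burst tokens in hand. A minimal sketch of the same shape using golang.org/x/time/rate; this illustrates the pattern, not the kubelet's internal wiring:

    // ratelimit_sketch.go: token bucket in the shape of qps=100 burstTokens=10.
    package main

    import (
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        // 100 tokens replenished per second, at most 10 held at once.
        limiter := rate.NewLimiter(rate.Limit(100), 10)

        allowed, rejected := 0, 0
        for i := 0; i < 1000; i++ {
            if limiter.Allow() { // non-blocking: consume a token if available
                allowed++
            } else {
                rejected++ // a gRPC server would answer ResourceExhausted here
            }
        }
        fmt.Println("allowed:", allowed, "rejected:", rejected)
    }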
Jan 24 00:39:51.687391 kubelet[2539]: I0124 00:39:51.687300 2539 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:39:51.687430 kubelet[2539]: E0124 00:39:51.687386 2539 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:39:51.738061 kubelet[2539]: I0124 00:39:51.738029 2539 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:39:51.738958 kubelet[2539]: I0124 00:39:51.738205 2539 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:39:51.738958 kubelet[2539]: I0124 00:39:51.738227 2539 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:39:51.738958 kubelet[2539]: I0124 00:39:51.738430 2539 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:39:51.738958 kubelet[2539]: I0124 00:39:51.738439 2539 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:39:51.738958 kubelet[2539]: I0124 00:39:51.738457 2539 policy_none.go:49] "None policy: Start" Jan 24 00:39:51.738958 kubelet[2539]: I0124 00:39:51.738466 2539 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:39:51.738958 kubelet[2539]: I0124 00:39:51.738475 2539 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:39:51.738958 kubelet[2539]: I0124 00:39:51.738561 2539 state_mem.go:75] "Updated machine memory state" Jan 24 00:39:51.745844 kubelet[2539]: I0124 00:39:51.745811 2539 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:39:51.746314 kubelet[2539]: I0124 00:39:51.746301 2539 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:39:51.746437 kubelet[2539]: I0124 00:39:51.746413 2539 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:39:51.747094 kubelet[2539]: I0124 00:39:51.747085 2539 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:39:51.747310 kubelet[2539]: E0124 00:39:51.747300 2539 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:39:51.788582 kubelet[2539]: I0124 00:39:51.788544 2539 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.789131 kubelet[2539]: I0124 00:39:51.788810 2539 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.789131 kubelet[2539]: I0124 00:39:51.788633 2539 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.861189 kubelet[2539]: I0124 00:39:51.860992 2539 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.875072 kubelet[2539]: I0124 00:39:51.874696 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6366daadf73f8b939916777fa13e28a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-a6966cf543\" (UID: \"c6366daadf73f8b939916777fa13e28a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.875072 kubelet[2539]: I0124 00:39:51.874749 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b44fae2bc54f1263d9840361e52890bd-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-a6966cf543\" (UID: \"b44fae2bc54f1263d9840361e52890bd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.875072 kubelet[2539]: I0124 00:39:51.874791 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b44fae2bc54f1263d9840361e52890bd-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-a6966cf543\" (UID: \"b44fae2bc54f1263d9840361e52890bd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.875072 kubelet[2539]: I0124 00:39:51.874824 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6366daadf73f8b939916777fa13e28a-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-a6966cf543\" (UID: \"c6366daadf73f8b939916777fa13e28a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.875072 kubelet[2539]: I0124 00:39:51.874850 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6366daadf73f8b939916777fa13e28a-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-a6966cf543\" (UID: \"c6366daadf73f8b939916777fa13e28a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.875454 kubelet[2539]: I0124 00:39:51.874875 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b44fae2bc54f1263d9840361e52890bd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-a6966cf543\" (UID: \"b44fae2bc54f1263d9840361e52890bd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.875454 kubelet[2539]: I0124 00:39:51.874903 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b44fae2bc54f1263d9840361e52890bd-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-a6966cf543\" (UID: \"b44fae2bc54f1263d9840361e52890bd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.875454 kubelet[2539]: I0124 00:39:51.874928 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b44fae2bc54f1263d9840361e52890bd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-a6966cf543\" (UID: \"b44fae2bc54f1263d9840361e52890bd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.875454 kubelet[2539]: I0124 00:39:51.874953 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f698071c0c463875930b7335a05026c-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-a6966cf543\" (UID: \"3f698071c0c463875930b7335a05026c\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.881869 kubelet[2539]: I0124 00:39:51.880716 2539 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.881869 kubelet[2539]: I0124 00:39:51.880817 2539 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-a6966cf543" Jan 24 00:39:51.958456 sudo[2572]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 24 00:39:51.959189 sudo[2572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 24 00:39:52.425105 sudo[2572]: pam_unix(sudo:session): session closed for user root Jan 24 00:39:52.656853 kubelet[2539]: I0124 00:39:52.655284 2539 apiserver.go:52] "Watching apiserver" Jan 24 00:39:52.673502 kubelet[2539]: I0124 00:39:52.673436 2539 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:39:52.718493 kubelet[2539]: I0124 00:39:52.715887 2539 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:52.725286 kubelet[2539]: E0124 00:39:52.725248 2539 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-a6966cf543\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a6966cf543" Jan 24 00:39:52.757006 kubelet[2539]: I0124 00:39:52.756853 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-a6966cf543" podStartSLOduration=1.756830047 podStartE2EDuration="1.756830047s" podCreationTimestamp="2026-01-24 00:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:39:52.755840598 +0000 UTC m=+1.203343657" watchObservedRunningTime="2026-01-24 00:39:52.756830047 +0000 UTC m=+1.204333107" Jan 24 00:39:52.784857 kubelet[2539]: I0124 00:39:52.784170 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a6966cf543" podStartSLOduration=1.7841498850000002 podStartE2EDuration="1.784149885s" podCreationTimestamp="2026-01-24 00:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:39:52.769980095 +0000 UTC m=+1.217483154" watchObservedRunningTime="2026-01-24 
00:39:52.784149885 +0000 UTC m=+1.231652955" Jan 24 00:39:52.784857 kubelet[2539]: I0124 00:39:52.784294 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a6966cf543" podStartSLOduration=1.784288965 podStartE2EDuration="1.784288965s" podCreationTimestamp="2026-01-24 00:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:39:52.783592666 +0000 UTC m=+1.231095725" watchObservedRunningTime="2026-01-24 00:39:52.784288965 +0000 UTC m=+1.231792024" Jan 24 00:39:54.136614 sudo[1703]: pam_unix(sudo:session): session closed for user root Jan 24 00:39:54.260659 sshd[1700]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:54.267088 systemd[1]: sshd@6-157.180.47.226:22-20.161.92.111:34498.service: Deactivated successfully. Jan 24 00:39:54.271215 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:39:54.271713 systemd[1]: session-7.scope: Consumed 4.517s CPU time, 152.9M memory peak, 0B memory swap peak. Jan 24 00:39:54.274444 systemd-logind[1476]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:39:54.276814 systemd-logind[1476]: Removed session 7. Jan 24 00:39:57.562579 kubelet[2539]: I0124 00:39:57.562415 2539 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:39:57.564008 containerd[1508]: time="2026-01-24T00:39:57.563653504Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:39:57.564658 kubelet[2539]: I0124 00:39:57.563940 2539 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:39:58.344249 systemd[1]: Created slice kubepods-besteffort-podfaa3311a_735b_4c9c_b994_abdd6782f4ff.slice - libcontainer container kubepods-besteffort-podfaa3311a_735b_4c9c_b994_abdd6782f4ff.slice. Jan 24 00:39:58.380782 systemd[1]: Created slice kubepods-burstable-pod04db0244_0ac2_4446_b5b4_c0636ec145b2.slice - libcontainer container kubepods-burstable-pod04db0244_0ac2_4446_b5b4_c0636ec145b2.slice. 
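The 00:39:57 entries hand the node's pod CIDR (192.168.0.0/24) down to the container runtime; every pod IP assigned on this node must then fall inside that /24. A self-contained sketch of the containment check with net/netip (the sample addresses are made up for illustration):

    // podcidr_sketch.go: containment check against the node's pod CIDR.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        prefix := netip.MustParsePrefix("192.168.0.0/24") // from the log above

        // Sample addresses, made up for illustration.
        for _, s := range []string{"192.168.0.17", "192.168.1.4"} {
            addr := netip.MustParseAddr(s)
            fmt.Printf("%s in %s: %v\n", addr, prefix, prefix.Contains(addr))
        }
    }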
Jan 24 00:39:58.424624 kubelet[2539]: I0124 00:39:58.424307 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/faa3311a-735b-4c9c-b994-abdd6782f4ff-lib-modules\") pod \"kube-proxy-wq8k7\" (UID: \"faa3311a-735b-4c9c-b994-abdd6782f4ff\") " pod="kube-system/kube-proxy-wq8k7" Jan 24 00:39:58.424624 kubelet[2539]: I0124 00:39:58.424397 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-host-proc-sys-kernel\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.424624 kubelet[2539]: I0124 00:39:58.424427 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04db0244-0ac2-4446-b5b4-c0636ec145b2-hubble-tls\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.424624 kubelet[2539]: I0124 00:39:58.424453 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/faa3311a-735b-4c9c-b994-abdd6782f4ff-kube-proxy\") pod \"kube-proxy-wq8k7\" (UID: \"faa3311a-735b-4c9c-b994-abdd6782f4ff\") " pod="kube-system/kube-proxy-wq8k7" Jan 24 00:39:58.424624 kubelet[2539]: I0124 00:39:58.424478 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/faa3311a-735b-4c9c-b994-abdd6782f4ff-xtables-lock\") pod \"kube-proxy-wq8k7\" (UID: \"faa3311a-735b-4c9c-b994-abdd6782f4ff\") " pod="kube-system/kube-proxy-wq8k7" Jan 24 00:39:58.424624 kubelet[2539]: I0124 00:39:58.424502 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-bpf-maps\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.425036 kubelet[2539]: I0124 00:39:58.424525 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-cilium-cgroup\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.425372 kubelet[2539]: I0124 00:39:58.424551 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-xtables-lock\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.425583 kubelet[2539]: I0124 00:39:58.425480 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04db0244-0ac2-4446-b5b4-c0636ec145b2-clustermesh-secrets\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.425694 kubelet[2539]: I0124 00:39:58.425527 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-host-proc-sys-net\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.425856 kubelet[2539]: I0124 00:39:58.425768 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-etc-cni-netd\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.425856 kubelet[2539]: I0124 00:39:58.425789 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04db0244-0ac2-4446-b5b4-c0636ec145b2-cilium-config-path\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.426080 kubelet[2539]: I0124 00:39:58.425991 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-789jl\" (UniqueName: \"kubernetes.io/projected/04db0244-0ac2-4446-b5b4-c0636ec145b2-kube-api-access-789jl\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.426288 kubelet[2539]: I0124 00:39:58.426140 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-lib-modules\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.426288 kubelet[2539]: I0124 00:39:58.426225 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-cilium-run\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.426288 kubelet[2539]: I0124 00:39:58.426244 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-hostproc\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.426288 kubelet[2539]: I0124 00:39:58.426261 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-cni-path\") pod \"cilium-cxfd5\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") " pod="kube-system/cilium-cxfd5" Jan 24 00:39:58.426609 kubelet[2539]: I0124 00:39:58.426526 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v96j5\" (UniqueName: \"kubernetes.io/projected/faa3311a-735b-4c9c-b994-abdd6782f4ff-kube-api-access-v96j5\") pod \"kube-proxy-wq8k7\" (UID: \"faa3311a-735b-4c9c-b994-abdd6782f4ff\") " pod="kube-system/kube-proxy-wq8k7" Jan 24 00:39:58.615276 kubelet[2539]: I0124 00:39:58.615084 2539 status_manager.go:890] "Failed to get status for pod" podUID="0e23b35f-d14d-410c-a863-c95c14d20422" pod="kube-system/cilium-operator-6c4d7847fc-wjnk2" err="pods \"cilium-operator-6c4d7847fc-wjnk2\" is forbidden: User \"system:node:ci-4081-3-6-n-a6966cf543\" cannot 
get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-a6966cf543' and this object" Jan 24 00:39:58.619471 systemd[1]: Created slice kubepods-besteffort-pod0e23b35f_d14d_410c_a863_c95c14d20422.slice - libcontainer container kubepods-besteffort-pod0e23b35f_d14d_410c_a863_c95c14d20422.slice. Jan 24 00:39:58.627680 kubelet[2539]: I0124 00:39:58.627621 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-224zd\" (UniqueName: \"kubernetes.io/projected/0e23b35f-d14d-410c-a863-c95c14d20422-kube-api-access-224zd\") pod \"cilium-operator-6c4d7847fc-wjnk2\" (UID: \"0e23b35f-d14d-410c-a863-c95c14d20422\") " pod="kube-system/cilium-operator-6c4d7847fc-wjnk2" Jan 24 00:39:58.627680 kubelet[2539]: I0124 00:39:58.627679 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e23b35f-d14d-410c-a863-c95c14d20422-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wjnk2\" (UID: \"0e23b35f-d14d-410c-a863-c95c14d20422\") " pod="kube-system/cilium-operator-6c4d7847fc-wjnk2" Jan 24 00:39:58.654399 containerd[1508]: time="2026-01-24T00:39:58.654299323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wq8k7,Uid:faa3311a-735b-4c9c-b994-abdd6782f4ff,Namespace:kube-system,Attempt:0,}" Jan 24 00:39:58.694367 containerd[1508]: time="2026-01-24T00:39:58.693107578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:58.694367 containerd[1508]: time="2026-01-24T00:39:58.693195743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:58.694367 containerd[1508]: time="2026-01-24T00:39:58.693220567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:58.694367 containerd[1508]: time="2026-01-24T00:39:58.693720570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxfd5,Uid:04db0244-0ac2-4446-b5b4-c0636ec145b2,Namespace:kube-system,Attempt:0,}" Jan 24 00:39:58.695868 containerd[1508]: time="2026-01-24T00:39:58.695600900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:58.731776 systemd[1]: Started cri-containerd-534f54d93fd8f17bdb7412dfbd39eea79eef217970c8ee7a3c54258c5897343b.scope - libcontainer container 534f54d93fd8f17bdb7412dfbd39eea79eef217970c8ee7a3c54258c5897343b. Jan 24 00:39:58.761634 containerd[1508]: time="2026-01-24T00:39:58.761501955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:58.761634 containerd[1508]: time="2026-01-24T00:39:58.761634343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:58.762101 containerd[1508]: time="2026-01-24T00:39:58.761671999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:58.762101 containerd[1508]: time="2026-01-24T00:39:58.761875848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:58.781435 containerd[1508]: time="2026-01-24T00:39:58.780811406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wq8k7,Uid:faa3311a-735b-4c9c-b994-abdd6782f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"534f54d93fd8f17bdb7412dfbd39eea79eef217970c8ee7a3c54258c5897343b\"" Jan 24 00:39:58.784074 containerd[1508]: time="2026-01-24T00:39:58.784015657Z" level=info msg="CreateContainer within sandbox \"534f54d93fd8f17bdb7412dfbd39eea79eef217970c8ee7a3c54258c5897343b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:39:58.794586 systemd[1]: Started cri-containerd-01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f.scope - libcontainer container 01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f. Jan 24 00:39:58.809088 containerd[1508]: time="2026-01-24T00:39:58.809016123Z" level=info msg="CreateContainer within sandbox \"534f54d93fd8f17bdb7412dfbd39eea79eef217970c8ee7a3c54258c5897343b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"78605daaee8867136bc1be6e8706ac1ca52877b9cbf6f452636f05dc1b2106ec\"" Jan 24 00:39:58.810606 containerd[1508]: time="2026-01-24T00:39:58.810526835Z" level=info msg="StartContainer for \"78605daaee8867136bc1be6e8706ac1ca52877b9cbf6f452636f05dc1b2106ec\"" Jan 24 00:39:58.832070 containerd[1508]: time="2026-01-24T00:39:58.832020238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxfd5,Uid:04db0244-0ac2-4446-b5b4-c0636ec145b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\"" Jan 24 00:39:58.842633 containerd[1508]: time="2026-01-24T00:39:58.842588572Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 24 00:39:58.848437 systemd[1]: Started cri-containerd-78605daaee8867136bc1be6e8706ac1ca52877b9cbf6f452636f05dc1b2106ec.scope - libcontainer container 78605daaee8867136bc1be6e8706ac1ca52877b9cbf6f452636f05dc1b2106ec. Jan 24 00:39:58.878166 containerd[1508]: time="2026-01-24T00:39:58.877980412Z" level=info msg="StartContainer for \"78605daaee8867136bc1be6e8706ac1ca52877b9cbf6f452636f05dc1b2106ec\" returns successfully" Jan 24 00:39:58.927986 containerd[1508]: time="2026-01-24T00:39:58.927911418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wjnk2,Uid:0e23b35f-d14d-410c-a863-c95c14d20422,Namespace:kube-system,Attempt:0,}" Jan 24 00:39:58.954409 containerd[1508]: time="2026-01-24T00:39:58.952967869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:58.954409 containerd[1508]: time="2026-01-24T00:39:58.953027917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:58.954409 containerd[1508]: time="2026-01-24T00:39:58.953041900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:58.954409 containerd[1508]: time="2026-01-24T00:39:58.953105972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:58.980604 systemd[1]: Started cri-containerd-8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e.scope - libcontainer container 8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e. Jan 24 00:39:59.025610 containerd[1508]: time="2026-01-24T00:39:59.025561281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wjnk2,Uid:0e23b35f-d14d-410c-a863-c95c14d20422,Namespace:kube-system,Attempt:0,} returns sandbox id \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\"" Jan 24 00:39:59.223915 update_engine[1484]: I20260124 00:39:59.223295 1484 update_attempter.cc:509] Updating boot flags... Jan 24 00:39:59.323024 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2857) Jan 24 00:39:59.413381 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2864) Jan 24 00:40:01.919692 kubelet[2539]: I0124 00:40:01.919294 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wq8k7" podStartSLOduration=3.9192774139999997 podStartE2EDuration="3.919277414s" podCreationTimestamp="2026-01-24 00:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:39:59.777935885 +0000 UTC m=+8.225438954" watchObservedRunningTime="2026-01-24 00:40:01.919277414 +0000 UTC m=+10.366780433" Jan 24 00:40:02.824803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount857666009.mount: Deactivated successfully. Jan 24 00:40:04.133550 containerd[1508]: time="2026-01-24T00:40:04.133489310Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:40:04.134522 containerd[1508]: time="2026-01-24T00:40:04.134473131Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 24 00:40:04.135471 containerd[1508]: time="2026-01-24T00:40:04.135457221Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:40:04.136971 containerd[1508]: time="2026-01-24T00:40:04.136599317Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.293387805s" Jan 24 00:40:04.136971 containerd[1508]: time="2026-01-24T00:40:04.136624042Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 24 00:40:04.137968 containerd[1508]: time="2026-01-24T00:40:04.137834421Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 24 00:40:04.139014 containerd[1508]: time="2026-01-24T00:40:04.138995759Z" level=info 
msg="CreateContainer within sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 00:40:04.149075 containerd[1508]: time="2026-01-24T00:40:04.149046390Z" level=info msg="CreateContainer within sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946\"" Jan 24 00:40:04.151406 containerd[1508]: time="2026-01-24T00:40:04.149779096Z" level=info msg="StartContainer for \"1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946\"" Jan 24 00:40:04.176426 systemd[1]: Started cri-containerd-1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946.scope - libcontainer container 1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946. Jan 24 00:40:04.197080 containerd[1508]: time="2026-01-24T00:40:04.197047637Z" level=info msg="StartContainer for \"1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946\" returns successfully" Jan 24 00:40:04.207407 systemd[1]: cri-containerd-1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946.scope: Deactivated successfully. Jan 24 00:40:04.372916 containerd[1508]: time="2026-01-24T00:40:04.372118367Z" level=info msg="shim disconnected" id=1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946 namespace=k8s.io Jan 24 00:40:04.372916 containerd[1508]: time="2026-01-24T00:40:04.372203620Z" level=warning msg="cleaning up after shim disconnected" id=1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946 namespace=k8s.io Jan 24 00:40:04.373530 containerd[1508]: time="2026-01-24T00:40:04.372219469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:40:04.399143 containerd[1508]: time="2026-01-24T00:40:04.398964512Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:40:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 00:40:04.763360 containerd[1508]: time="2026-01-24T00:40:04.763029694Z" level=info msg="CreateContainer within sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 00:40:04.783373 containerd[1508]: time="2026-01-24T00:40:04.783277525Z" level=info msg="CreateContainer within sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98\"" Jan 24 00:40:04.787601 containerd[1508]: time="2026-01-24T00:40:04.787422562Z" level=info msg="StartContainer for \"5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98\"" Jan 24 00:40:04.835596 systemd[1]: Started cri-containerd-5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98.scope - libcontainer container 5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98. Jan 24 00:40:04.887720 containerd[1508]: time="2026-01-24T00:40:04.886886829Z" level=info msg="StartContainer for \"5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98\" returns successfully" Jan 24 00:40:04.914662 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:40:04.915254 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 24 00:40:04.915583 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:40:04.925855 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:40:04.926537 systemd[1]: cri-containerd-5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98.scope: Deactivated successfully. Jan 24 00:40:04.967758 containerd[1508]: time="2026-01-24T00:40:04.967671621Z" level=info msg="shim disconnected" id=5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98 namespace=k8s.io Jan 24 00:40:04.967758 containerd[1508]: time="2026-01-24T00:40:04.967746847Z" level=warning msg="cleaning up after shim disconnected" id=5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98 namespace=k8s.io Jan 24 00:40:04.967758 containerd[1508]: time="2026-01-24T00:40:04.967763527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:40:04.977571 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:40:05.149092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946-rootfs.mount: Deactivated successfully. Jan 24 00:40:05.730725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount755401171.mount: Deactivated successfully. Jan 24 00:40:05.776042 containerd[1508]: time="2026-01-24T00:40:05.774712237Z" level=info msg="CreateContainer within sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 00:40:05.835623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457548007.mount: Deactivated successfully. Jan 24 00:40:05.841110 containerd[1508]: time="2026-01-24T00:40:05.841034401Z" level=info msg="CreateContainer within sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30\"" Jan 24 00:40:05.842677 containerd[1508]: time="2026-01-24T00:40:05.841901121Z" level=info msg="StartContainer for \"37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30\"" Jan 24 00:40:05.898452 systemd[1]: Started cri-containerd-37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30.scope - libcontainer container 37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30. Jan 24 00:40:05.945581 containerd[1508]: time="2026-01-24T00:40:05.945286385Z" level=info msg="StartContainer for \"37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30\" returns successfully" Jan 24 00:40:05.950009 systemd[1]: cri-containerd-37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30.scope: Deactivated successfully. 
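The mount-bpf-fs step that runs next exists so that pinned BPF maps under /sys/fs/bpf survive agent restarts. Reduced to its essence it is one statfs check plus one mount(2) call, sketched below with golang.org/x/sys/unix; the real init container also handles mount-propagation details this sketch ignores:

    // mountbpf_sketch.go: the essence of a mount-bpf-fs init step.
    // Requires root (CAP_SYS_ADMIN); Linux only.
    package main

    import (
        "log"

        "golang.org/x/sys/unix"
    )

    func main() {
        const target = "/sys/fs/bpf"

        // Skip the mount if a bpffs is already there, to avoid stacking mounts.
        var st unix.Statfs_t
        if err := unix.Statfs(target, &st); err == nil && st.Type == unix.BPF_FS_MAGIC {
            log.Printf("bpffs already mounted at %s", target)
            return
        }
        if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
            log.Fatalf("mounting bpffs at %s: %v", target, err)
        }
        log.Printf("mounted bpffs at %s", target)
    }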
Jan 24 00:40:06.001635 containerd[1508]: time="2026-01-24T00:40:06.001490283Z" level=info msg="shim disconnected" id=37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30 namespace=k8s.io Jan 24 00:40:06.001635 containerd[1508]: time="2026-01-24T00:40:06.001561982Z" level=warning msg="cleaning up after shim disconnected" id=37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30 namespace=k8s.io Jan 24 00:40:06.001635 containerd[1508]: time="2026-01-24T00:40:06.001578020Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:40:06.304886 containerd[1508]: time="2026-01-24T00:40:06.304751301Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:40:06.305994 containerd[1508]: time="2026-01-24T00:40:06.305962260Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 24 00:40:06.306946 containerd[1508]: time="2026-01-24T00:40:06.306923214Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:40:06.307848 containerd[1508]: time="2026-01-24T00:40:06.307826446Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.169972884s" Jan 24 00:40:06.307909 containerd[1508]: time="2026-01-24T00:40:06.307898235Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 24 00:40:06.310343 containerd[1508]: time="2026-01-24T00:40:06.310307107Z" level=info msg="CreateContainer within sandbox \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 24 00:40:06.321391 containerd[1508]: time="2026-01-24T00:40:06.321279886Z" level=info msg="CreateContainer within sandbox \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\"" Jan 24 00:40:06.322375 containerd[1508]: time="2026-01-24T00:40:06.321782370Z" level=info msg="StartContainer for \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\"" Jan 24 00:40:06.349449 systemd[1]: Started cri-containerd-78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a.scope - libcontainer container 78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a. 
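Note the repo tag "" in the Pulled image lines: both Cilium images are pinned by digest, so containerd records only a repo digest and no tag. A trivial sketch splitting such a reference, using the operator image from the log:

    // imageref_sketch.go: split a digest-pinned reference like the ones above.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        ref := "quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"

        name, digest, pinned := strings.Cut(ref, "@")
        if !pinned {
            fmt.Println("tag-addressed reference:", ref)
            return
        }
        fmt.Println("name:  ", name)   // no tag: this is why containerd logs repo tag ""
        fmt.Println("digest:", digest) // what actually identifies the content
    }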
Jan 24 00:40:06.372538 containerd[1508]: time="2026-01-24T00:40:06.372494960Z" level=info msg="StartContainer for \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\" returns successfully" Jan 24 00:40:06.778660 containerd[1508]: time="2026-01-24T00:40:06.778542790Z" level=info msg="CreateContainer within sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 00:40:06.794359 containerd[1508]: time="2026-01-24T00:40:06.792206505Z" level=info msg="CreateContainer within sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70\"" Jan 24 00:40:06.794359 containerd[1508]: time="2026-01-24T00:40:06.792804490Z" level=info msg="StartContainer for \"a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70\"" Jan 24 00:40:06.827559 systemd[1]: Started cri-containerd-a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70.scope - libcontainer container a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70. Jan 24 00:40:06.858946 systemd[1]: cri-containerd-a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70.scope: Deactivated successfully. Jan 24 00:40:06.860618 containerd[1508]: time="2026-01-24T00:40:06.860440711Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04db0244_0ac2_4446_b5b4_c0636ec145b2.slice/cri-containerd-a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70.scope/memory.events\": no such file or directory" Jan 24 00:40:06.863780 containerd[1508]: time="2026-01-24T00:40:06.863743751Z" level=info msg="StartContainer for \"a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70\" returns successfully" Jan 24 00:40:06.906548 containerd[1508]: time="2026-01-24T00:40:06.906424165Z" level=info msg="shim disconnected" id=a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70 namespace=k8s.io Jan 24 00:40:06.908342 containerd[1508]: time="2026-01-24T00:40:06.906902656Z" level=warning msg="cleaning up after shim disconnected" id=a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70 namespace=k8s.io Jan 24 00:40:06.908342 containerd[1508]: time="2026-01-24T00:40:06.906916884Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:40:06.988207 kubelet[2539]: I0124 00:40:06.988152 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wjnk2" podStartSLOduration=1.706439777 podStartE2EDuration="8.988135224s" podCreationTimestamp="2026-01-24 00:39:58 +0000 UTC" firstStartedPulling="2026-01-24 00:39:59.026707282 +0000 UTC m=+7.474210311" lastFinishedPulling="2026-01-24 00:40:06.308402729 +0000 UTC m=+14.755905758" observedRunningTime="2026-01-24 00:40:06.849215596 +0000 UTC m=+15.296718665" watchObservedRunningTime="2026-01-24 00:40:06.988135224 +0000 UTC m=+15.435638243" Jan 24 00:40:07.787935 containerd[1508]: time="2026-01-24T00:40:07.787862080Z" level=info msg="CreateContainer within sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 24 00:40:07.821614 containerd[1508]: time="2026-01-24T00:40:07.818997415Z" level=info msg="CreateContainer within 
sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\"" Jan 24 00:40:07.822703 containerd[1508]: time="2026-01-24T00:40:07.822657121Z" level=info msg="StartContainer for \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\"" Jan 24 00:40:07.896512 systemd[1]: Started cri-containerd-89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c.scope - libcontainer container 89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c. Jan 24 00:40:07.954899 containerd[1508]: time="2026-01-24T00:40:07.954662885Z" level=info msg="StartContainer for \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\" returns successfully" Jan 24 00:40:08.063947 kubelet[2539]: I0124 00:40:08.063681 2539 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:40:08.086954 kubelet[2539]: I0124 00:40:08.086831 2539 status_manager.go:890] "Failed to get status for pod" podUID="cb8a82b1-110e-446e-8787-e908465d9d49" pod="kube-system/coredns-668d6bf9bc-2fm5p" err="pods \"coredns-668d6bf9bc-2fm5p\" is forbidden: User \"system:node:ci-4081-3-6-n-a6966cf543\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-a6966cf543' and this object" Jan 24 00:40:08.094311 systemd[1]: Created slice kubepods-burstable-podcb8a82b1_110e_446e_8787_e908465d9d49.slice - libcontainer container kubepods-burstable-podcb8a82b1_110e_446e_8787_e908465d9d49.slice. Jan 24 00:40:08.102644 systemd[1]: Created slice kubepods-burstable-podd12dffb0_f563_4a53_8591_52ec559191a9.slice - libcontainer container kubepods-burstable-podd12dffb0_f563_4a53_8591_52ec559191a9.slice. 
Jan 24 00:40:08.196671 kubelet[2539]: I0124 00:40:08.196616 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4l6q\" (UniqueName: \"kubernetes.io/projected/cb8a82b1-110e-446e-8787-e908465d9d49-kube-api-access-g4l6q\") pod \"coredns-668d6bf9bc-2fm5p\" (UID: \"cb8a82b1-110e-446e-8787-e908465d9d49\") " pod="kube-system/coredns-668d6bf9bc-2fm5p" Jan 24 00:40:08.196808 kubelet[2539]: I0124 00:40:08.196682 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d12dffb0-f563-4a53-8591-52ec559191a9-config-volume\") pod \"coredns-668d6bf9bc-66fzd\" (UID: \"d12dffb0-f563-4a53-8591-52ec559191a9\") " pod="kube-system/coredns-668d6bf9bc-66fzd" Jan 24 00:40:08.196808 kubelet[2539]: I0124 00:40:08.196710 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb8a82b1-110e-446e-8787-e908465d9d49-config-volume\") pod \"coredns-668d6bf9bc-2fm5p\" (UID: \"cb8a82b1-110e-446e-8787-e908465d9d49\") " pod="kube-system/coredns-668d6bf9bc-2fm5p" Jan 24 00:40:08.196808 kubelet[2539]: I0124 00:40:08.196742 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr5b9\" (UniqueName: \"kubernetes.io/projected/d12dffb0-f563-4a53-8591-52ec559191a9-kube-api-access-kr5b9\") pod \"coredns-668d6bf9bc-66fzd\" (UID: \"d12dffb0-f563-4a53-8591-52ec559191a9\") " pod="kube-system/coredns-668d6bf9bc-66fzd" Jan 24 00:40:08.402742 containerd[1508]: time="2026-01-24T00:40:08.402610807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2fm5p,Uid:cb8a82b1-110e-446e-8787-e908465d9d49,Namespace:kube-system,Attempt:0,}" Jan 24 00:40:08.405433 containerd[1508]: time="2026-01-24T00:40:08.405387991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-66fzd,Uid:d12dffb0-f563-4a53-8591-52ec559191a9,Namespace:kube-system,Attempt:0,}" Jan 24 00:40:10.259496 systemd-networkd[1403]: cilium_host: Link UP Jan 24 00:40:10.261699 systemd-networkd[1403]: cilium_net: Link UP Jan 24 00:40:10.261709 systemd-networkd[1403]: cilium_net: Gained carrier Jan 24 00:40:10.262178 systemd-networkd[1403]: cilium_host: Gained carrier Jan 24 00:40:10.380540 systemd-networkd[1403]: cilium_net: Gained IPv6LL Jan 24 00:40:10.475923 systemd-networkd[1403]: cilium_vxlan: Link UP Jan 24 00:40:10.476105 systemd-networkd[1403]: cilium_vxlan: Gained carrier Jan 24 00:40:10.508574 systemd-networkd[1403]: cilium_host: Gained IPv6LL Jan 24 00:40:10.750866 kernel: NET: Registered PF_ALG protocol family Jan 24 00:40:11.667555 systemd-networkd[1403]: lxc_health: Link UP Jan 24 00:40:11.670770 systemd-networkd[1403]: lxc_health: Gained carrier Jan 24 00:40:11.968359 systemd-networkd[1403]: lxc90f20805f765: Link UP Jan 24 00:40:11.976722 kernel: eth0: renamed from tmpcc65a Jan 24 00:40:11.980010 systemd-networkd[1403]: lxc90f20805f765: Gained carrier Jan 24 00:40:11.991394 systemd-networkd[1403]: lxc96634bd07a72: Link UP Jan 24 00:40:11.997385 kernel: eth0: renamed from tmpd280a Jan 24 00:40:12.005547 systemd-networkd[1403]: lxc96634bd07a72: Gained carrier Jan 24 00:40:12.085977 systemd-networkd[1403]: cilium_vxlan: Gained IPv6LL Jan 24 00:40:12.726672 kubelet[2539]: I0124 00:40:12.726591 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cxfd5" 
podStartSLOduration=9.423280839 podStartE2EDuration="14.726568948s" podCreationTimestamp="2026-01-24 00:39:58 +0000 UTC" firstStartedPulling="2026-01-24 00:39:58.833875993 +0000 UTC m=+7.281379042" lastFinishedPulling="2026-01-24 00:40:04.137164132 +0000 UTC m=+12.584667151" observedRunningTime="2026-01-24 00:40:08.807495208 +0000 UTC m=+17.254998267" watchObservedRunningTime="2026-01-24 00:40:12.726568948 +0000 UTC m=+21.174072007" Jan 24 00:40:13.044548 systemd-networkd[1403]: lxc96634bd07a72: Gained IPv6LL Jan 24 00:40:13.370509 systemd-networkd[1403]: lxc_health: Gained IPv6LL Jan 24 00:40:13.941669 systemd-networkd[1403]: lxc90f20805f765: Gained IPv6LL Jan 24 00:40:14.600679 containerd[1508]: time="2026-01-24T00:40:14.600525075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:40:14.600679 containerd[1508]: time="2026-01-24T00:40:14.600564539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:40:14.600679 containerd[1508]: time="2026-01-24T00:40:14.600572113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:40:14.600679 containerd[1508]: time="2026-01-24T00:40:14.600629453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:40:14.631434 systemd[1]: Started cri-containerd-d280a0f5d7e65174890cc7a49ab92cde04724ea0579b1b0db843c83f79caa874.scope - libcontainer container d280a0f5d7e65174890cc7a49ab92cde04724ea0579b1b0db843c83f79caa874. Jan 24 00:40:14.679415 containerd[1508]: time="2026-01-24T00:40:14.679345169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:40:14.679748 containerd[1508]: time="2026-01-24T00:40:14.679593507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:40:14.679748 containerd[1508]: time="2026-01-24T00:40:14.679656949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:40:14.680763 containerd[1508]: time="2026-01-24T00:40:14.680734742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:40:14.707440 systemd[1]: Started cri-containerd-cc65a8484d8e7082202a5a9126c5506a0f19232135c13b8ce46d33b885336425.scope - libcontainer container cc65a8484d8e7082202a5a9126c5506a0f19232135c13b8ce46d33b885336425. 
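The startup-latency entry for cilium-cxfd5 above also shows how podStartSLOduration is derived: it is the end-to-end duration minus the image-pull window, 14.726568948s - (12.584667151 - 7.281379042)s, which comes to 9.423280839s, exactly the logged value. The same arithmetic as a sketch:

    // slo_sketch.go: reproduce podStartSLOduration from the tracker fields.
    package main

    import "fmt"

    func main() {
        const (
            e2e       = 14.726568948 // podStartE2EDuration, seconds
            pullStart = 7.281379042  // firstStartedPulling, monotonic m=+ offset
            pullEnd   = 12.584667151 // lastFinishedPulling, monotonic m=+ offset
        )
        // SLO duration excludes time spent pulling images.
        slo := e2e - (pullEnd - pullStart)
        fmt.Printf("podStartSLOduration = %.9fs\n", slo) // ~9.423280839s, as logged
    }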
Jan 24 00:40:14.720834 containerd[1508]: time="2026-01-24T00:40:14.720716516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-66fzd,Uid:d12dffb0-f563-4a53-8591-52ec559191a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d280a0f5d7e65174890cc7a49ab92cde04724ea0579b1b0db843c83f79caa874\"" Jan 24 00:40:14.725503 containerd[1508]: time="2026-01-24T00:40:14.725468050Z" level=info msg="CreateContainer within sandbox \"d280a0f5d7e65174890cc7a49ab92cde04724ea0579b1b0db843c83f79caa874\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:40:14.747689 containerd[1508]: time="2026-01-24T00:40:14.747606703Z" level=info msg="CreateContainer within sandbox \"d280a0f5d7e65174890cc7a49ab92cde04724ea0579b1b0db843c83f79caa874\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1a97dcc92f820b553f7a30eadd5d2b077c8376677664406253ef52ab249bfba\"" Jan 24 00:40:14.749167 containerd[1508]: time="2026-01-24T00:40:14.748478201Z" level=info msg="StartContainer for \"b1a97dcc92f820b553f7a30eadd5d2b077c8376677664406253ef52ab249bfba\"" Jan 24 00:40:14.760117 containerd[1508]: time="2026-01-24T00:40:14.760077350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2fm5p,Uid:cb8a82b1-110e-446e-8787-e908465d9d49,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc65a8484d8e7082202a5a9126c5506a0f19232135c13b8ce46d33b885336425\"" Jan 24 00:40:14.764261 containerd[1508]: time="2026-01-24T00:40:14.764226931Z" level=info msg="CreateContainer within sandbox \"cc65a8484d8e7082202a5a9126c5506a0f19232135c13b8ce46d33b885336425\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:40:14.775280 containerd[1508]: time="2026-01-24T00:40:14.775240002Z" level=info msg="CreateContainer within sandbox \"cc65a8484d8e7082202a5a9126c5506a0f19232135c13b8ce46d33b885336425\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"590e64df310c539d5c5dcb60cfafd333508e2fdf73615764819640503e2c9ce8\"" Jan 24 00:40:14.776393 containerd[1508]: time="2026-01-24T00:40:14.775916952Z" level=info msg="StartContainer for \"590e64df310c539d5c5dcb60cfafd333508e2fdf73615764819640503e2c9ce8\"" Jan 24 00:40:14.783514 systemd[1]: Started cri-containerd-b1a97dcc92f820b553f7a30eadd5d2b077c8376677664406253ef52ab249bfba.scope - libcontainer container b1a97dcc92f820b553f7a30eadd5d2b077c8376677664406253ef52ab249bfba. Jan 24 00:40:14.806417 systemd[1]: Started cri-containerd-590e64df310c539d5c5dcb60cfafd333508e2fdf73615764819640503e2c9ce8.scope - libcontainer container 590e64df310c539d5c5dcb60cfafd333508e2fdf73615764819640503e2c9ce8. Jan 24 00:40:14.816571 containerd[1508]: time="2026-01-24T00:40:14.816486834Z" level=info msg="StartContainer for \"b1a97dcc92f820b553f7a30eadd5d2b077c8376677664406253ef52ab249bfba\" returns successfully" Jan 24 00:40:14.835537 containerd[1508]: time="2026-01-24T00:40:14.835172364Z" level=info msg="StartContainer for \"590e64df310c539d5c5dcb60cfafd333508e2fdf73615764819640503e2c9ce8\" returns successfully" Jan 24 00:40:15.611409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3762473068.mount: Deactivated successfully. 
Jan 24 00:40:15.836422 kubelet[2539]: I0124 00:40:15.836251 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2fm5p" podStartSLOduration=17.836229835 podStartE2EDuration="17.836229835s" podCreationTimestamp="2026-01-24 00:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:40:15.833413917 +0000 UTC m=+24.280916976" watchObservedRunningTime="2026-01-24 00:40:15.836229835 +0000 UTC m=+24.283732894"
Jan 24 00:41:24.283219 systemd[1]: Started sshd@7-157.180.47.226:22-20.161.92.111:48164.service - OpenSSH per-connection server daemon (20.161.92.111:48164).
Jan 24 00:41:25.047754 sshd[3933]: Accepted publickey for core from 20.161.92.111 port 48164 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:41:25.050690 sshd[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:41:25.058451 systemd-logind[1476]: New session 8 of user core.
Jan 24 00:41:25.069553 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 24 00:41:25.728463 sshd[3933]: pam_unix(sshd:session): session closed for user core
Jan 24 00:41:25.735172 systemd-logind[1476]: Session 8 logged out. Waiting for processes to exit.
Jan 24 00:41:25.735936 systemd[1]: sshd@7-157.180.47.226:22-20.161.92.111:48164.service: Deactivated successfully.
Jan 24 00:41:25.739902 systemd[1]: session-8.scope: Deactivated successfully.
Jan 24 00:41:25.742446 systemd-logind[1476]: Removed session 8.
Jan 24 00:41:30.868793 systemd[1]: Started sshd@8-157.180.47.226:22-20.161.92.111:48176.service - OpenSSH per-connection server daemon (20.161.92.111:48176).
Jan 24 00:41:31.643192 sshd[3949]: Accepted publickey for core from 20.161.92.111 port 48176 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:41:31.646104 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:41:31.655215 systemd-logind[1476]: New session 9 of user core.
Jan 24 00:41:31.660582 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 24 00:41:32.291013 sshd[3949]: pam_unix(sshd:session): session closed for user core
Jan 24 00:41:32.297497 systemd-logind[1476]: Session 9 logged out. Waiting for processes to exit.
Jan 24 00:41:32.298593 systemd[1]: sshd@8-157.180.47.226:22-20.161.92.111:48176.service: Deactivated successfully.
Jan 24 00:41:32.303126 systemd[1]: session-9.scope: Deactivated successfully.
Jan 24 00:41:32.305756 systemd-logind[1476]: Removed session 9.
Jan 24 00:41:37.434787 systemd[1]: Started sshd@9-157.180.47.226:22-20.161.92.111:48250.service - OpenSSH per-connection server daemon (20.161.92.111:48250).
Jan 24 00:41:38.203598 sshd[3963]: Accepted publickey for core from 20.161.92.111 port 48250 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:41:38.206541 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:41:38.214807 systemd-logind[1476]: New session 10 of user core.
Jan 24 00:41:38.218568 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 24 00:41:38.828599 sshd[3963]: pam_unix(sshd:session): session closed for user core
Jan 24 00:41:38.833373 systemd[1]: sshd@9-157.180.47.226:22-20.161.92.111:48250.service: Deactivated successfully.
Jan 24 00:41:38.836728 systemd[1]: session-10.scope: Deactivated successfully.
Jan 24 00:41:38.837978 systemd-logind[1476]: Session 10 logged out. Waiting for processes to exit.
Jan 24 00:41:38.839582 systemd-logind[1476]: Removed session 10.
Jan 24 00:41:38.963673 systemd[1]: Started sshd@10-157.180.47.226:22-20.161.92.111:48252.service - OpenSSH per-connection server daemon (20.161.92.111:48252).
Jan 24 00:41:39.741859 sshd[3977]: Accepted publickey for core from 20.161.92.111 port 48252 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:41:39.744663 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:41:39.755288 systemd-logind[1476]: New session 11 of user core.
Jan 24 00:41:39.762571 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 24 00:41:40.421280 sshd[3977]: pam_unix(sshd:session): session closed for user core
Jan 24 00:41:40.426020 systemd[1]: sshd@10-157.180.47.226:22-20.161.92.111:48252.service: Deactivated successfully.
Jan 24 00:41:40.429773 systemd[1]: session-11.scope: Deactivated successfully.
Jan 24 00:41:40.432175 systemd-logind[1476]: Session 11 logged out. Waiting for processes to exit.
Jan 24 00:41:40.433505 systemd-logind[1476]: Removed session 11.
Jan 24 00:41:40.558706 systemd[1]: Started sshd@11-157.180.47.226:22-20.161.92.111:48264.service - OpenSSH per-connection server daemon (20.161.92.111:48264).
Jan 24 00:41:41.332787 sshd[3988]: Accepted publickey for core from 20.161.92.111 port 48264 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:41:41.335085 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:41:41.344115 systemd-logind[1476]: New session 12 of user core.
Jan 24 00:41:41.355621 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 24 00:41:41.959569 sshd[3988]: pam_unix(sshd:session): session closed for user core
Jan 24 00:41:41.964774 systemd[1]: sshd@11-157.180.47.226:22-20.161.92.111:48264.service: Deactivated successfully.
Jan 24 00:41:41.969053 systemd[1]: session-12.scope: Deactivated successfully.
Jan 24 00:41:41.972445 systemd-logind[1476]: Session 12 logged out. Waiting for processes to exit.
Jan 24 00:41:41.974737 systemd-logind[1476]: Removed session 12.
Jan 24 00:41:47.098772 systemd[1]: Started sshd@12-157.180.47.226:22-20.161.92.111:56378.service - OpenSSH per-connection server daemon (20.161.92.111:56378).
Jan 24 00:41:47.871212 sshd[4001]: Accepted publickey for core from 20.161.92.111 port 56378 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:41:47.874147 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:41:47.881394 systemd-logind[1476]: New session 13 of user core.
Jan 24 00:41:47.890643 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 24 00:41:48.500834 sshd[4001]: pam_unix(sshd:session): session closed for user core
Jan 24 00:41:48.505905 systemd[1]: sshd@12-157.180.47.226:22-20.161.92.111:56378.service: Deactivated successfully.
Jan 24 00:41:48.509998 systemd[1]: session-13.scope: Deactivated successfully.
Jan 24 00:41:48.512641 systemd-logind[1476]: Session 13 logged out. Waiting for processes to exit.
Jan 24 00:41:48.514884 systemd-logind[1476]: Removed session 13.
Jan 24 00:41:48.642059 systemd[1]: Started sshd@13-157.180.47.226:22-20.161.92.111:56394.service - OpenSSH per-connection server daemon (20.161.92.111:56394).
Jan 24 00:41:49.413198 sshd[4014]: Accepted publickey for core from 20.161.92.111 port 56394 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:41:49.414449 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:41:49.423260 systemd-logind[1476]: New session 14 of user core.
Jan 24 00:41:49.428666 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 24 00:41:50.088827 sshd[4014]: pam_unix(sshd:session): session closed for user core
Jan 24 00:41:50.095294 systemd-logind[1476]: Session 14 logged out. Waiting for processes to exit.
Jan 24 00:41:50.096465 systemd[1]: sshd@13-157.180.47.226:22-20.161.92.111:56394.service: Deactivated successfully.
Jan 24 00:41:50.099958 systemd[1]: session-14.scope: Deactivated successfully.
Jan 24 00:41:50.101747 systemd-logind[1476]: Removed session 14.
Jan 24 00:41:50.226731 systemd[1]: Started sshd@14-157.180.47.226:22-20.161.92.111:56402.service - OpenSSH per-connection server daemon (20.161.92.111:56402).
Jan 24 00:41:50.991433 sshd[4025]: Accepted publickey for core from 20.161.92.111 port 56402 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:41:50.994249 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:41:51.002445 systemd-logind[1476]: New session 15 of user core.
Jan 24 00:41:51.009593 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 24 00:41:52.053523 sshd[4025]: pam_unix(sshd:session): session closed for user core
Jan 24 00:41:52.060262 systemd-logind[1476]: Session 15 logged out. Waiting for processes to exit.
Jan 24 00:41:52.061036 systemd[1]: sshd@14-157.180.47.226:22-20.161.92.111:56402.service: Deactivated successfully.
Jan 24 00:41:52.065673 systemd[1]: session-15.scope: Deactivated successfully.
Jan 24 00:41:52.067579 systemd-logind[1476]: Removed session 15.
Jan 24 00:41:52.192764 systemd[1]: Started sshd@15-157.180.47.226:22-20.161.92.111:56406.service - OpenSSH per-connection server daemon (20.161.92.111:56406).
Jan 24 00:41:52.963878 sshd[4045]: Accepted publickey for core from 20.161.92.111 port 56406 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:41:52.966666 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:41:52.974492 systemd-logind[1476]: New session 16 of user core.
Jan 24 00:41:52.981576 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 24 00:41:53.741916 sshd[4045]: pam_unix(sshd:session): session closed for user core
Jan 24 00:41:53.749045 systemd[1]: sshd@15-157.180.47.226:22-20.161.92.111:56406.service: Deactivated successfully.
Jan 24 00:41:53.754017 systemd[1]: session-16.scope: Deactivated successfully.
Jan 24 00:41:53.756741 systemd-logind[1476]: Session 16 logged out. Waiting for processes to exit.
Jan 24 00:41:53.758175 systemd-logind[1476]: Removed session 16.
Jan 24 00:41:53.884783 systemd[1]: Started sshd@16-157.180.47.226:22-20.161.92.111:54906.service - OpenSSH per-connection server daemon (20.161.92.111:54906).
Jan 24 00:41:54.643385 sshd[4056]: Accepted publickey for core from 20.161.92.111 port 54906 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:41:54.645655 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:41:54.653780 systemd-logind[1476]: New session 17 of user core.
Jan 24 00:41:54.659559 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 24 00:41:55.239593 sshd[4056]: pam_unix(sshd:session): session closed for user core
Jan 24 00:41:55.246083 systemd-logind[1476]: Session 17 logged out. Waiting for processes to exit.
Jan 24 00:41:55.247618 systemd[1]: sshd@16-157.180.47.226:22-20.161.92.111:54906.service: Deactivated successfully.
Jan 24 00:41:55.251624 systemd[1]: session-17.scope: Deactivated successfully.
Jan 24 00:41:55.253170 systemd-logind[1476]: Removed session 17.
Jan 24 00:42:00.383841 systemd[1]: Started sshd@17-157.180.47.226:22-20.161.92.111:54922.service - OpenSSH per-connection server daemon (20.161.92.111:54922).
Jan 24 00:42:01.154945 sshd[4073]: Accepted publickey for core from 20.161.92.111 port 54922 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:42:01.157973 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:42:01.167466 systemd-logind[1476]: New session 18 of user core.
Jan 24 00:42:01.174581 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 24 00:42:01.784291 sshd[4073]: pam_unix(sshd:session): session closed for user core
Jan 24 00:42:01.791783 systemd[1]: sshd@17-157.180.47.226:22-20.161.92.111:54922.service: Deactivated successfully.
Jan 24 00:42:01.796405 systemd[1]: session-18.scope: Deactivated successfully.
Jan 24 00:42:01.797839 systemd-logind[1476]: Session 18 logged out. Waiting for processes to exit.
Jan 24 00:42:01.799691 systemd-logind[1476]: Removed session 18.
Jan 24 00:42:06.923821 systemd[1]: Started sshd@18-157.180.47.226:22-20.161.92.111:40710.service - OpenSSH per-connection server daemon (20.161.92.111:40710).
Jan 24 00:42:07.698814 sshd[4086]: Accepted publickey for core from 20.161.92.111 port 40710 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:42:07.701914 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:42:07.709563 systemd-logind[1476]: New session 19 of user core.
Jan 24 00:42:07.720437 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 24 00:42:08.319286 sshd[4086]: pam_unix(sshd:session): session closed for user core
Jan 24 00:42:08.324061 systemd[1]: sshd@18-157.180.47.226:22-20.161.92.111:40710.service: Deactivated successfully.
Jan 24 00:42:08.328424 systemd[1]: session-19.scope: Deactivated successfully.
Jan 24 00:42:08.331414 systemd-logind[1476]: Session 19 logged out. Waiting for processes to exit.
Jan 24 00:42:08.333369 systemd-logind[1476]: Removed session 19.
Jan 24 00:42:13.459743 systemd[1]: Started sshd@19-157.180.47.226:22-20.161.92.111:33926.service - OpenSSH per-connection server daemon (20.161.92.111:33926).
Jan 24 00:42:14.227090 sshd[4099]: Accepted publickey for core from 20.161.92.111 port 33926 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:42:14.229956 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:42:14.238296 systemd-logind[1476]: New session 20 of user core.
Jan 24 00:42:14.245533 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 24 00:42:14.855579 sshd[4099]: pam_unix(sshd:session): session closed for user core
Jan 24 00:42:14.862843 systemd-logind[1476]: Session 20 logged out. Waiting for processes to exit.
Jan 24 00:42:14.864356 systemd[1]: sshd@19-157.180.47.226:22-20.161.92.111:33926.service: Deactivated successfully.
Jan 24 00:42:14.869240 systemd[1]: session-20.scope: Deactivated successfully.
Jan 24 00:42:14.871210 systemd-logind[1476]: Removed session 20.
Jan 24 00:42:14.993751 systemd[1]: Started sshd@20-157.180.47.226:22-20.161.92.111:33938.service - OpenSSH per-connection server daemon (20.161.92.111:33938).
Jan 24 00:42:15.765134 sshd[4112]: Accepted publickey for core from 20.161.92.111 port 33938 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:42:15.768395 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:42:15.777471 systemd-logind[1476]: New session 21 of user core.
Jan 24 00:42:15.785753 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 24 00:42:17.478914 kubelet[2539]: I0124 00:42:17.478779 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-66fzd" podStartSLOduration=139.478746252 podStartE2EDuration="2m19.478746252s" podCreationTimestamp="2026-01-24 00:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:40:15.873579599 +0000 UTC m=+24.321082658" watchObservedRunningTime="2026-01-24 00:42:17.478746252 +0000 UTC m=+145.926249321"
Jan 24 00:42:17.521072 containerd[1508]: time="2026-01-24T00:42:17.520880391Z" level=info msg="StopContainer for \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\" with timeout 30 (s)"
Jan 24 00:42:17.529456 containerd[1508]: time="2026-01-24T00:42:17.524496765Z" level=info msg="Stop container \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\" with signal terminated"
Jan 24 00:42:17.536108 systemd[1]: run-containerd-runc-k8s.io-89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c-runc.lpR6Fs.mount: Deactivated successfully.
Jan 24 00:42:17.554725 systemd[1]: cri-containerd-78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a.scope: Deactivated successfully.
Jan 24 00:42:17.558369 containerd[1508]: time="2026-01-24T00:42:17.557626427Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 24 00:42:17.570073 containerd[1508]: time="2026-01-24T00:42:17.570052131Z" level=info msg="StopContainer for \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\" with timeout 2 (s)"
Jan 24 00:42:17.570428 containerd[1508]: time="2026-01-24T00:42:17.570390323Z" level=info msg="Stop container \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\" with signal terminated"
Jan 24 00:42:17.577141 systemd-networkd[1403]: lxc_health: Link DOWN
Jan 24 00:42:17.577155 systemd-networkd[1403]: lxc_health: Lost carrier
Jan 24 00:42:17.597165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a-rootfs.mount: Deactivated successfully.
Jan 24 00:42:17.599741 systemd[1]: cri-containerd-89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c.scope: Deactivated successfully.
Jan 24 00:42:17.600313 systemd[1]: cri-containerd-89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c.scope: Consumed 6.452s CPU time.
Jan 24 00:42:17.617606 containerd[1508]: time="2026-01-24T00:42:17.617423480Z" level=info msg="shim disconnected" id=78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a namespace=k8s.io
Jan 24 00:42:17.617606 containerd[1508]: time="2026-01-24T00:42:17.617534244Z" level=warning msg="cleaning up after shim disconnected" id=78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a namespace=k8s.io
Jan 24 00:42:17.617606 containerd[1508]: time="2026-01-24T00:42:17.617542094Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:42:17.633497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c-rootfs.mount: Deactivated successfully.
Jan 24 00:42:17.634186 containerd[1508]: time="2026-01-24T00:42:17.634039297Z" level=info msg="shim disconnected" id=89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c namespace=k8s.io
Jan 24 00:42:17.634186 containerd[1508]: time="2026-01-24T00:42:17.634097469Z" level=warning msg="cleaning up after shim disconnected" id=89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c namespace=k8s.io
Jan 24 00:42:17.634186 containerd[1508]: time="2026-01-24T00:42:17.634104499Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:42:17.642690 containerd[1508]: time="2026-01-24T00:42:17.642659142Z" level=info msg="StopContainer for \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\" returns successfully"
Jan 24 00:42:17.643533 containerd[1508]: time="2026-01-24T00:42:17.643446668Z" level=info msg="StopPodSandbox for \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\""
Jan 24 00:42:17.643533 containerd[1508]: time="2026-01-24T00:42:17.643469019Z" level=info msg="Container to stop \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 00:42:17.645660 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e-shm.mount: Deactivated successfully.
Jan 24 00:42:17.652106 systemd[1]: cri-containerd-8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e.scope: Deactivated successfully.
Jan 24 00:42:17.669209 containerd[1508]: time="2026-01-24T00:42:17.669117216Z" level=info msg="StopContainer for \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\" returns successfully"
Jan 24 00:42:17.669865 containerd[1508]: time="2026-01-24T00:42:17.669837800Z" level=info msg="StopPodSandbox for \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\""
Jan 24 00:42:17.669897 containerd[1508]: time="2026-01-24T00:42:17.669879191Z" level=info msg="Container to stop \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 00:42:17.669926 containerd[1508]: time="2026-01-24T00:42:17.669895212Z" level=info msg="Container to stop \"1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 00:42:17.669926 containerd[1508]: time="2026-01-24T00:42:17.669909352Z" level=info msg="Container to stop \"37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 00:42:17.669964 containerd[1508]: time="2026-01-24T00:42:17.669922033Z" level=info msg="Container to stop \"5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 00:42:17.669964 containerd[1508]: time="2026-01-24T00:42:17.669934463Z" level=info msg="Container to stop \"a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 00:42:17.680919 systemd[1]: cri-containerd-01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f.scope: Deactivated successfully.
Jan 24 00:42:17.708913 containerd[1508]: time="2026-01-24T00:42:17.707019041Z" level=info msg="shim disconnected" id=01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f namespace=k8s.io
Jan 24 00:42:17.708913 containerd[1508]: time="2026-01-24T00:42:17.708868844Z" level=warning msg="cleaning up after shim disconnected" id=01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f namespace=k8s.io
Jan 24 00:42:17.708913 containerd[1508]: time="2026-01-24T00:42:17.708877645Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:42:17.709213 containerd[1508]: time="2026-01-24T00:42:17.708037875Z" level=info msg="shim disconnected" id=8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e namespace=k8s.io
Jan 24 00:42:17.709213 containerd[1508]: time="2026-01-24T00:42:17.708966348Z" level=warning msg="cleaning up after shim disconnected" id=8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e namespace=k8s.io
Jan 24 00:42:17.709213 containerd[1508]: time="2026-01-24T00:42:17.708972838Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:42:17.720394 containerd[1508]: time="2026-01-24T00:42:17.720221811Z" level=info msg="TearDown network for sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" successfully"
Jan 24 00:42:17.720394 containerd[1508]: time="2026-01-24T00:42:17.720245052Z" level=info msg="StopPodSandbox for \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" returns successfully"
Jan 24 00:42:17.721011 containerd[1508]: time="2026-01-24T00:42:17.720964168Z" level=info msg="TearDown network for sandbox \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\" successfully"
Jan 24 00:42:17.721011 containerd[1508]: time="2026-01-24T00:42:17.720977328Z" level=info msg="StopPodSandbox for \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\" returns successfully"
Jan 24 00:42:17.880740 kubelet[2539]: I0124 00:42:17.880639 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-host-proc-sys-kernel\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.880740 kubelet[2539]: I0124 00:42:17.880704 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-bpf-maps\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.880740 kubelet[2539]: I0124 00:42:17.880736 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-xtables-lock\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.881106 kubelet[2539]: I0124 00:42:17.880772 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04db0244-0ac2-4446-b5b4-c0636ec145b2-cilium-config-path\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.881106 kubelet[2539]: I0124 00:42:17.880801 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-hostproc\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.881106 kubelet[2539]: I0124 00:42:17.880832 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04db0244-0ac2-4446-b5b4-c0636ec145b2-hubble-tls\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.881106 kubelet[2539]: I0124 00:42:17.880856 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-etc-cni-netd\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.881106 kubelet[2539]: I0124 00:42:17.880881 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-cilium-cgroup\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.881106 kubelet[2539]: I0124 00:42:17.880907 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04db0244-0ac2-4446-b5b4-c0636ec145b2-clustermesh-secrets\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.881438 kubelet[2539]: I0124 00:42:17.880930 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-cilium-run\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.881438 kubelet[2539]: I0124 00:42:17.880953 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-cni-path\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.881438 kubelet[2539]: I0124 00:42:17.880992 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-host-proc-sys-net\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.881438 kubelet[2539]: I0124 00:42:17.881021 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e23b35f-d14d-410c-a863-c95c14d20422-cilium-config-path\") pod \"0e23b35f-d14d-410c-a863-c95c14d20422\" (UID: \"0e23b35f-d14d-410c-a863-c95c14d20422\") "
Jan 24 00:42:17.881438 kubelet[2539]: I0124 00:42:17.881049 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-789jl\" (UniqueName: \"kubernetes.io/projected/04db0244-0ac2-4446-b5b4-c0636ec145b2-kube-api-access-789jl\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.881438 kubelet[2539]: I0124 00:42:17.881081 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-lib-modules\") pod \"04db0244-0ac2-4446-b5b4-c0636ec145b2\" (UID: \"04db0244-0ac2-4446-b5b4-c0636ec145b2\") "
Jan 24 00:42:17.881703 kubelet[2539]: I0124 00:42:17.881128 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-224zd\" (UniqueName: \"kubernetes.io/projected/0e23b35f-d14d-410c-a863-c95c14d20422-kube-api-access-224zd\") pod \"0e23b35f-d14d-410c-a863-c95c14d20422\" (UID: \"0e23b35f-d14d-410c-a863-c95c14d20422\") "
Jan 24 00:42:17.883352 kubelet[2539]: I0124 00:42:17.881823 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 00:42:17.883352 kubelet[2539]: I0124 00:42:17.881902 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 00:42:17.883352 kubelet[2539]: I0124 00:42:17.881933 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 00:42:17.883352 kubelet[2539]: I0124 00:42:17.881961 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 00:42:17.887493 kubelet[2539]: I0124 00:42:17.887459 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 00:42:17.887635 kubelet[2539]: I0124 00:42:17.887613 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-cni-path" (OuterVolumeSpecName: "cni-path") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 00:42:17.887736 kubelet[2539]: I0124 00:42:17.887716 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 00:42:17.887828 kubelet[2539]: I0124 00:42:17.887602 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-hostproc" (OuterVolumeSpecName: "hostproc") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 00:42:17.888583 kubelet[2539]: I0124 00:42:17.888523 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 00:42:17.893526 kubelet[2539]: I0124 00:42:17.893481 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 00:42:17.895859 kubelet[2539]: I0124 00:42:17.895777 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e23b35f-d14d-410c-a863-c95c14d20422-kube-api-access-224zd" (OuterVolumeSpecName: "kube-api-access-224zd") pod "0e23b35f-d14d-410c-a863-c95c14d20422" (UID: "0e23b35f-d14d-410c-a863-c95c14d20422"). InnerVolumeSpecName "kube-api-access-224zd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 24 00:42:17.898173 kubelet[2539]: I0124 00:42:17.898074 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04db0244-0ac2-4446-b5b4-c0636ec145b2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 24 00:42:17.901044 kubelet[2539]: I0124 00:42:17.900994 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04db0244-0ac2-4446-b5b4-c0636ec145b2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 24 00:42:17.902586 kubelet[2539]: I0124 00:42:17.902544 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04db0244-0ac2-4446-b5b4-c0636ec145b2-kube-api-access-789jl" (OuterVolumeSpecName: "kube-api-access-789jl") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "kube-api-access-789jl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 24 00:42:17.902784 kubelet[2539]: I0124 00:42:17.902749 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04db0244-0ac2-4446-b5b4-c0636ec145b2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "04db0244-0ac2-4446-b5b4-c0636ec145b2" (UID: "04db0244-0ac2-4446-b5b4-c0636ec145b2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 24 00:42:17.904755 kubelet[2539]: I0124 00:42:17.904694 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e23b35f-d14d-410c-a863-c95c14d20422-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0e23b35f-d14d-410c-a863-c95c14d20422" (UID: "0e23b35f-d14d-410c-a863-c95c14d20422"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 24 00:42:17.982244 kubelet[2539]: I0124 00:42:17.982179 2539 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04db0244-0ac2-4446-b5b4-c0636ec145b2-hubble-tls\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982244 kubelet[2539]: I0124 00:42:17.982223 2539 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-etc-cni-netd\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982244 kubelet[2539]: I0124 00:42:17.982243 2539 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-cni-path\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982244 kubelet[2539]: I0124 00:42:17.982257 2539 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-cilium-cgroup\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982566 kubelet[2539]: I0124 00:42:17.982271 2539 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04db0244-0ac2-4446-b5b4-c0636ec145b2-clustermesh-secrets\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982566 kubelet[2539]: I0124 00:42:17.982288 2539 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-cilium-run\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982566 kubelet[2539]: I0124 00:42:17.982303 2539 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-host-proc-sys-net\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982566 kubelet[2539]: I0124 00:42:17.982388 2539 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e23b35f-d14d-410c-a863-c95c14d20422-cilium-config-path\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982566 kubelet[2539]: I0124 00:42:17.982405 2539 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-789jl\" (UniqueName: \"kubernetes.io/projected/04db0244-0ac2-4446-b5b4-c0636ec145b2-kube-api-access-789jl\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982566 kubelet[2539]: I0124 00:42:17.982423 2539 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-lib-modules\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982566 kubelet[2539]: I0124 00:42:17.982437 2539 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-224zd\" (UniqueName: \"kubernetes.io/projected/0e23b35f-d14d-410c-a863-c95c14d20422-kube-api-access-224zd\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982566 kubelet[2539]: I0124 00:42:17.982452 2539 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982944 kubelet[2539]: I0124 00:42:17.982466 2539 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-bpf-maps\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982944 kubelet[2539]: I0124 00:42:17.982480 2539 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-xtables-lock\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982944 kubelet[2539]: I0124 00:42:17.982496 2539 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04db0244-0ac2-4446-b5b4-c0636ec145b2-cilium-config-path\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:17.982944 kubelet[2539]: I0124 00:42:17.982509 2539 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04db0244-0ac2-4446-b5b4-c0636ec145b2-hostproc\") on node \"ci-4081-3-6-n-a6966cf543\" DevicePath \"\""
Jan 24 00:42:18.095908 kubelet[2539]: I0124 00:42:18.095844 2539 scope.go:117] "RemoveContainer" containerID="78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a"
Jan 24 00:42:18.099673 containerd[1508]: time="2026-01-24T00:42:18.099432163Z" level=info msg="RemoveContainer for \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\""
Jan 24 00:42:18.111533 containerd[1508]: time="2026-01-24T00:42:18.111137060Z" level=info msg="RemoveContainer for \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\" returns successfully"
Jan 24 00:42:18.111650 kubelet[2539]: I0124 00:42:18.111608 2539 scope.go:117] "RemoveContainer" containerID="78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a"
Jan 24 00:42:18.112803 containerd[1508]: time="2026-01-24T00:42:18.112585300Z" level=error msg="ContainerStatus for \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\": not found"
Jan 24 00:42:18.113800 kubelet[2539]: E0124 00:42:18.113395 2539 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\": not found" containerID="78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a"
Jan 24 00:42:18.114115 kubelet[2539]: I0124 00:42:18.113720 2539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a"} err="failed to get container status \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\": rpc error: code = NotFound desc = an error occurred when try to find container \"78715632cb28fd06ae92123ba9d1950e9c956356443011a04f2ee3eeb0f7147a\": not found"
Jan 24 00:42:18.114375 kubelet[2539]: I0124 00:42:18.114238 2539 scope.go:117] "RemoveContainer" containerID="89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c"
Jan 24 00:42:18.117814 systemd[1]: Removed slice kubepods-besteffort-pod0e23b35f_d14d_410c_a863_c95c14d20422.slice - libcontainer container kubepods-besteffort-pod0e23b35f_d14d_410c_a863_c95c14d20422.slice.
Jan 24 00:42:18.122219 containerd[1508]: time="2026-01-24T00:42:18.121935615Z" level=info msg="RemoveContainer for \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\""
Jan 24 00:42:18.130080 containerd[1508]: time="2026-01-24T00:42:18.130015195Z" level=info msg="RemoveContainer for \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\" returns successfully"
Jan 24 00:42:18.130500 kubelet[2539]: I0124 00:42:18.130450 2539 scope.go:117] "RemoveContainer" containerID="a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70"
Jan 24 00:42:18.136240 containerd[1508]: time="2026-01-24T00:42:18.134981958Z" level=info msg="RemoveContainer for \"a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70\""
Jan 24 00:42:18.140118 systemd[1]: Removed slice kubepods-burstable-pod04db0244_0ac2_4446_b5b4_c0636ec145b2.slice - libcontainer container kubepods-burstable-pod04db0244_0ac2_4446_b5b4_c0636ec145b2.slice.
Jan 24 00:42:18.143199 containerd[1508]: time="2026-01-24T00:42:18.141503045Z" level=info msg="RemoveContainer for \"a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70\" returns successfully"
Jan 24 00:42:18.140280 systemd[1]: kubepods-burstable-pod04db0244_0ac2_4446_b5b4_c0636ec145b2.slice: Consumed 6.554s CPU time.
Jan 24 00:42:18.144276 kubelet[2539]: I0124 00:42:18.144234 2539 scope.go:117] "RemoveContainer" containerID="37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30"
Jan 24 00:42:18.146139 containerd[1508]: time="2026-01-24T00:42:18.146055323Z" level=info msg="RemoveContainer for \"37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30\""
Jan 24 00:42:18.152924 containerd[1508]: time="2026-01-24T00:42:18.152842839Z" level=info msg="RemoveContainer for \"37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30\" returns successfully"
Jan 24 00:42:18.153474 kubelet[2539]: I0124 00:42:18.153423 2539 scope.go:117] "RemoveContainer" containerID="5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98"
Jan 24 00:42:18.159391 containerd[1508]: time="2026-01-24T00:42:18.159252861Z" level=info msg="RemoveContainer for \"5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98\""
Jan 24 00:42:18.170828 containerd[1508]: time="2026-01-24T00:42:18.170784011Z" level=info msg="RemoveContainer for \"5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98\" returns successfully"
Jan 24 00:42:18.171602 kubelet[2539]: I0124 00:42:18.171495 2539 scope.go:117] "RemoveContainer" containerID="1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946"
Jan 24 00:42:18.175823 containerd[1508]: time="2026-01-24T00:42:18.175771754Z" level=info msg="RemoveContainer for \"1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946\""
Jan 24 00:42:18.181359 containerd[1508]: time="2026-01-24T00:42:18.181295996Z" level=info msg="RemoveContainer for \"1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946\" returns successfully"
Jan 24 00:42:18.181601 kubelet[2539]: I0124 00:42:18.181563 2539 scope.go:117] "RemoveContainer" containerID="89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c"
Jan 24 00:42:18.182238 containerd[1508]: time="2026-01-24T00:42:18.181839895Z" level=error msg="ContainerStatus for \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\": not found"
Jan 24 00:42:18.182356 kubelet[2539]: E0124 00:42:18.182046 2539 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\": not found" containerID="89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c"
Jan 24 00:42:18.182356 kubelet[2539]: I0124 00:42:18.182083 2539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c"} err="failed to get container status \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\": rpc error: code = NotFound desc = an error occurred when try to find container \"89e1c0c4cfbca4ad09af2627ba4ba0afb13197be5efb767865d4d8988fec241c\": not found"
Jan 24 00:42:18.182356 kubelet[2539]: I0124 00:42:18.182110 2539 scope.go:117] "RemoveContainer" containerID="a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70"
Jan 24 00:42:18.182837 containerd[1508]: time="2026-01-24T00:42:18.182693755Z" level=error msg="ContainerStatus for \"a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70\": not found"
Jan 24 00:42:18.183015 kubelet[2539]: E0124 00:42:18.182984 2539 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70\": not found" containerID="a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70"
Jan 24 00:42:18.183072 kubelet[2539]: I0124 00:42:18.183016 2539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70"} err="failed to get container status \"a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70\": rpc error: code = NotFound desc = an error occurred when try to find container \"a048f5f2174260e9954a4c35cb0b2c2d5a1d64fea30752608a7d47224cc4ca70\": not found"
Jan 24 00:42:18.183072 kubelet[2539]: I0124 00:42:18.183045 2539 scope.go:117] "RemoveContainer" containerID="37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30"
Jan 24 00:42:18.183407 containerd[1508]: time="2026-01-24T00:42:18.183294245Z" level=error msg="ContainerStatus for \"37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30\": not found"
Jan 24 00:42:18.183624 kubelet[2539]: E0124 00:42:18.183559 2539 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30\": not found" containerID="37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30"
Jan 24 00:42:18.183624 kubelet[2539]: I0124 00:42:18.183595 2539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30"} err="failed to get container status \"37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30\": rpc error: code = NotFound desc = an error occurred when try to find container \"37a2ba8cee157c2f73dc7b405b917374fcd8fd65079a71f4753e3602a036ec30\": not found"
Jan 24 00:42:18.183624 kubelet[2539]: I0124 00:42:18.183617 2539 scope.go:117] "RemoveContainer" containerID="5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98"
Jan 24 00:42:18.183923 containerd[1508]: time="2026-01-24T00:42:18.183829934Z" level=error msg="ContainerStatus for \"5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98\": not found"
Jan 24 00:42:18.184113 kubelet[2539]: E0124 00:42:18.184025 2539 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98\": not found" containerID="5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98"
Jan 24 00:42:18.184196 kubelet[2539]: I0124 00:42:18.184105 2539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98"} err="failed to get container status \"5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f64c978e00c18df57dfa36b1f5907e005c7060d967b9ef738f2d4dc3efdda98\": not found"
Jan 24 00:42:18.184196 kubelet[2539]: I0124 00:42:18.184127 2539 scope.go:117] "RemoveContainer" containerID="1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946"
Jan 24 00:42:18.184413 containerd[1508]: time="2026-01-24T00:42:18.184364213Z" level=error msg="ContainerStatus for \"1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946\": not found"
Jan 24 00:42:18.184546 kubelet[2539]: E0124 00:42:18.184504 2539 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946\": not found" containerID="1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946"
Jan 24 00:42:18.184675 kubelet[2539]: I0124 00:42:18.184542 2539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946"} err="failed to get container status \"1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946\": rpc error: code = NotFound desc = an error occurred when try to find container \"1bfc526ca3ea0b4c3d4f4238baf23ddb1d4bfcebc597e81316c43f85c86e0946\": not found"
Jan 24 00:42:18.520616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e-rootfs.mount: Deactivated successfully.
Jan 24 00:42:18.520864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f-rootfs.mount: Deactivated successfully.
Jan 24 00:42:18.521009 systemd[1]: var-lib-kubelet-pods-0e23b35f\x2dd14d\x2d410c\x2da863\x2dc95c14d20422-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d224zd.mount: Deactivated successfully.
Jan 24 00:42:18.521142 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f-shm.mount: Deactivated successfully.
Jan 24 00:42:18.521295 systemd[1]: var-lib-kubelet-pods-04db0244\x2d0ac2\x2d4446\x2db5b4\x2dc0636ec145b2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d789jl.mount: Deactivated successfully.
Jan 24 00:42:18.521457 systemd[1]: var-lib-kubelet-pods-04db0244\x2d0ac2\x2d4446\x2db5b4\x2dc0636ec145b2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 24 00:42:18.521582 systemd[1]: var-lib-kubelet-pods-04db0244\x2d0ac2\x2d4446\x2db5b4\x2dc0636ec145b2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 24 00:42:19.558664 sshd[4112]: pam_unix(sshd:session): session closed for user core
Jan 24 00:42:19.565037 systemd[1]: sshd@20-157.180.47.226:22-20.161.92.111:33938.service: Deactivated successfully.
Jan 24 00:42:19.569476 systemd[1]: session-21.scope: Deactivated successfully.
Jan 24 00:42:19.572721 systemd-logind[1476]: Session 21 logged out. Waiting for processes to exit.
Jan 24 00:42:19.575095 systemd-logind[1476]: Removed session 21.
Jan 24 00:42:19.692007 kubelet[2539]: I0124 00:42:19.691773 2539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04db0244-0ac2-4446-b5b4-c0636ec145b2" path="/var/lib/kubelet/pods/04db0244-0ac2-4446-b5b4-c0636ec145b2/volumes"
Jan 24 00:42:19.693814 kubelet[2539]: I0124 00:42:19.693616 2539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e23b35f-d14d-410c-a863-c95c14d20422" path="/var/lib/kubelet/pods/0e23b35f-d14d-410c-a863-c95c14d20422/volumes"
Jan 24 00:42:19.699689 systemd[1]: Started sshd@21-157.180.47.226:22-20.161.92.111:33940.service - OpenSSH per-connection server daemon (20.161.92.111:33940).
Jan 24 00:42:20.468531 sshd[4275]: Accepted publickey for core from 20.161.92.111 port 33940 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:42:20.471665 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:42:20.481171 systemd-logind[1476]: New session 22 of user core.
Jan 24 00:42:20.486540 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 24 00:42:21.422838 kubelet[2539]: I0124 00:42:21.422750 2539 memory_manager.go:355] "RemoveStaleState removing state" podUID="04db0244-0ac2-4446-b5b4-c0636ec145b2" containerName="cilium-agent"
Jan 24 00:42:21.422838 kubelet[2539]: I0124 00:42:21.422778 2539 memory_manager.go:355] "RemoveStaleState removing state" podUID="0e23b35f-d14d-410c-a863-c95c14d20422" containerName="cilium-operator"
Jan 24 00:42:21.429320 kubelet[2539]: W0124 00:42:21.429217 2539 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-6-n-a6966cf543" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-n-a6966cf543' and this object
Jan 24 00:42:21.429320 kubelet[2539]: E0124 00:42:21.429254 2539 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4081-3-6-n-a6966cf543\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-a6966cf543' and this object" logger="UnhandledError"
Jan 24 00:42:21.429320 kubelet[2539]: I0124 00:42:21.429276 2539 status_manager.go:890] "Failed to get status for pod" podUID="8a9ff615-c0f3-49c6-8882-2fc147ad7d72" pod="kube-system/cilium-cxgpw" err="pods \"cilium-cxgpw\" is forbidden: User \"system:node:ci-4081-3-6-n-a6966cf543\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-a6966cf543' and this object"
Jan 24 00:42:21.429320 kubelet[2539]: W0124 00:42:21.429309 2539 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-6-n-a6966cf543" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-n-a6966cf543' and this object
Jan 24 00:42:21.429320 kubelet[2539]: E0124 00:42:21.429317 2539 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4081-3-6-n-a6966cf543\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-a6966cf543' and this object" logger="UnhandledError"
Jan 24 00:42:21.429802 kubelet[2539]: W0124 00:42:21.429356 2539 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4081-3-6-n-a6966cf543" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-n-a6966cf543' and this object
Jan 24 00:42:21.429802 kubelet[2539]: E0124 00:42:21.429362 2539 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4081-3-6-n-a6966cf543\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-a6966cf543' and this object" logger="UnhandledError"
Jan 24 00:42:21.429802 kubelet[2539]: W0124 00:42:21.429378 2539 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-6-n-a6966cf543" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-n-a6966cf543' and this object
Jan 24 00:42:21.429802 kubelet[2539]: E0124 00:42:21.429384 2539 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4081-3-6-n-a6966cf543\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-a6966cf543' and this object" logger="UnhandledError"
Jan 24 00:42:21.435207 systemd[1]: Created slice kubepods-burstable-pod8a9ff615_c0f3_49c6_8882_2fc147ad7d72.slice - libcontainer container kubepods-burstable-pod8a9ff615_c0f3_49c6_8882_2fc147ad7d72.slice.
Jan 24 00:42:21.506740 kubelet[2539]: I0124 00:42:21.506637 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-lib-modules\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.506740 kubelet[2539]: I0124 00:42:21.506724 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-cilium-ipsec-secrets\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.506991 kubelet[2539]: I0124 00:42:21.506771 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-hubble-tls\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.506991 kubelet[2539]: I0124 00:42:21.506812 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-hostproc\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.506991 kubelet[2539]: I0124 00:42:21.506850 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-cni-path\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.506991 kubelet[2539]: I0124 00:42:21.506888 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-cilium-cgroup\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.506991 kubelet[2539]: I0124 00:42:21.506925 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-xtables-lock\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.506991 kubelet[2539]: I0124 00:42:21.506960 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-clustermesh-secrets\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.507227 kubelet[2539]: I0124 00:42:21.507001 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nk9k\" (UniqueName: \"kubernetes.io/projected/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-kube-api-access-4nk9k\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.507227 kubelet[2539]: I0124 00:42:21.507042 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-cilium-run\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.507227 kubelet[2539]: I0124 00:42:21.507080 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-bpf-maps\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.507227 kubelet[2539]: I0124 00:42:21.507114 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-etc-cni-netd\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.507227 kubelet[2539]: I0124 00:42:21.507151 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-cilium-config-path\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.507227 kubelet[2539]: I0124 00:42:21.507187 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-host-proc-sys-net\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.507495 kubelet[2539]: I0124 00:42:21.507224 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-host-proc-sys-kernel\") pod \"cilium-cxgpw\" (UID: \"8a9ff615-c0f3-49c6-8882-2fc147ad7d72\") " pod="kube-system/cilium-cxgpw"
Jan 24 00:42:21.558618 sshd[4275]: pam_unix(sshd:session): session closed for user core
Jan 24 00:42:21.568759 systemd[1]: sshd@21-157.180.47.226:22-20.161.92.111:33940.service: Deactivated successfully.
Jan 24 00:42:21.573083 systemd[1]: session-22.scope: Deactivated successfully.
Jan 24 00:42:21.574681 systemd-logind[1476]: Session 22 logged out. Waiting for processes to exit.
Jan 24 00:42:21.577073 systemd-logind[1476]: Removed session 22.
Jan 24 00:42:21.701559 systemd[1]: Started sshd@22-157.180.47.226:22-20.161.92.111:33950.service - OpenSSH per-connection server daemon (20.161.92.111:33950).
Jan 24 00:42:21.790586 kubelet[2539]: E0124 00:42:21.790520 2539 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 24 00:42:22.478284 sshd[4288]: Accepted publickey for core from 20.161.92.111 port 33950 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:42:22.481730 sshd[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:42:22.490924 systemd-logind[1476]: New session 23 of user core.
Jan 24 00:42:22.499573 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 24 00:42:22.610658 kubelet[2539]: E0124 00:42:22.610584 2539 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Jan 24 00:42:22.610658 kubelet[2539]: E0124 00:42:22.610655 2539 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-cxgpw: failed to sync secret cache: timed out waiting for the condition
Jan 24 00:42:22.611442 kubelet[2539]: E0124 00:42:22.610743 2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-hubble-tls podName:8a9ff615-c0f3-49c6-8882-2fc147ad7d72 nodeName:}" failed. No retries permitted until 2026-01-24 00:42:23.110717356 +0000 UTC m=+151.558220415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-hubble-tls") pod "cilium-cxgpw" (UID: "8a9ff615-c0f3-49c6-8882-2fc147ad7d72") : failed to sync secret cache: timed out waiting for the condition
Jan 24 00:42:22.611442 kubelet[2539]: E0124 00:42:22.610582 2539 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Jan 24 00:42:22.611442 kubelet[2539]: E0124 00:42:22.610806 2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-cilium-ipsec-secrets podName:8a9ff615-c0f3-49c6-8882-2fc147ad7d72 nodeName:}" failed. No retries permitted until 2026-01-24 00:42:23.110794159 +0000 UTC m=+151.558297218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/8a9ff615-c0f3-49c6-8882-2fc147ad7d72-cilium-ipsec-secrets") pod "cilium-cxgpw" (UID: "8a9ff615-c0f3-49c6-8882-2fc147ad7d72") : failed to sync secret cache: timed out waiting for the condition
Jan 24 00:42:23.010385 sshd[4288]: pam_unix(sshd:session): session closed for user core
Jan 24 00:42:23.015501 systemd[1]: sshd@22-157.180.47.226:22-20.161.92.111:33950.service: Deactivated successfully.
Jan 24 00:42:23.019840 systemd[1]: session-23.scope: Deactivated successfully.
Jan 24 00:42:23.023155 systemd-logind[1476]: Session 23 logged out. Waiting for processes to exit.
Jan 24 00:42:23.025810 systemd-logind[1476]: Removed session 23.
Jan 24 00:42:23.161821 systemd[1]: Started sshd@23-157.180.47.226:22-20.161.92.111:41958.service - OpenSSH per-connection server daemon (20.161.92.111:41958).
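The "No retries permitted until ... (durationBeforeRetry 500ms)" entries reflect per-operation exponential backoff in the kubelet's volume manager: each failed MountVolume.SetUp is retried no sooner than a growing interval. A sketch of the same pattern using apimachinery's wait helper; the parameters and the stand-in mount function are illustrative, not the kubelet's actual code:

```go
// Sketch: exponential backoff around a failing volume-mount step.
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// mountVolume stands in for MountVolume.SetUp; here it always fails the way
// the log shows, until the secret cache would sync.
func mountVolume() error {
	return errors.New("failed to sync secret cache: timed out waiting for the condition")
}

func main() {
	// Start at 500ms like the log, doubling each attempt (values illustrative).
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 5}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if mountErr := mountVolume(); mountErr != nil {
			fmt.Println("retrying after backoff:", mountErr)
			return false, nil // not done; wait out the next interval
		}
		return true, nil // mounted successfully
	})
	fmt.Println("final result:", err)
}
```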
Jan 24 00:42:23.245537 containerd[1508]: time="2026-01-24T00:42:23.245440071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxgpw,Uid:8a9ff615-c0f3-49c6-8882-2fc147ad7d72,Namespace:kube-system,Attempt:0,}"
Jan 24 00:42:23.286066 containerd[1508]: time="2026-01-24T00:42:23.284696295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:42:23.286066 containerd[1508]: time="2026-01-24T00:42:23.285044927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:42:23.286066 containerd[1508]: time="2026-01-24T00:42:23.285111150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:42:23.288361 containerd[1508]: time="2026-01-24T00:42:23.287862682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:42:23.329601 systemd[1]: Started cri-containerd-3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c.scope - libcontainer container 3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c.
Jan 24 00:42:23.376871 containerd[1508]: time="2026-01-24T00:42:23.376820788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxgpw,Uid:8a9ff615-c0f3-49c6-8882-2fc147ad7d72,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c\""
Jan 24 00:42:23.384613 containerd[1508]: time="2026-01-24T00:42:23.384415241Z" level=info msg="CreateContainer within sandbox \"3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 24 00:42:23.402465 containerd[1508]: time="2026-01-24T00:42:23.402243877Z" level=info msg="CreateContainer within sandbox \"3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"632943f25af523da71b537f884a2e69c09f13516ce3acef3f0f4f75203e93008\""
Jan 24 00:42:23.404511 containerd[1508]: time="2026-01-24T00:42:23.403181981Z" level=info msg="StartContainer for \"632943f25af523da71b537f884a2e69c09f13516ce3acef3f0f4f75203e93008\""
Jan 24 00:42:23.450608 systemd[1]: Started cri-containerd-632943f25af523da71b537f884a2e69c09f13516ce3acef3f0f4f75203e93008.scope - libcontainer container 632943f25af523da71b537f884a2e69c09f13516ce3acef3f0f4f75203e93008.
Jan 24 00:42:23.508164 containerd[1508]: time="2026-01-24T00:42:23.507993179Z" level=info msg="StartContainer for \"632943f25af523da71b537f884a2e69c09f13516ce3acef3f0f4f75203e93008\" returns successfully"
Jan 24 00:42:23.528668 systemd[1]: cri-containerd-632943f25af523da71b537f884a2e69c09f13516ce3acef3f0f4f75203e93008.scope: Deactivated successfully.
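The sandbox id returned above is an ordinary containerd container living in the CRI "k8s.io" namespace, so it can be inspected with containerd's Go client. A sketch, assuming the default socket path; the container id is taken from the log:

```go
// Sketch: load the pod sandbox container from containerd's k8s.io namespace.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumption: default containerd socket path.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	c, err := client.LoadContainer(ctx, "3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c")
	if err != nil {
		panic(err)
	}
	info, err := c.Info(ctx)
	if err != nil {
		panic(err)
	}
	// For a sandbox this prints the pause image and io.containerd.runc.v2,
	// matching the runtime seen in the plugin-loading lines above.
	fmt.Println(info.Image, info.Runtime.Name)
}
```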
Jan 24 00:42:23.585281 containerd[1508]: time="2026-01-24T00:42:23.585127694Z" level=info msg="shim disconnected" id=632943f25af523da71b537f884a2e69c09f13516ce3acef3f0f4f75203e93008 namespace=k8s.io
Jan 24 00:42:23.585281 containerd[1508]: time="2026-01-24T00:42:23.585177706Z" level=warning msg="cleaning up after shim disconnected" id=632943f25af523da71b537f884a2e69c09f13516ce3acef3f0f4f75203e93008 namespace=k8s.io
Jan 24 00:42:23.587629 containerd[1508]: time="2026-01-24T00:42:23.585353013Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:42:23.936301 sshd[4299]: Accepted publickey for core from 20.161.92.111 port 41958 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:42:23.939194 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:42:23.947570 systemd-logind[1476]: New session 24 of user core.
Jan 24 00:42:23.953540 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 24 00:42:24.140131 containerd[1508]: time="2026-01-24T00:42:24.140059276Z" level=info msg="CreateContainer within sandbox \"3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 24 00:42:24.165717 containerd[1508]: time="2026-01-24T00:42:24.165651782Z" level=info msg="CreateContainer within sandbox \"3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"381970cc81d0a70b69f45f086c29f2d8daede47f427201ce5e3fc95e885be179\""
Jan 24 00:42:24.168535 containerd[1508]: time="2026-01-24T00:42:24.166650149Z" level=info msg="StartContainer for \"381970cc81d0a70b69f45f086c29f2d8daede47f427201ce5e3fc95e885be179\""
Jan 24 00:42:24.228630 systemd[1]: Started cri-containerd-381970cc81d0a70b69f45f086c29f2d8daede47f427201ce5e3fc95e885be179.scope - libcontainer container 381970cc81d0a70b69f45f086c29f2d8daede47f427201ce5e3fc95e885be179.
Jan 24 00:42:24.277837 containerd[1508]: time="2026-01-24T00:42:24.277794024Z" level=info msg="StartContainer for \"381970cc81d0a70b69f45f086c29f2d8daede47f427201ce5e3fc95e885be179\" returns successfully"
Jan 24 00:42:24.291402 systemd[1]: cri-containerd-381970cc81d0a70b69f45f086c29f2d8daede47f427201ce5e3fc95e885be179.scope: Deactivated successfully.
Jan 24 00:42:24.328644 containerd[1508]: time="2026-01-24T00:42:24.328535540Z" level=info msg="shim disconnected" id=381970cc81d0a70b69f45f086c29f2d8daede47f427201ce5e3fc95e885be179 namespace=k8s.io
Jan 24 00:42:24.328644 containerd[1508]: time="2026-01-24T00:42:24.328600993Z" level=warning msg="cleaning up after shim disconnected" id=381970cc81d0a70b69f45f086c29f2d8daede47f427201ce5e3fc95e885be179 namespace=k8s.io
Jan 24 00:42:24.328644 containerd[1508]: time="2026-01-24T00:42:24.328614383Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:42:24.654169 kubelet[2539]: I0124 00:42:24.654086 2539 setters.go:602] "Node became not ready" node="ci-4081-3-6-n-a6966cf543" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:42:24Z","lastTransitionTime":"2026-01-24T00:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 24 00:42:25.125831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-381970cc81d0a70b69f45f086c29f2d8daede47f427201ce5e3fc95e885be179-rootfs.mount: Deactivated successfully.
Jan 24 00:42:25.147428 containerd[1508]: time="2026-01-24T00:42:25.147295167Z" level=info msg="CreateContainer within sandbox \"3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 24 00:42:25.174727 containerd[1508]: time="2026-01-24T00:42:25.174660892Z" level=info msg="CreateContainer within sandbox \"3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2a7f21e1600a1b9dc8bd1c2bf375f1cab18420eefd61c658967cf85c8960b42c\""
Jan 24 00:42:25.178410 containerd[1508]: time="2026-01-24T00:42:25.175602847Z" level=info msg="StartContainer for \"2a7f21e1600a1b9dc8bd1c2bf375f1cab18420eefd61c658967cf85c8960b42c\""
Jan 24 00:42:25.176008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount497080235.mount: Deactivated successfully.
Jan 24 00:42:25.229570 systemd[1]: Started cri-containerd-2a7f21e1600a1b9dc8bd1c2bf375f1cab18420eefd61c658967cf85c8960b42c.scope - libcontainer container 2a7f21e1600a1b9dc8bd1c2bf375f1cab18420eefd61c658967cf85c8960b42c.
Jan 24 00:42:25.279419 containerd[1508]: time="2026-01-24T00:42:25.279367941Z" level=info msg="StartContainer for \"2a7f21e1600a1b9dc8bd1c2bf375f1cab18420eefd61c658967cf85c8960b42c\" returns successfully"
Jan 24 00:42:25.285740 systemd[1]: cri-containerd-2a7f21e1600a1b9dc8bd1c2bf375f1cab18420eefd61c658967cf85c8960b42c.scope: Deactivated successfully.
Jan 24 00:42:25.326031 containerd[1508]: time="2026-01-24T00:42:25.325945959Z" level=info msg="shim disconnected" id=2a7f21e1600a1b9dc8bd1c2bf375f1cab18420eefd61c658967cf85c8960b42c namespace=k8s.io
Jan 24 00:42:25.326031 containerd[1508]: time="2026-01-24T00:42:25.326016102Z" level=warning msg="cleaning up after shim disconnected" id=2a7f21e1600a1b9dc8bd1c2bf375f1cab18420eefd61c658967cf85c8960b42c namespace=k8s.io
Jan 24 00:42:25.326031 containerd[1508]: time="2026-01-24T00:42:25.326031603Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:42:26.127795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a7f21e1600a1b9dc8bd1c2bf375f1cab18420eefd61c658967cf85c8960b42c-rootfs.mount: Deactivated successfully.
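The recurring "Node became not ready ... cni plugin not initialized" condition clears once a CNI configuration appears in /etc/cni/net.d, which the cilium-agent writes after it comes up; until then the containerd CRI plugin reports NetworkReady=false and the kubelet propagates it into the node condition above. A stdlib-only Go sketch of that presence check (the directory path is the conventional default):

```go
// Sketch: check whether any CNI network configuration has been installed yet.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println("no CNI config dir yet:", err)
		return
	}
	found := false
	for _, e := range entries {
		// The CRI plugin picks up .conf/.conflist (and legacy .json) files.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("directory exists but holds no CNI config; node stays NotReady")
	}
}
```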
Jan 24 00:42:26.148941 containerd[1508]: time="2026-01-24T00:42:26.147451192Z" level=info msg="CreateContainer within sandbox \"3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 24 00:42:26.173914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3118253743.mount: Deactivated successfully.
Jan 24 00:42:26.176893 containerd[1508]: time="2026-01-24T00:42:26.176769155Z" level=info msg="CreateContainer within sandbox \"3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bb83d9b71ae7b2320d9489cb9ccaf756871840b4fc0b5dd9dde36a97c4cfe3fc\""
Jan 24 00:42:26.179031 containerd[1508]: time="2026-01-24T00:42:26.178948509Z" level=info msg="StartContainer for \"bb83d9b71ae7b2320d9489cb9ccaf756871840b4fc0b5dd9dde36a97c4cfe3fc\""
Jan 24 00:42:26.221640 systemd[1]: Started cri-containerd-bb83d9b71ae7b2320d9489cb9ccaf756871840b4fc0b5dd9dde36a97c4cfe3fc.scope - libcontainer container bb83d9b71ae7b2320d9489cb9ccaf756871840b4fc0b5dd9dde36a97c4cfe3fc.
Jan 24 00:42:26.257071 systemd[1]: cri-containerd-bb83d9b71ae7b2320d9489cb9ccaf756871840b4fc0b5dd9dde36a97c4cfe3fc.scope: Deactivated successfully.
Jan 24 00:42:26.259180 containerd[1508]: time="2026-01-24T00:42:26.258857955Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a9ff615_c0f3_49c6_8882_2fc147ad7d72.slice/cri-containerd-bb83d9b71ae7b2320d9489cb9ccaf756871840b4fc0b5dd9dde36a97c4cfe3fc.scope/memory.events\": no such file or directory"
Jan 24 00:42:26.261999 containerd[1508]: time="2026-01-24T00:42:26.261813530Z" level=info msg="StartContainer for \"bb83d9b71ae7b2320d9489cb9ccaf756871840b4fc0b5dd9dde36a97c4cfe3fc\" returns successfully"
Jan 24 00:42:26.284184 containerd[1508]: time="2026-01-24T00:42:26.283964166Z" level=info msg="shim disconnected" id=bb83d9b71ae7b2320d9489cb9ccaf756871840b4fc0b5dd9dde36a97c4cfe3fc namespace=k8s.io
Jan 24 00:42:26.284184 containerd[1508]: time="2026-01-24T00:42:26.284039909Z" level=warning msg="cleaning up after shim disconnected" id=bb83d9b71ae7b2320d9489cb9ccaf756871840b4fc0b5dd9dde36a97c4cfe3fc namespace=k8s.io
Jan 24 00:42:26.284184 containerd[1508]: time="2026-01-24T00:42:26.284047099Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:42:26.791901 kubelet[2539]: E0124 00:42:26.791838 2539 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 24 00:42:27.128569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb83d9b71ae7b2320d9489cb9ccaf756871840b4fc0b5dd9dde36a97c4cfe3fc-rootfs.mount: Deactivated successfully.
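The *cgroupsv2.Manager.EventChan warning above is a benign race: the short-lived clean-cilium-state container's cgroup was removed before containerd could attach an inotify watch to its memory.events file. A sketch of such a watch with golang.org/x/sys/unix (the path is illustrative); it fails with the same "no such file or directory" once the scope directory is gone:

```go
// Sketch: inotify watch on a cgroup v2 memory.events file.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	fd, err := unix.InotifyInit()
	if err != nil {
		panic(err)
	}
	defer unix.Close(fd)

	// Illustrative path; for a container it would be the scope's directory,
	// e.g. .../cri-containerd-<id>.scope/memory.events as in the log.
	path := "/sys/fs/cgroup/kubepods.slice/memory.events"
	if _, err := unix.InotifyAddWatch(fd, path, unix.IN_MODIFY); err != nil {
		// This is the condition the containerd warning reports: the cgroup
		// vanished before the watch could be added.
		fmt.Println("failed to add inotify watch:", err)
		return
	}
	fmt.Println("watching", path)
}
```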
Jan 24 00:42:27.155867 containerd[1508]: time="2026-01-24T00:42:27.155646440Z" level=info msg="CreateContainer within sandbox \"3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 24 00:42:27.180034 containerd[1508]: time="2026-01-24T00:42:27.179785003Z" level=info msg="CreateContainer within sandbox \"3ff5d5260b84b5ac78e784a155d5c124feed6b7213b26533d7c13d713270261c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"35c7a1b844bb8463433d48ee86d9a0faa2d961092852419084efc773bbbed54a\""
Jan 24 00:42:27.183709 containerd[1508]: time="2026-01-24T00:42:27.183653154Z" level=info msg="StartContainer for \"35c7a1b844bb8463433d48ee86d9a0faa2d961092852419084efc773bbbed54a\""
Jan 24 00:42:27.223604 systemd[1]: Started cri-containerd-35c7a1b844bb8463433d48ee86d9a0faa2d961092852419084efc773bbbed54a.scope - libcontainer container 35c7a1b844bb8463433d48ee86d9a0faa2d961092852419084efc773bbbed54a.
Jan 24 00:42:27.256666 containerd[1508]: time="2026-01-24T00:42:27.256616593Z" level=info msg="StartContainer for \"35c7a1b844bb8463433d48ee86d9a0faa2d961092852419084efc773bbbed54a\" returns successfully"
Jan 24 00:42:27.693411 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 24 00:42:28.178465 kubelet[2539]: I0124 00:42:28.177939 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cxgpw" podStartSLOduration=7.17791905 podStartE2EDuration="7.17791905s" podCreationTimestamp="2026-01-24 00:42:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:42:28.17743235 +0000 UTC m=+156.624935409" watchObservedRunningTime="2026-01-24 00:42:28.17791905 +0000 UTC m=+156.625422109"
Jan 24 00:42:30.983639 systemd-networkd[1403]: lxc_health: Link UP
Jan 24 00:42:30.991830 systemd-networkd[1403]: lxc_health: Gained carrier
Jan 24 00:42:32.564497 systemd-networkd[1403]: lxc_health: Gained IPv6LL
Jan 24 00:42:32.932714 systemd[1]: run-containerd-runc-k8s.io-35c7a1b844bb8463433d48ee86d9a0faa2d961092852419084efc773bbbed54a-runc.EPB1wY.mount: Deactivated successfully.
Jan 24 00:42:37.425453 sshd[4299]: pam_unix(sshd:session): session closed for user core
Jan 24 00:42:37.432601 systemd[1]: sshd@23-157.180.47.226:22-20.161.92.111:41958.service: Deactivated successfully.
Jan 24 00:42:37.437821 systemd[1]: session-24.scope: Deactivated successfully.
Jan 24 00:42:37.440682 systemd-logind[1476]: Session 24 logged out. Waiting for processes to exit.
Jan 24 00:42:37.442720 systemd-logind[1476]: Removed session 24.
Jan 24 00:42:51.687171 containerd[1508]: time="2026-01-24T00:42:51.686851397Z" level=info msg="StopPodSandbox for \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\""
Jan 24 00:42:51.687171 containerd[1508]: time="2026-01-24T00:42:51.687032327Z" level=info msg="TearDown network for sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" successfully"
Jan 24 00:42:51.687171 containerd[1508]: time="2026-01-24T00:42:51.687060998Z" level=info msg="StopPodSandbox for \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" returns successfully"
Jan 24 00:42:51.688688 containerd[1508]: time="2026-01-24T00:42:51.688130627Z" level=info msg="RemovePodSandbox for \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\""
Jan 24 00:42:51.688688 containerd[1508]: time="2026-01-24T00:42:51.688174419Z" level=info msg="Forcibly stopping sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\""
Jan 24 00:42:51.688688 containerd[1508]: time="2026-01-24T00:42:51.688286704Z" level=info msg="TearDown network for sandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" successfully"
Jan 24 00:42:51.697456 containerd[1508]: time="2026-01-24T00:42:51.696607946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 24 00:42:51.697456 containerd[1508]: time="2026-01-24T00:42:51.696676210Z" level=info msg="RemovePodSandbox \"01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f\" returns successfully"
Jan 24 00:42:51.698093 containerd[1508]: time="2026-01-24T00:42:51.698031622Z" level=info msg="StopPodSandbox for \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\""
Jan 24 00:42:51.698237 containerd[1508]: time="2026-01-24T00:42:51.698194869Z" level=info msg="TearDown network for sandbox \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\" successfully"
Jan 24 00:42:51.698237 containerd[1508]: time="2026-01-24T00:42:51.698226061Z" level=info msg="StopPodSandbox for \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\" returns successfully"
Jan 24 00:42:51.699029 containerd[1508]: time="2026-01-24T00:42:51.698830198Z" level=info msg="RemovePodSandbox for \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\""
Jan 24 00:42:51.699029 containerd[1508]: time="2026-01-24T00:42:51.698866510Z" level=info msg="Forcibly stopping sandbox \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\""
Jan 24 00:42:51.699029 containerd[1508]: time="2026-01-24T00:42:51.698960464Z" level=info msg="TearDown network for sandbox \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\" successfully"
Jan 24 00:42:51.704438 containerd[1508]: time="2026-01-24T00:42:51.704371904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
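The StopPodSandbox/RemovePodSandbox pairs above are the kubelet's periodic sandbox garbage collection speaking CRI to containerd. A sketch of the same two calls over gRPC, assuming the default containerd socket path; the sandbox id is taken from the log:

```go
// Sketch: issue StopPodSandbox then RemovePodSandbox via the CRI runtime API.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: containerd's CRI endpoint at the default socket path.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	id := "01c764e6859236d507b26f95a1a4113493572585ad166493befeccf016443c3f"
	// Stop tears down the network namespace; Remove deletes the sandbox record.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		panic(err)
	}
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
		panic(err)
	}
	fmt.Println("sandbox stopped and removed:", id)
}
```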
Jan 24 00:42:51.704532 containerd[1508]: time="2026-01-24T00:42:51.704482389Z" level=info msg="RemovePodSandbox \"8889bc239ddb5a99ec179bc95d3f16c705f985493fd3b21ac5fe2ba0affc7e5e\" returns successfully"
Jan 24 00:43:27.567960 systemd[1]: cri-containerd-7feb160ecc5a4cf383be2e749bb7e900dcbea613eddfc2d564447ccce7c976ce.scope: Deactivated successfully.
Jan 24 00:43:27.568540 systemd[1]: cri-containerd-7feb160ecc5a4cf383be2e749bb7e900dcbea613eddfc2d564447ccce7c976ce.scope: Consumed 4.292s CPU time, 16.1M memory peak, 0B memory swap peak.
Jan 24 00:43:27.613952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7feb160ecc5a4cf383be2e749bb7e900dcbea613eddfc2d564447ccce7c976ce-rootfs.mount: Deactivated successfully.
Jan 24 00:43:27.620762 containerd[1508]: time="2026-01-24T00:43:27.620667098Z" level=info msg="shim disconnected" id=7feb160ecc5a4cf383be2e749bb7e900dcbea613eddfc2d564447ccce7c976ce namespace=k8s.io
Jan 24 00:43:27.620762 containerd[1508]: time="2026-01-24T00:43:27.620754263Z" level=warning msg="cleaning up after shim disconnected" id=7feb160ecc5a4cf383be2e749bb7e900dcbea613eddfc2d564447ccce7c976ce namespace=k8s.io
Jan 24 00:43:27.622015 containerd[1508]: time="2026-01-24T00:43:27.620770972Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:43:27.863449 kubelet[2539]: E0124 00:43:27.862949 2539 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55832->10.0.0.2:2379: read: connection timed out"
Jan 24 00:43:27.871720 systemd[1]: cri-containerd-505620a2b035aef438b29cb9b0f7cc05bf414a1ea4eea8dae89fef372ae0d5b1.scope: Deactivated successfully.
Jan 24 00:43:27.873434 systemd[1]: cri-containerd-505620a2b035aef438b29cb9b0f7cc05bf414a1ea4eea8dae89fef372ae0d5b1.scope: Consumed 2.414s CPU time, 16.0M memory peak, 0B memory swap peak.
Jan 24 00:43:27.912501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-505620a2b035aef438b29cb9b0f7cc05bf414a1ea4eea8dae89fef372ae0d5b1-rootfs.mount: Deactivated successfully.
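"Failed to update lease" means the kubelet could not renew its coordination.k8s.io Lease because the apiserver's read from etcd (10.0.0.2:2379) timed out; if the lease stays stale long enough, the node controller marks the node NotReady. The same Lease object can be read with client-go; a sketch, where the kubeconfig path is an assumption:

```go
// Sketch: read the node heartbeat Lease that the kubelet failed to renew.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: an admin kubeconfig at this illustrative path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node heartbeat leases live in kube-node-lease, named after the node.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.Background(), "ci-4081-3-6-n-a6966cf543", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A renew time far in the past indicates the stalled heartbeats seen above.
	fmt.Println("last renew:", lease.Spec.RenewTime)
}
```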
Jan 24 00:43:27.923263 containerd[1508]: time="2026-01-24T00:43:27.923162104Z" level=info msg="shim disconnected" id=505620a2b035aef438b29cb9b0f7cc05bf414a1ea4eea8dae89fef372ae0d5b1 namespace=k8s.io
Jan 24 00:43:27.923263 containerd[1508]: time="2026-01-24T00:43:27.923228251Z" level=warning msg="cleaning up after shim disconnected" id=505620a2b035aef438b29cb9b0f7cc05bf414a1ea4eea8dae89fef372ae0d5b1 namespace=k8s.io
Jan 24 00:43:27.923263 containerd[1508]: time="2026-01-24T00:43:27.923244230Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:43:28.290192 kubelet[2539]: I0124 00:43:28.289867 2539 scope.go:117] "RemoveContainer" containerID="505620a2b035aef438b29cb9b0f7cc05bf414a1ea4eea8dae89fef372ae0d5b1"
Jan 24 00:43:28.291921 kubelet[2539]: I0124 00:43:28.291593 2539 scope.go:117] "RemoveContainer" containerID="7feb160ecc5a4cf383be2e749bb7e900dcbea613eddfc2d564447ccce7c976ce"
Jan 24 00:43:28.293960 containerd[1508]: time="2026-01-24T00:43:28.293877005Z" level=info msg="CreateContainer within sandbox \"34a62355e7b94fc37dc948f5061330e92977605fac3fb75b09be976a09997d91\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 24 00:43:28.294242 containerd[1508]: time="2026-01-24T00:43:28.293470505Z" level=info msg="CreateContainer within sandbox \"7b03548a2244bed01d940629c4e06c60850d59d46052ba24e4980b467cd772bf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 24 00:43:28.317395 containerd[1508]: time="2026-01-24T00:43:28.316014989Z" level=info msg="CreateContainer within sandbox \"7b03548a2244bed01d940629c4e06c60850d59d46052ba24e4980b467cd772bf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0507bf0610ee88a8dcedd74b9ff3cf9a65c0c026104034a9b15874c224c51f86\""
Jan 24 00:43:28.318954 containerd[1508]: time="2026-01-24T00:43:28.318880766Z" level=info msg="StartContainer for \"0507bf0610ee88a8dcedd74b9ff3cf9a65c0c026104034a9b15874c224c51f86\""
Jan 24 00:43:28.321069 containerd[1508]: time="2026-01-24T00:43:28.319177670Z" level=info msg="CreateContainer within sandbox \"34a62355e7b94fc37dc948f5061330e92977605fac3fb75b09be976a09997d91\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1adedbd6e158b039cb3a7b3367d1ba7019ff8a0915fbe2f722adb06971e36873\""
Jan 24 00:43:28.323219 containerd[1508]: time="2026-01-24T00:43:28.323142020Z" level=info msg="StartContainer for \"1adedbd6e158b039cb3a7b3367d1ba7019ff8a0915fbe2f722adb06971e36873\""
Jan 24 00:43:28.397537 systemd[1]: Started cri-containerd-0507bf0610ee88a8dcedd74b9ff3cf9a65c0c026104034a9b15874c224c51f86.scope - libcontainer container 0507bf0610ee88a8dcedd74b9ff3cf9a65c0c026104034a9b15874c224c51f86.
Jan 24 00:43:28.400472 systemd[1]: Started cri-containerd-1adedbd6e158b039cb3a7b3367d1ba7019ff8a0915fbe2f722adb06971e36873.scope - libcontainer container 1adedbd6e158b039cb3a7b3367d1ba7019ff8a0915fbe2f722adb06971e36873.
Jan 24 00:43:28.490262 containerd[1508]: time="2026-01-24T00:43:28.489698684Z" level=info msg="StartContainer for \"1adedbd6e158b039cb3a7b3367d1ba7019ff8a0915fbe2f722adb06971e36873\" returns successfully"
Jan 24 00:43:28.496149 containerd[1508]: time="2026-01-24T00:43:28.496093665Z" level=info msg="StartContainer for \"0507bf0610ee88a8dcedd74b9ff3cf9a65c0c026104034a9b15874c224c51f86\" returns successfully"
Jan 24 00:43:30.806869 kubelet[2539]: E0124 00:43:30.806667 2539 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55640->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-a6966cf543.188d84086e51b2bd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-a6966cf543,UID:c6366daadf73f8b939916777fa13e28a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-a6966cf543,},FirstTimestamp:2026-01-24 00:43:20.371925693 +0000 UTC m=+208.819428722,LastTimestamp:2026-01-24 00:43:20.371925693 +0000 UTC m=+208.819428722,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-a6966cf543,}"