Jan 17 00:18:56.096259 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:18:56.096280 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:18:56.096290 kernel: BIOS-provided physical RAM map:
Jan 17 00:18:56.096295 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:18:56.096299 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Jan 17 00:18:56.096303 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Jan 17 00:18:56.096309 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Jan 17 00:18:56.096313 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Jan 17 00:18:56.096318 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Jan 17 00:18:56.096322 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Jan 17 00:18:56.096327 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 17 00:18:56.096334 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 17 00:18:56.096338 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Jan 17 00:18:56.096343 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Jan 17 00:18:56.096348 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 17 00:18:56.096353 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 00:18:56.096360 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 17 00:18:56.096365 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Jan 17 00:18:56.096369 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 17 00:18:56.096374 kernel: NX (Execute Disable) protection: active
Jan 17 00:18:56.096379 kernel: APIC: Static calls initialized
Jan 17 00:18:56.096384 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 17 00:18:56.096388 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e845198
Jan 17 00:18:56.096393 kernel: efi: Remove mem135: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 17 00:18:56.096398 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 17 00:18:56.096403 kernel: SMBIOS 3.0.0 present.
Jan 17 00:18:56.096408 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 17 00:18:56.096412 kernel: Hypervisor detected: KVM
Jan 17 00:18:56.096419 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:18:56.096424 kernel: kvm-clock: using sched offset of 12412859281 cycles
Jan 17 00:18:56.096429 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:18:56.096434 kernel: tsc: Detected 2399.998 MHz processor
Jan 17 00:18:56.096439 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:18:56.096444 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:18:56.096449 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Jan 17 00:18:56.096454 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:18:56.096459 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:18:56.096466 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Jan 17 00:18:56.096471 kernel: Using GB pages for direct mapping
Jan 17 00:18:56.096476 kernel: Secure boot disabled
Jan 17 00:18:56.096484 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:18:56.096489 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Jan 17 00:18:56.096494 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 17 00:18:56.096499 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:18:56.096507 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:18:56.096512 kernel: ACPI: FACS 0x000000007FBDD000 000040
Jan 17 00:18:56.096517 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:18:56.096522 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:18:56.096527 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:18:56.096532 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:18:56.096537 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 17 00:18:56.096544 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Jan 17 00:18:56.096549 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Jan 17 00:18:56.096554 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Jan 17 00:18:56.096559 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Jan 17 00:18:56.096565 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Jan 17 00:18:56.096569 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Jan 17 00:18:56.096574 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Jan 17 00:18:56.096579 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Jan 17 00:18:56.096584 kernel: No NUMA configuration found
Jan 17 00:18:56.096592 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Jan 17 00:18:56.096597 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Jan 17 00:18:56.096602 kernel: Zone ranges:
Jan 17 00:18:56.096608 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:18:56.096613 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 00:18:56.096617 kernel:   Normal   [mem 0x0000000100000000-0x0000000179ffffff]
Jan 17 00:18:56.096623 kernel: Movable zone start for each node
Jan 17 00:18:56.096628 kernel: Early memory node ranges
Jan 17 00:18:56.096633 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:18:56.096638 kernel:   node   0: [mem 0x0000000000100000-0x000000007ed3efff]
Jan 17 00:18:56.096645 kernel:   node   0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Jan 17 00:18:56.096650 kernel:   node   0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Jan 17 00:18:56.096656 kernel:   node   0: [mem 0x0000000100000000-0x0000000179ffffff]
Jan 17 00:18:56.096660 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Jan 17 00:18:56.096666 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:18:56.096671 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:18:56.096676 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 17 00:18:56.096681 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 17 00:18:56.096686 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Jan 17 00:18:56.096693 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 17 00:18:56.096698 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 00:18:56.096704 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:18:56.096709 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:18:56.096714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 00:18:56.096719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:18:56.096724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:18:56.096729 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:18:56.096734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:18:56.096741 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:18:56.096746 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:18:56.096751 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:18:56.096756 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:18:56.096761 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 17 00:18:56.096766 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:18:56.096772 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:18:56.096777 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:18:56.096782 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:18:56.096789 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:18:56.096794 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:18:56.096801 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 00:18:56.096809 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:18:56.096819 kernel: random: crng init done
Jan 17 00:18:56.096829 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:18:56.096837 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:18:56.096862 kernel: Fallback order for Node 0: 0
Jan 17 00:18:56.096880 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Jan 17 00:18:56.096888 kernel: Policy zone: Normal
Jan 17 00:18:56.096896 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:18:56.096901 kernel: software IO TLB: area num 2.
Jan 17 00:18:56.096907 kernel: Memory: 3827832K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 263132K reserved, 0K cma-reserved)
Jan 17 00:18:56.096912 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:18:56.096917 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:18:56.096922 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:18:56.096927 kernel: Dynamic Preempt: voluntary
Jan 17 00:18:56.096935 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:18:56.096940 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:18:56.096945 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:18:56.096951 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:18:56.096963 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:18:56.096974 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:18:56.096982 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:18:56.096990 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:18:56.096996 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:18:56.097001 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:18:56.097006 kernel: Console: colour dummy device 80x25
Jan 17 00:18:56.097012 kernel: printk: console [tty0] enabled
Jan 17 00:18:56.097020 kernel: printk: console [ttyS0] enabled
Jan 17 00:18:56.097025 kernel: ACPI: Core revision 20230628
Jan 17 00:18:56.097030 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 00:18:56.097036 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:18:56.097042 kernel: x2apic enabled
Jan 17 00:18:56.097050 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:18:56.097058 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 00:18:56.097063 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 00:18:56.097069 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
Jan 17 00:18:56.097074 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 00:18:56.097079 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 17 00:18:56.097085 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 17 00:18:56.097090 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:18:56.097095 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 17 00:18:56.097103 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 00:18:56.097108 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 00:18:56.097114 kernel: active return thunk: srso_alias_return_thunk
Jan 17 00:18:56.097119 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Jan 17 00:18:56.097128 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 17 00:18:56.097133 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:18:56.097139 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:18:56.097144 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:18:56.097149 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:18:56.097157 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 17 00:18:56.097163 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 17 00:18:56.097168 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 17 00:18:56.097173 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 17 00:18:56.097179 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 17 00:18:56.097184 kernel: x86/fpu: xstate_offset[5]:  832, xstate_sizes[5]:   64
Jan 17 00:18:56.097189 kernel: x86/fpu: xstate_offset[6]:  896, xstate_sizes[6]:  512
Jan 17 00:18:56.097195 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 17 00:18:56.097203 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]:    8
Jan 17 00:18:56.097211 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 17 00:18:56.097217 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:18:56.097222 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:18:56.097228 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:18:56.097233 kernel: landlock: Up and running.
Jan 17 00:18:56.097238 kernel: SELinux:  Initializing.
Jan 17 00:18:56.097243 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:18:56.097249 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:18:56.097254 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Jan 17 00:18:56.097262 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:18:56.097267 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:18:56.097273 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:18:56.097281 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 17 00:18:56.097288 kernel: ... version:                0
Jan 17 00:18:56.097293 kernel: ... bit width:              48
Jan 17 00:18:56.097299 kernel: ... generic registers:      6
Jan 17 00:18:56.097304 kernel: ... value mask:             0000ffffffffffff
Jan 17 00:18:56.097309 kernel: ... max period:             00007fffffffffff
Jan 17 00:18:56.097317 kernel: ... fixed-purpose events:   0
Jan 17 00:18:56.097322 kernel: ... event mask:             000000000000003f
Jan 17 00:18:56.097328 kernel: signal: max sigframe size: 3376
Jan 17 00:18:56.097333 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:18:56.097338 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 17 00:18:56.097344 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:18:56.097349 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:18:56.097354 kernel: .... node  #0, CPUs:      #1
Jan 17 00:18:56.097359 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:18:56.097367 kernel: smpboot: Max logical packages: 1
Jan 17 00:18:56.097372 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS)
Jan 17 00:18:56.097377 kernel: devtmpfs: initialized
Jan 17 00:18:56.097382 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:18:56.097388 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Jan 17 00:18:56.097393 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:18:56.097399 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:18:56.097404 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:18:56.097409 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:18:56.097417 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:18:56.097422 kernel: audit: type=2000 audit(1768609134.728:1): state=initialized audit_enabled=0 res=1
Jan 17 00:18:56.097427 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:18:56.097432 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:18:56.097437 kernel: cpuidle: using governor menu
Jan 17 00:18:56.097443 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:18:56.097448 kernel: dca service started, version 1.12.1
Jan 17 00:18:56.097453 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 17 00:18:56.097458 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:18:56.097466 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:18:56.097471 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:18:56.097477 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:18:56.097482 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:18:56.097487 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:18:56.097492 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:18:56.097498 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:18:56.097503 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:18:56.097508 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:18:56.097516 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:18:56.097521 kernel: ACPI: Interpreter enabled
Jan 17 00:18:56.097526 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:18:56.097531 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:18:56.097536 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:18:56.097542 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:18:56.097547 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 00:18:56.097553 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:18:56.097712 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:18:56.097830 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 17 00:18:56.097974 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 17 00:18:56.097982 kernel: PCI host bridge to bus 0000:00
Jan 17 00:18:56.098088 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:18:56.098183 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:18:56.098272 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:18:56.098364 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Jan 17 00:18:56.098450 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 17 00:18:56.098538 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Jan 17 00:18:56.098625 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:18:56.098738 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 00:18:56.098856 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 17 00:18:56.098978 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Jan 17 00:18:56.099079 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Jan 17 00:18:56.099176 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Jan 17 00:18:56.099272 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 00:18:56.099368 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 17 00:18:56.099464 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:18:56.099570 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 17 00:18:56.099669 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Jan 17 00:18:56.099772 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 17 00:18:56.099895 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Jan 17 00:18:56.099997 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 17 00:18:56.100092 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Jan 17 00:18:56.100192 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 17 00:18:56.100291 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Jan 17 00:18:56.100392 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 17 00:18:56.100487 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Jan 17 00:18:56.100589 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 17 00:18:56.100689 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Jan 17 00:18:56.100792 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 17 00:18:56.100925 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Jan 17 00:18:56.101033 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 17 00:18:56.101128 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Jan 17 00:18:56.101229 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 17 00:18:56.101324 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Jan 17 00:18:56.101427 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 00:18:56.101522 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 00:18:56.101625 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 00:18:56.101721 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Jan 17 00:18:56.101815 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Jan 17 00:18:56.101963 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 00:18:56.102059 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Jan 17 00:18:56.102166 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 00:18:56.102268 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Jan 17 00:18:56.102366 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Jan 17 00:18:56.102465 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 17 00:18:56.102563 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 17 00:18:56.102658 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 17 00:18:56.102752 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 17 00:18:56.102870 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 17 00:18:56.102981 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Jan 17 00:18:56.103078 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 17 00:18:56.103173 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 17 00:18:56.103279 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 17 00:18:56.103378 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Jan 17 00:18:56.103477 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Jan 17 00:18:56.103574 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 17 00:18:56.103671 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 17 00:18:56.103766 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 17 00:18:56.103900 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 17 00:18:56.104003 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Jan 17 00:18:56.104100 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 17 00:18:56.104213 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 17 00:18:56.104331 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 17 00:18:56.104437 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Jan 17 00:18:56.104537 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Jan 17 00:18:56.104633 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 17 00:18:56.104728 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 17 00:18:56.104824 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 17 00:18:56.105013 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 17 00:18:56.105114 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Jan 17 00:18:56.105218 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Jan 17 00:18:56.105314 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 17 00:18:56.105409 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 17 00:18:56.105503 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 17 00:18:56.105509 kernel: acpiphp: Slot [0] registered
Jan 17 00:18:56.105615 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 00:18:56.105714 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Jan 17 00:18:56.105813 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 17 00:18:56.105947 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 17 00:18:56.106045 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 17 00:18:56.106140 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 17 00:18:56.106234 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 17 00:18:56.106241 kernel: acpiphp: Slot [0-2] registered
Jan 17 00:18:56.106337 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 17 00:18:56.106431 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 17 00:18:56.106526 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 17 00:18:56.106535 kernel: acpiphp: Slot [0-3] registered
Jan 17 00:18:56.106632 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 17 00:18:56.106728 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 17 00:18:56.106823 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 17 00:18:56.106829 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:18:56.106835 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:18:56.108858 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:18:56.108880 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:18:56.108890 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 00:18:56.108896 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 00:18:56.108901 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 00:18:56.108907 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 00:18:56.108912 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 00:18:56.108918 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 00:18:56.108924 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 00:18:56.108930 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 00:18:56.108935 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 00:18:56.108944 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 00:18:56.108949 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 00:18:56.108955 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 00:18:56.108960 kernel: iommu: Default domain type: Translated
Jan 17 00:18:56.108966 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:18:56.108971 kernel: efivars: Registered efivars operations
Jan 17 00:18:56.108977 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:18:56.108982 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:18:56.108988 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Jan 17 00:18:56.108996 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Jan 17 00:18:56.109001 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Jan 17 00:18:56.109007 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Jan 17 00:18:56.109129 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 00:18:56.109228 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 00:18:56.109324 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:18:56.109331 kernel: vgaarb: loaded
Jan 17 00:18:56.109337 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 00:18:56.109342 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 00:18:56.109350 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:18:56.109356 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:18:56.109361 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:18:56.109367 kernel: pnp: PnP ACPI init
Jan 17 00:18:56.109474 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 17 00:18:56.109482 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 00:18:56.109487 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:18:56.109493 kernel: NET: Registered PF_INET protocol family
Jan 17 00:18:56.109514 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:18:56.109522 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:18:56.109528 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:18:56.109534 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:18:56.109540 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 00:18:56.109545 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 00:18:56.109551 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:18:56.109557 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:18:56.109562 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:18:56.109570 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:18:56.109677 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Jan 17 00:18:56.109781 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Jan 17 00:18:56.109909 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 17 00:18:56.110008 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 17 00:18:56.110107 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 17 00:18:56.110204 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 17 00:18:56.110303 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 17 00:18:56.110400 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 17 00:18:56.110502 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Jan 17 00:18:56.110599 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 17 00:18:56.110698 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 17 00:18:56.110794 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 17 00:18:56.110908 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 17 00:18:56.111003 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 17 00:18:56.111102 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 17 00:18:56.111199 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 17 00:18:56.111294 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 17 00:18:56.111392 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 17 00:18:56.111488 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 17 00:18:56.111589 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 17 00:18:56.111686 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 17 00:18:56.111782 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 17 00:18:56.113935 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 17 00:18:56.114050 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 17 00:18:56.114149 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 17 00:18:56.114253 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Jan 17 00:18:56.114375 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 17 00:18:56.114479 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 17 00:18:56.114578 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 17 00:18:56.114673 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 17 00:18:56.114770 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 17 00:18:56.116900 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 17 00:18:56.117008 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 17 00:18:56.117111 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 17 00:18:56.117209 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 17 00:18:56.117306 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 17 00:18:56.117433 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 17 00:18:56.117540 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 17 00:18:56.117637 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan
17 00:18:56.117725 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 00:18:56.117820 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 00:18:56.117973 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 17 00:18:56.118062 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 17 00:18:56.118149 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Jan 17 00:18:56.118254 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Jan 17 00:18:56.118348 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Jan 17 00:18:56.118448 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Jan 17 00:18:56.118551 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Jan 17 00:18:56.118650 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Jan 17 00:18:56.118749 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Jan 17 00:18:56.118862 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Jan 17 00:18:56.118964 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Jan 17 00:18:56.119063 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Jan 17 00:18:56.119159 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Jan 17 00:18:56.119257 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Jan 17 00:18:56.119350 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Jan 17 00:18:56.119443 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Jan 17 00:18:56.119544 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Jan 17 00:18:56.119637 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Jan 17 00:18:56.119729 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Jan 17 00:18:56.119832 kernel: pci_bus 0000:09: resource 0 
[io 0x3000-0x3fff] Jan 17 00:18:56.122939 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Jan 17 00:18:56.123069 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Jan 17 00:18:56.123082 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 00:18:56.123089 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:18:56.123095 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 00:18:56.123101 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Jan 17 00:18:56.123107 kernel: Initialise system trusted keyrings Jan 17 00:18:56.123120 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:18:56.123128 kernel: Key type asymmetric registered Jan 17 00:18:56.123134 kernel: Asymmetric key parser 'x509' registered Jan 17 00:18:56.123140 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:18:56.123145 kernel: io scheduler mq-deadline registered Jan 17 00:18:56.123151 kernel: io scheduler kyber registered Jan 17 00:18:56.123156 kernel: io scheduler bfq registered Jan 17 00:18:56.123267 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 17 00:18:56.123376 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 17 00:18:56.123480 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 17 00:18:56.123577 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 17 00:18:56.123674 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 17 00:18:56.123769 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 17 00:18:56.123898 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 17 00:18:56.123998 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 17 00:18:56.124095 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 17 00:18:56.124194 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 17 00:18:56.124294 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 17 
00:18:56.124395 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 17 00:18:56.124511 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 17 00:18:56.124609 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 17 00:18:56.124706 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 17 00:18:56.124802 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 17 00:18:56.124809 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 00:18:56.127025 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 17 00:18:56.127139 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 17 00:18:56.127147 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:18:56.127153 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 17 00:18:56.127159 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:18:56.127165 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 00:18:56.127170 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 00:18:56.127176 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 00:18:56.127185 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 00:18:56.127288 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 17 00:18:56.127299 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 00:18:56.127389 kernel: rtc_cmos 00:03: registered as rtc0 Jan 17 00:18:56.127480 kernel: rtc_cmos 00:03: setting system clock to 2026-01-17T00:18:55 UTC (1768609135) Jan 17 00:18:56.127572 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 17 00:18:56.127578 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 17 00:18:56.127585 kernel: efifb: probing for efifb Jan 17 00:18:56.127591 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Jan 17 00:18:56.127596 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 17 
00:18:56.127605 kernel: efifb: scrolling: redraw Jan 17 00:18:56.127610 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 00:18:56.127616 kernel: Console: switching to colour frame buffer device 160x50 Jan 17 00:18:56.127622 kernel: fb0: EFI VGA frame buffer device Jan 17 00:18:56.127628 kernel: pstore: Using crash dump compression: deflate Jan 17 00:18:56.127633 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 00:18:56.127639 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:18:56.127644 kernel: Segment Routing with IPv6 Jan 17 00:18:56.127650 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:18:56.127658 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:18:56.127664 kernel: Key type dns_resolver registered Jan 17 00:18:56.127669 kernel: IPI shorthand broadcast: enabled Jan 17 00:18:56.127675 kernel: sched_clock: Marking stable (1332010857, 190352643)->(1564687733, -42324233) Jan 17 00:18:56.127681 kernel: registered taskstats version 1 Jan 17 00:18:56.127686 kernel: Loading compiled-in X.509 certificates Jan 17 00:18:56.127692 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:18:56.127698 kernel: Key type .fscrypt registered Jan 17 00:18:56.127703 kernel: Key type fscrypt-provisioning registered Jan 17 00:18:56.127711 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 00:18:56.127717 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:18:56.127722 kernel: ima: No architecture policies found Jan 17 00:18:56.127728 kernel: clk: Disabling unused clocks Jan 17 00:18:56.127733 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:18:56.127739 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:18:56.127745 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:18:56.127750 kernel: Run /init as init process Jan 17 00:18:56.127756 kernel: with arguments: Jan 17 00:18:56.127764 kernel: /init Jan 17 00:18:56.127770 kernel: with environment: Jan 17 00:18:56.127775 kernel: HOME=/ Jan 17 00:18:56.127781 kernel: TERM=linux Jan 17 00:18:56.127788 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:18:56.127796 systemd[1]: Detected virtualization kvm. Jan 17 00:18:56.127802 systemd[1]: Detected architecture x86-64. Jan 17 00:18:56.127810 systemd[1]: Running in initrd. Jan 17 00:18:56.127816 systemd[1]: No hostname configured, using default hostname. Jan 17 00:18:56.127822 systemd[1]: Hostname set to . Jan 17 00:18:56.127828 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:18:56.127834 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:18:56.127851 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:18:56.127857 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:18:56.127864 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jan 17 00:18:56.127879 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:18:56.127885 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:18:56.127891 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:18:56.127898 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:18:56.127904 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:18:56.127910 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:18:56.127916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:18:56.127927 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:18:56.127933 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:18:56.127939 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:18:56.127945 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:18:56.127951 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:18:56.127956 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:18:56.127962 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:18:56.127968 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:18:56.127977 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:18:56.127983 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:18:56.127989 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:18:56.127994 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 17 00:18:56.128000 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:18:56.128006 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:18:56.128012 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:18:56.128018 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:18:56.128024 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:18:56.128032 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:18:56.128057 systemd-journald[188]: Collecting audit messages is disabled. Jan 17 00:18:56.128073 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:18:56.128079 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:18:56.128088 systemd-journald[188]: Journal started Jan 17 00:18:56.128102 systemd-journald[188]: Runtime Journal (/run/log/journal/f6baff18ea694b77bd1dcc842800ce54) is 8.0M, max 76.3M, 68.3M free. Jan 17 00:18:56.129618 systemd-modules-load[189]: Inserted module 'overlay' Jan 17 00:18:56.134362 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:18:56.140283 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:18:56.143369 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:18:56.154931 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:18:56.155115 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:18:56.157865 kernel: Bridge firewalling registered Jan 17 00:18:56.157984 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 17 00:18:56.159903 systemd-modules-load[189]: Inserted module 'br_netfilter' Jan 17 00:18:56.161230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:18:56.162255 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:18:56.163403 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:18:56.177045 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:18:56.178323 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:18:56.180978 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:18:56.182031 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:18:56.195041 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:18:56.196286 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:18:56.203416 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:18:56.205980 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:18:56.207902 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 17 00:18:56.212427 dracut-cmdline[222]: dracut-dracut-053 Jan 17 00:18:56.214821 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:18:56.241989 systemd-resolved[223]: Positive Trust Anchors: Jan 17 00:18:56.242002 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:18:56.242024 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:18:56.246110 systemd-resolved[223]: Defaulting to hostname 'linux'. Jan 17 00:18:56.248284 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:18:56.248792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:18:56.277871 kernel: SCSI subsystem initialized Jan 17 00:18:56.285908 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:18:56.294871 kernel: iscsi: registered transport (tcp) Jan 17 00:18:56.328028 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:18:56.328122 kernel: QLogic iSCSI HBA Driver Jan 17 00:18:56.378946 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 17 00:18:56.386079 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:18:56.427199 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:18:56.427289 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:18:56.430986 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:18:56.490901 kernel: raid6: avx512x4 gen() 24291 MB/s Jan 17 00:18:56.508894 kernel: raid6: avx512x2 gen() 32416 MB/s Jan 17 00:18:56.526888 kernel: raid6: avx512x1 gen() 42915 MB/s Jan 17 00:18:56.544900 kernel: raid6: avx2x4 gen() 46492 MB/s Jan 17 00:18:56.562904 kernel: raid6: avx2x2 gen() 47869 MB/s Jan 17 00:18:56.581669 kernel: raid6: avx2x1 gen() 38427 MB/s Jan 17 00:18:56.581748 kernel: raid6: using algorithm avx2x2 gen() 47869 MB/s Jan 17 00:18:56.600721 kernel: raid6: .... xor() 37281 MB/s, rmw enabled Jan 17 00:18:56.600807 kernel: raid6: using avx512x2 recovery algorithm Jan 17 00:18:56.648977 kernel: xor: automatically using best checksumming function avx Jan 17 00:18:56.805892 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:18:56.820183 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:18:56.827130 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:18:56.837353 systemd-udevd[407]: Using default interface naming scheme 'v255'. Jan 17 00:18:56.841302 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:18:56.849136 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:18:56.862915 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Jan 17 00:18:56.904819 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:18:56.913014 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 17 00:18:57.011629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:18:57.020129 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:18:57.053651 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:18:57.058346 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:18:57.060500 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:18:57.062423 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:18:57.071383 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:18:57.093837 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:18:57.102239 kernel: scsi host0: Virtio SCSI HBA Jan 17 00:18:57.104901 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 17 00:18:57.121862 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:18:57.145674 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:18:57.145796 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:18:57.147189 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:18:57.147530 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:18:57.149178 kernel: libata version 3.00 loaded. Jan 17 00:18:57.147663 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:18:57.149582 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:18:57.158095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 17 00:18:57.175200 kernel: ACPI: bus type USB registered Jan 17 00:18:57.175221 kernel: usbcore: registered new interface driver usbfs Jan 17 00:18:57.175231 kernel: usbcore: registered new interface driver hub Jan 17 00:18:57.175239 kernel: usbcore: registered new device driver usb Jan 17 00:18:57.175247 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 00:18:57.175254 kernel: AES CTR mode by8 optimization enabled Jan 17 00:18:57.175262 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 00:18:57.175612 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 00:18:57.175649 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 00:18:57.176001 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 00:18:57.159544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:18:57.185240 kernel: scsi host1: ahci Jan 17 00:18:57.185390 kernel: scsi host2: ahci Jan 17 00:18:57.159648 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:18:57.191907 kernel: scsi host3: ahci Jan 17 00:18:57.194288 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:18:57.214862 kernel: scsi host4: ahci Jan 17 00:18:57.216864 kernel: scsi host5: ahci Jan 17 00:18:57.223700 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 17 00:18:57.226322 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 17 00:18:57.226540 kernel: scsi host6: ahci Jan 17 00:18:57.226562 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 17 00:18:57.232869 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 17 00:18:57.233079 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 42 Jan 17 00:18:57.235022 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 42 Jan 17 00:18:57.235058 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 42 Jan 17 00:18:57.237906 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 42 Jan 17 00:18:57.237955 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 42 Jan 17 00:18:57.237964 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 42 Jan 17 00:18:57.237973 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 17 00:18:57.238139 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 17 00:18:57.238259 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 17 00:18:57.242221 kernel: hub 1-0:1.0: USB hub found Jan 17 00:18:57.242448 kernel: hub 1-0:1.0: 4 ports detected Jan 17 00:18:57.251414 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:18:57.257453 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 17 00:18:57.257695 kernel: hub 2-0:1.0: USB hub found Jan 17 00:18:57.257830 kernel: hub 2-0:1.0: 4 ports detected Jan 17 00:18:57.268635 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 00:18:57.496970 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 17 00:18:57.552872 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 00:18:57.552985 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 00:18:57.558342 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 00:18:57.558879 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 00:18:57.567917 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 00:18:57.567973 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 17 00:18:57.571937 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 00:18:57.575985 kernel: ata1.00: applying bridge limits Jan 17 00:18:57.579380 kernel: ata1.00: configured for UDMA/100 Jan 17 00:18:57.588941 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 00:18:57.625582 kernel: sd 0:0:0:0: Power-on or device reset occurred Jan 17 00:18:57.626965 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Jan 17 00:18:57.632940 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 00:18:57.635960 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 17 00:18:57.636429 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 00:18:57.653906 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 00:18:57.653980 kernel: GPT:17805311 != 160006143 Jan 17 00:18:57.654001 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:18:57.660545 kernel: GPT:17805311 != 160006143 Jan 17 00:18:57.660612 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 17 00:18:57.664910 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 00:18:57.664951 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:18:57.673693 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 00:18:57.692168 kernel: usbcore: registered new interface driver usbhid Jan 17 00:18:57.692262 kernel: usbhid: USB HID core driver Jan 17 00:18:57.702085 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 00:18:57.702559 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:18:57.714908 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 17 00:18:57.736954 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 17 00:18:57.750828 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (448) Jan 17 00:18:57.750910 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (457) Jan 17 00:18:57.757924 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jan 17 00:18:57.770245 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 17 00:18:57.779678 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 17 00:18:57.792257 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 17 00:18:57.792634 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 17 00:18:57.797316 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 17 00:18:57.802965 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:18:57.808680 disk-uuid[579]: Primary Header is updated. Jan 17 00:18:57.808680 disk-uuid[579]: Secondary Entries is updated. 
Jan 17 00:18:57.808680 disk-uuid[579]: Secondary Header is updated. Jan 17 00:18:57.813933 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:18:57.820869 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:18:58.828002 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:18:58.831559 disk-uuid[580]: The operation has completed successfully. Jan 17 00:18:58.925168 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:18:58.925353 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:18:58.948103 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:18:58.967483 sh[597]: Success Jan 17 00:18:58.994936 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 00:18:59.076203 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:18:59.092042 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:18:59.098626 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:18:59.142210 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:18:59.142287 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:18:59.153075 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:18:59.153125 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:18:59.162238 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:18:59.175916 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 00:18:59.179059 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:18:59.182207 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:18:59.187100 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 17 00:18:59.192087 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:18:59.227247 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:18:59.227335 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:18:59.231424 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:18:59.248528 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:18:59.248612 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:18:59.269593 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:18:59.277795 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:18:59.286566 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:18:59.294107 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:18:59.424978 ignition[704]: Ignition 2.19.0 Jan 17 00:18:59.425000 ignition[704]: Stage: fetch-offline Jan 17 00:18:59.425067 ignition[704]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:18:59.429327 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:18:59.425083 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:18:59.425213 ignition[704]: parsed url from cmdline: "" Jan 17 00:18:59.425220 ignition[704]: no config URL provided Jan 17 00:18:59.425229 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:18:59.425244 ignition[704]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:18:59.425253 ignition[704]: failed to fetch config: resource requires networking Jan 17 00:18:59.426188 ignition[704]: Ignition finished successfully Jan 17 00:18:59.438479 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 17 00:18:59.445084 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:18:59.491468 systemd-networkd[785]: lo: Link UP Jan 17 00:18:59.491480 systemd-networkd[785]: lo: Gained carrier Jan 17 00:18:59.494010 systemd-networkd[785]: Enumeration completed Jan 17 00:18:59.494607 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:18:59.495191 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:18:59.495195 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:18:59.496100 systemd[1]: Reached target network.target - Network. Jan 17 00:18:59.496185 systemd-networkd[785]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:18:59.496188 systemd-networkd[785]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:18:59.496807 systemd-networkd[785]: eth0: Link UP Jan 17 00:18:59.496811 systemd-networkd[785]: eth0: Gained carrier Jan 17 00:18:59.496818 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:18:59.500093 systemd-networkd[785]: eth1: Link UP Jan 17 00:18:59.500097 systemd-networkd[785]: eth1: Gained carrier Jan 17 00:18:59.500105 systemd-networkd[785]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:18:59.508953 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 17 00:18:59.530959 ignition[788]: Ignition 2.19.0 Jan 17 00:18:59.530990 ignition[788]: Stage: fetch Jan 17 00:18:59.531261 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:18:59.531283 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:18:59.531434 ignition[788]: parsed url from cmdline: "" Jan 17 00:18:59.531447 ignition[788]: no config URL provided Jan 17 00:18:59.531462 ignition[788]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:18:59.531481 ignition[788]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:18:59.531511 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 17 00:18:59.531763 ignition[788]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 17 00:18:59.540904 systemd-networkd[785]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 17 00:18:59.563914 systemd-networkd[785]: eth0: DHCPv4 address 46.62.250.181/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 17 00:18:59.731988 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 17 00:18:59.737806 ignition[788]: GET result: OK Jan 17 00:18:59.737963 ignition[788]: parsing config with SHA512: 2768ec4a837bb3dec2c02403f65cf51bf7d61853d28892e9873610c367536b4e65f982f7d1fab1de2c280c580ee48ac382e10abb0e2c5268d338e887a69bd21f Jan 17 00:18:59.743200 unknown[788]: fetched base config from "system" Jan 17 00:18:59.743221 unknown[788]: fetched base config from "system" Jan 17 00:18:59.744532 ignition[788]: fetch: fetch complete Jan 17 00:18:59.743232 unknown[788]: fetched user config from "hetzner" Jan 17 00:18:59.744546 ignition[788]: fetch: fetch passed Jan 17 00:18:59.744661 ignition[788]: Ignition finished successfully Jan 17 00:18:59.748611 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:18:59.756116 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 17 00:18:59.794801 ignition[795]: Ignition 2.19.0 Jan 17 00:18:59.794823 ignition[795]: Stage: kargs Jan 17 00:18:59.795166 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:18:59.795189 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:18:59.800456 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:18:59.796532 ignition[795]: kargs: kargs passed Jan 17 00:18:59.796614 ignition[795]: Ignition finished successfully Jan 17 00:18:59.809188 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:18:59.835305 ignition[802]: Ignition 2.19.0 Jan 17 00:18:59.835338 ignition[802]: Stage: disks Jan 17 00:18:59.835620 ignition[802]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:18:59.841106 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:18:59.835642 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:18:59.844095 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:18:59.837083 ignition[802]: disks: disks passed Jan 17 00:18:59.846554 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:18:59.837168 ignition[802]: Ignition finished successfully Jan 17 00:18:59.848607 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:18:59.850473 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:18:59.852063 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:18:59.860258 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:18:59.888972 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 00:18:59.893789 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:18:59.900974 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 17 00:18:59.993872 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 00:18:59.995239 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:18:59.996960 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:19:00.004074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:19:00.006542 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:19:00.007914 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 00:19:00.011114 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:19:00.012093 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:19:00.015949 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (819) Jan 17 00:19:00.023866 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:19:00.023890 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:19:00.023908 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:19:00.042272 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:19:00.042356 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:19:00.042216 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:19:00.049526 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:19:00.062057 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 17 00:19:00.097415 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:19:00.102649 coreos-metadata[821]: Jan 17 00:19:00.102 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 17 00:19:00.103863 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:19:00.106514 coreos-metadata[821]: Jan 17 00:19:00.104 INFO Fetch successful Jan 17 00:19:00.106514 coreos-metadata[821]: Jan 17 00:19:00.105 INFO wrote hostname ci-4081-3-6-n-9d03cc5a8b to /sysroot/etc/hostname Jan 17 00:19:00.108751 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:19:00.112558 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:19:00.118354 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:19:00.248114 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:19:00.255047 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:19:00.264161 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:19:00.283261 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:19:00.289769 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:19:00.319375 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:19:00.335377 ignition[937]: INFO : Ignition 2.19.0 Jan 17 00:19:00.335377 ignition[937]: INFO : Stage: mount Jan 17 00:19:00.337884 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:19:00.337884 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:19:00.337884 ignition[937]: INFO : mount: mount passed Jan 17 00:19:00.337884 ignition[937]: INFO : Ignition finished successfully Jan 17 00:19:00.341093 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 17 00:19:00.348022 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:19:00.376125 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:19:00.406959 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948) Jan 17 00:19:00.414491 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:19:00.414541 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:19:00.422870 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:19:00.429030 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:19:00.429118 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:19:00.433937 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:19:00.468745 ignition[964]: INFO : Ignition 2.19.0 Jan 17 00:19:00.468745 ignition[964]: INFO : Stage: files Jan 17 00:19:00.469749 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:19:00.469749 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:19:00.470702 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:19:00.471967 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:19:00.472372 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:19:00.476427 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:19:00.476776 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:19:00.477340 unknown[964]: wrote ssh authorized keys file for user: core Jan 17 00:19:00.477864 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:19:00.479937 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:19:00.480542 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 17 00:19:00.718743 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:19:00.906430 systemd-networkd[785]: eth0: Gained IPv6LL Jan 17 00:19:01.024738 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:19:01.026687 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:19:01.026687 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 17 00:19:01.098234 systemd-networkd[785]: eth1: Gained IPv6LL Jan 17 00:19:01.412634 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:19:02.132770 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:19:02.134388 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:19:02.134388 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:19:02.134388 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:19:02.134388 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:19:02.134388 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:19:02.134388 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:19:02.134388 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:19:02.134388 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:19:02.134388 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:19:02.142786 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:19:02.142786 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:19:02.142786 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:19:02.142786 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:19:02.142786 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 17 00:19:02.517078 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 00:19:06.308997 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:19:06.308997 ignition[964]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 17 00:19:06.311215 ignition[964]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:19:06.311215 ignition[964]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:19:06.311215 ignition[964]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 17 00:19:06.311215 ignition[964]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 17 00:19:06.311215 ignition[964]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 17 00:19:06.311215 ignition[964]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 17 00:19:06.311215 ignition[964]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 17 00:19:06.311215 ignition[964]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:19:06.324792 ignition[964]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:19:06.324792 ignition[964]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:19:06.324792 ignition[964]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:19:06.324792 ignition[964]: INFO : files: files passed Jan 17 00:19:06.324792 ignition[964]: INFO : Ignition finished successfully Jan 17 00:19:06.314308 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:19:06.320107 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:19:06.329050 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:19:06.332813 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:19:06.335480 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:19:06.347959 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:19:06.347959 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:19:06.351016 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:19:06.353723 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:19:06.355742 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:19:06.369195 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:19:06.414623 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:19:06.414875 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:19:06.417388 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:19:06.418586 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:19:06.420385 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:19:06.426132 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:19:06.444861 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:19:06.448972 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:19:06.470211 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jan 17 00:19:06.471007 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:19:06.472064 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:19:06.473003 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:19:06.473151 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:19:06.474342 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:19:06.475311 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:19:06.476199 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:19:06.477102 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:19:06.477940 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:19:06.478870 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:19:06.479779 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:19:06.480737 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:19:06.481666 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:19:06.482604 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:19:06.483500 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:19:06.483642 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:19:06.484823 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:19:06.485772 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:19:06.486599 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:19:06.486726 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:19:06.487521 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 17 00:19:06.487644 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:19:06.488822 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:19:06.488989 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:19:06.489729 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:19:06.489839 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:19:06.490608 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:19:06.490712 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:19:06.500060 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:19:06.500998 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:19:06.501130 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:19:06.505133 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:19:06.505886 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:19:06.506380 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:19:06.506798 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:19:06.506885 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:19:06.512267 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:19:06.512710 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 17 00:19:06.516260 ignition[1018]: INFO : Ignition 2.19.0 Jan 17 00:19:06.516260 ignition[1018]: INFO : Stage: umount Jan 17 00:19:06.519990 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:19:06.519990 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:19:06.519990 ignition[1018]: INFO : umount: umount passed Jan 17 00:19:06.519990 ignition[1018]: INFO : Ignition finished successfully Jan 17 00:19:06.520571 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:19:06.520658 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:19:06.521455 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:19:06.521530 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:19:06.521959 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:19:06.521997 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:19:06.522366 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:19:06.522399 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:19:06.522749 systemd[1]: Stopped target network.target - Network. Jan 17 00:19:06.523350 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:19:06.523392 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:19:06.524254 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:19:06.525400 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:19:06.530907 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:19:06.533267 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:19:06.534057 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:19:06.534991 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 17 00:19:06.535037 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:19:06.535775 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:19:06.535814 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:19:06.536577 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:19:06.536619 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:19:06.537399 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:19:06.537437 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:19:06.539460 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:19:06.540492 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:19:06.542276 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:19:06.546896 systemd-networkd[785]: eth0: DHCPv6 lease lost Jan 17 00:19:06.548379 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:19:06.549648 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:19:06.549899 systemd-networkd[785]: eth1: DHCPv6 lease lost Jan 17 00:19:06.551543 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:19:06.551639 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:19:06.553671 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:19:06.553790 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:19:06.555466 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:19:06.555714 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:19:06.556238 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:19:06.556279 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:19:06.563928 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 17 00:19:06.564267 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:19:06.564313 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:19:06.564664 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:19:06.564699 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:19:06.565080 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:19:06.565118 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:19:06.565449 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:19:06.565480 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:19:06.565923 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:19:06.580139 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:19:06.580717 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:19:06.581486 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:19:06.581533 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:19:06.582288 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:19:06.582320 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:19:06.582898 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:19:06.582938 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:19:06.583913 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:19:06.583961 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:19:06.585017 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 17 00:19:06.585057 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:19:06.596055 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:19:06.596409 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:19:06.596462 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:19:06.596838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:19:06.596889 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:06.597522 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:19:06.597611 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:19:06.602535 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:19:06.602650 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:19:06.603767 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:19:06.609037 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:19:06.614979 systemd[1]: Switching root. Jan 17 00:19:06.649015 systemd-journald[188]: Journal stopped Jan 17 00:19:07.764131 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). 
Jan 17 00:19:07.764228 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:19:07.764250 kernel: SELinux: policy capability open_perms=1 Jan 17 00:19:07.764258 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:19:07.764267 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:19:07.764275 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:19:07.764287 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:19:07.764300 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:19:07.764314 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:19:07.764328 kernel: audit: type=1403 audit(1768609146.823:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:19:07.764340 systemd[1]: Successfully loaded SELinux policy in 47.290ms. Jan 17 00:19:07.764355 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.534ms. Jan 17 00:19:07.764367 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:19:07.764378 systemd[1]: Detected virtualization kvm. Jan 17 00:19:07.764387 systemd[1]: Detected architecture x86-64. Jan 17 00:19:07.764397 systemd[1]: Detected first boot. Jan 17 00:19:07.764406 systemd[1]: Hostname set to . Jan 17 00:19:07.764415 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:19:07.764423 zram_generator::config[1062]: No configuration found. Jan 17 00:19:07.764439 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:19:07.764447 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:19:07.764456 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 17 00:19:07.764465 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:19:07.764474 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:19:07.764483 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:19:07.764492 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:19:07.764501 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:19:07.764513 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:19:07.764522 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:19:07.764530 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:19:07.764539 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:19:07.764548 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:19:07.764557 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:19:07.764566 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:19:07.764576 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:19:07.764585 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:19:07.764596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:19:07.764605 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 00:19:07.764614 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:19:07.764623 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 00:19:07.764632 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 00:19:07.764641 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:19:07.764653 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:19:07.764662 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:19:07.764671 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:19:07.764679 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:19:07.764688 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:19:07.764697 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:19:07.764706 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:19:07.764715 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:19:07.764724 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:19:07.764736 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:19:07.764745 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:19:07.764753 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:19:07.764762 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:19:07.764772 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:19:07.764781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:19:07.764789 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:19:07.764798 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:19:07.764806 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:19:07.764818 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:19:07.764827 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:19:07.764836 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:19:07.764857 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:19:07.764866 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:19:07.764874 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:19:07.764883 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:19:07.764892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:19:07.764903 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:19:07.764912 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:19:07.764921 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:19:07.764930 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:19:07.764943 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 00:19:07.764961 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 00:19:07.764970 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 00:19:07.764981 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 00:19:07.764991 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:19:07.765000 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:19:07.765009 kernel: fuse: init (API version 7.39)
Jan 17 00:19:07.765017 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:19:07.765026 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:19:07.765035 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:19:07.765043 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 00:19:07.765052 kernel: loop: module loaded
Jan 17 00:19:07.765061 systemd[1]: Stopped verity-setup.service.
Jan 17 00:19:07.765072 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:19:07.765081 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:19:07.765090 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:19:07.765099 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:19:07.765107 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:19:07.765116 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:19:07.765125 kernel: ACPI: bus type drm_connector registered
Jan 17 00:19:07.765158 systemd-journald[1134]: Collecting audit messages is disabled.
Jan 17 00:19:07.765185 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:19:07.765195 systemd-journald[1134]: Journal started
Jan 17 00:19:07.765214 systemd-journald[1134]: Runtime Journal (/run/log/journal/f6baff18ea694b77bd1dcc842800ce54) is 8.0M, max 76.3M, 68.3M free.
Jan 17 00:19:07.422250 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:19:07.452717 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 17 00:19:07.453312 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 00:19:07.772866 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:19:07.771227 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:19:07.772217 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:19:07.772472 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:19:07.774062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:19:07.774338 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:19:07.775164 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:19:07.775375 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:19:07.776171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:19:07.777105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:19:07.777739 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:19:07.777982 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:19:07.779251 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:19:07.779399 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:19:07.780140 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:19:07.781179 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:19:07.784907 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:19:07.800292 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:19:07.809453 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:19:07.815749 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:19:07.816263 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:19:07.816296 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:19:07.817553 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:19:07.824322 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:19:07.831011 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:19:07.831537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:19:07.836920 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:19:07.841968 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:19:07.842380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:19:07.843808 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:19:07.848255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:19:07.856005 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:19:07.858045 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:19:07.861993 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:19:07.862559 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:19:07.863015 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:19:07.863548 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:19:07.870761 systemd-journald[1134]: Time spent on flushing to /var/log/journal/f6baff18ea694b77bd1dcc842800ce54 is 95.410ms for 1181 entries.
Jan 17 00:19:07.870761 systemd-journald[1134]: System Journal (/var/log/journal/f6baff18ea694b77bd1dcc842800ce54) is 8.0M, max 584.8M, 576.8M free.
Jan 17 00:19:08.006820 systemd-journald[1134]: Received client request to flush runtime journal.
Jan 17 00:19:08.006892 kernel: loop0: detected capacity change from 0 to 8
Jan 17 00:19:08.006916 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:19:08.006932 kernel: loop1: detected capacity change from 0 to 142488
Jan 17 00:19:07.884999 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:19:07.893297 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:19:07.895567 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:19:07.905649 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:19:07.938993 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:19:07.947537 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:19:07.983148 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:19:08.011893 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:19:08.013213 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:19:08.014259 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:19:08.015559 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:19:08.016739 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 00:19:08.030077 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:19:08.055978 kernel: loop2: detected capacity change from 0 to 140768
Jan 17 00:19:08.076351 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jan 17 00:19:08.077888 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jan 17 00:19:08.088407 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:19:08.095876 kernel: loop3: detected capacity change from 0 to 229808
Jan 17 00:19:08.146248 kernel: loop4: detected capacity change from 0 to 8
Jan 17 00:19:08.148875 kernel: loop5: detected capacity change from 0 to 142488
Jan 17 00:19:08.169942 kernel: loop6: detected capacity change from 0 to 140768
Jan 17 00:19:08.192872 kernel: loop7: detected capacity change from 0 to 229808
Jan 17 00:19:08.213795 (sd-merge)[1206]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 17 00:19:08.214424 (sd-merge)[1206]: Merged extensions into '/usr'.
Jan 17 00:19:08.218588 systemd[1]: Reloading requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:19:08.218714 systemd[1]: Reloading...
Jan 17 00:19:08.310163 zram_generator::config[1232]: No configuration found.
Jan 17 00:19:08.362382 ldconfig[1175]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 00:19:08.418278 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:19:08.455142 systemd[1]: Reloading finished in 234 ms.
Jan 17 00:19:08.483261 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 00:19:08.484174 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 00:19:08.487017 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:19:08.495011 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:19:08.497043 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:19:08.500009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:19:08.511773 systemd[1]: Reloading requested from client PID 1276 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:19:08.512028 systemd[1]: Reloading...
Jan 17 00:19:08.533807 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 00:19:08.534144 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 00:19:08.537558 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 00:19:08.537856 systemd-tmpfiles[1277]: ACLs are not supported, ignoring.
Jan 17 00:19:08.537993 systemd-tmpfiles[1277]: ACLs are not supported, ignoring.
Jan 17 00:19:08.541163 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:19:08.542592 systemd-tmpfiles[1277]: Skipping /boot
Jan 17 00:19:08.561868 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:19:08.562015 systemd-tmpfiles[1277]: Skipping /boot
Jan 17 00:19:08.585627 systemd-udevd[1278]: Using default interface naming scheme 'v255'.
Jan 17 00:19:08.609289 zram_generator::config[1316]: No configuration found.
Jan 17 00:19:08.725866 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 17 00:19:08.744877 kernel: ACPI: button: Power Button [PWRF]
Jan 17 00:19:08.755775 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:19:08.792951 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 17 00:19:08.799157 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 17 00:19:08.799382 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 17 00:19:08.799550 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 17 00:19:08.823689 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 00:19:08.824528 systemd[1]: Reloading finished in 312 ms.
Jan 17 00:19:08.843061 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:19:08.844906 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:19:08.849928 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1327)
Jan 17 00:19:08.865303 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:19:08.869997 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:19:08.876980 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 00:19:08.877470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:19:08.879019 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:19:08.885045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:19:08.887552 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:19:08.888401 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:19:08.892089 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 00:19:08.895865 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:19:08.900649 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:19:08.904367 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 00:19:08.905912 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:19:08.909732 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 17 00:19:08.937205 kernel: EDAC MC: Ver: 3.0.0
Jan 17 00:19:08.946458 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:19:08.946628 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:19:08.949488 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:19:08.949722 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:19:08.951300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:19:08.958597 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 00:19:08.964220 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 00:19:08.977896 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 17 00:19:08.978764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:19:08.979232 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:19:08.981575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:19:08.982354 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:19:08.983483 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:19:08.984131 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:19:09.001483 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Jan 17 00:19:09.006905 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 00:19:09.012412 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:19:09.012697 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:19:09.016948 kernel: Console: switching to colour dummy device 80x25
Jan 17 00:19:09.018239 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:19:09.022923 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:19:09.024643 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Jan 17 00:19:09.033048 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 17 00:19:09.033113 kernel: [drm] features: -context_init
Jan 17 00:19:09.033239 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:19:09.034027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:19:09.036063 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 00:19:09.047141 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 00:19:09.047208 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:19:09.048167 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 00:19:09.048588 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 00:19:09.051915 kernel: [drm] number of scanouts: 1
Jan 17 00:19:09.053372 augenrules[1428]: No rules
Jan 17 00:19:09.060577 kernel: [drm] number of cap sets: 0
Jan 17 00:19:09.060600 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 17 00:19:09.060110 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:19:09.060294 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:19:09.062456 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:19:09.062954 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:19:09.063090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:19:09.063500 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:19:09.063620 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:19:09.073933 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 17 00:19:09.074020 kernel: Console: switching to colour frame buffer device 160x50
Jan 17 00:19:09.080075 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 00:19:09.086951 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 17 00:19:09.093078 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:19:09.093288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:19:09.100147 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:19:09.101515 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:19:09.106032 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:19:09.110054 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:19:09.111469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:19:09.111574 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:19:09.112404 systemd[1]: Finished ensure-sysext.service.
Jan 17 00:19:09.112812 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 00:19:09.114425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:19:09.115904 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:19:09.132032 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 00:19:09.133071 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:19:09.133137 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:19:09.133246 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:19:09.137009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:19:09.138387 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 00:19:09.138835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:19:09.139040 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:19:09.140874 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:19:09.141064 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:19:09.142201 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:19:09.142349 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:19:09.145069 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 00:19:09.147872 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:19:09.147956 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:19:09.173415 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 00:19:09.183081 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 00:19:09.206869 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:19:09.238289 systemd-networkd[1395]: lo: Link UP
Jan 17 00:19:09.238298 systemd-networkd[1395]: lo: Gained carrier
Jan 17 00:19:09.240838 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 00:19:09.242065 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:19:09.242779 systemd-networkd[1395]: Enumeration completed
Jan 17 00:19:09.245079 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:19:09.245085 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:19:09.247338 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:19:09.247348 systemd-networkd[1395]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:19:09.247469 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 00:19:09.248197 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:19:09.248363 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 00:19:09.248609 systemd-networkd[1395]: eth0: Link UP
Jan 17 00:19:09.248614 systemd-networkd[1395]: eth0: Gained carrier
Jan 17 00:19:09.248628 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:19:09.250348 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:19:09.251355 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 00:19:09.254099 systemd-networkd[1395]: eth1: Link UP
Jan 17 00:19:09.254103 systemd-networkd[1395]: eth1: Gained carrier
Jan 17 00:19:09.254119 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:19:09.256810 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:19:09.264011 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 00:19:09.271621 systemd-resolved[1396]: Positive Trust Anchors:
Jan 17 00:19:09.271641 systemd-resolved[1396]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:19:09.271664 systemd-resolved[1396]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:19:09.276735 systemd-resolved[1396]: Using system hostname 'ci-4081-3-6-n-9d03cc5a8b'.
Jan 17 00:19:09.278607 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:19:09.279331 systemd[1]: Reached target network.target - Network.
Jan 17 00:19:09.279734 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:19:09.280140 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:19:09.280562 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 00:19:09.283720 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 00:19:09.284261 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 00:19:09.284667 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 00:19:09.285014 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 00:19:09.285329 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 00:19:09.285349 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:19:09.285656 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:19:09.288251 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:19:09.290162 systemd-networkd[1395]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 17 00:19:09.291617 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:19:09.291934 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection. Jan 17 00:19:09.297953 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:19:09.300505 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:19:09.301106 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:19:09.302038 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:19:09.302738 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:19:09.303187 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:19:09.303212 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:19:09.304715 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:19:09.309947 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:19:09.311821 systemd-networkd[1395]: eth0: DHCPv4 address 46.62.250.181/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 17 00:19:09.312008 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:19:09.313132 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection. Jan 17 00:19:09.316686 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:19:09.328073 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 17 00:19:09.329060 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:19:09.332409 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:19:09.343806 jq[1480]: false Jan 17 00:19:09.343990 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:19:09.344572 coreos-metadata[1478]: Jan 17 00:19:09.344 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 17 00:19:09.351478 extend-filesystems[1483]: Found loop4 Jan 17 00:19:09.359280 extend-filesystems[1483]: Found loop5 Jan 17 00:19:09.359280 extend-filesystems[1483]: Found loop6 Jan 17 00:19:09.359280 extend-filesystems[1483]: Found loop7 Jan 17 00:19:09.359280 extend-filesystems[1483]: Found sda Jan 17 00:19:09.359280 extend-filesystems[1483]: Found sda1 Jan 17 00:19:09.359280 extend-filesystems[1483]: Found sda2 Jan 17 00:19:09.359280 extend-filesystems[1483]: Found sda3 Jan 17 00:19:09.359280 extend-filesystems[1483]: Found usr Jan 17 00:19:09.359280 extend-filesystems[1483]: Found sda4 Jan 17 00:19:09.359280 extend-filesystems[1483]: Found sda6 Jan 17 00:19:09.359280 extend-filesystems[1483]: Found sda7 Jan 17 00:19:09.359280 extend-filesystems[1483]: Found sda9 Jan 17 00:19:09.359280 extend-filesystems[1483]: Checking size of /dev/sda9 Jan 17 00:19:09.355648 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 17 00:19:09.381636 coreos-metadata[1478]: Jan 17 00:19:09.354 INFO Fetch successful Jan 17 00:19:09.381636 coreos-metadata[1478]: Jan 17 00:19:09.354 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 17 00:19:09.381636 coreos-metadata[1478]: Jan 17 00:19:09.355 INFO Fetch successful Jan 17 00:19:09.368448 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 17 00:19:09.379139 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:19:09.390020 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:19:09.390751 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:19:09.391969 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:19:09.398040 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:19:09.402564 extend-filesystems[1483]: Resized partition /dev/sda9 Jan 17 00:19:09.409494 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:19:09.408237 dbus-daemon[1479]: [system] SELinux support is enabled Jan 17 00:19:09.412970 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:19:09.414124 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:19:09.446929 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Jan 17 00:19:09.427837 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:19:09.428045 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:19:09.432346 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:19:09.432516 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:19:09.442625 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:19:09.442674 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 17 00:19:09.443638 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:19:09.443655 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:19:09.459574 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:19:09.460911 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:19:09.464686 jq[1508]: true Jan 17 00:19:09.476158 update_engine[1500]: I20260117 00:19:09.476099 1500 main.cc:92] Flatcar Update Engine starting Jan 17 00:19:09.479531 update_engine[1500]: I20260117 00:19:09.478744 1500 update_check_scheduler.cc:74] Next update check in 12m0s Jan 17 00:19:09.491825 (ntainerd)[1520]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:19:09.492017 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:19:09.507063 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:19:09.513378 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:19:09.516154 tar[1511]: linux-amd64/LICENSE Jan 17 00:19:09.516154 tar[1511]: linux-amd64/helm Jan 17 00:19:09.558408 jq[1521]: true Jan 17 00:19:09.571565 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:19:09.586476 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1325) Jan 17 00:19:09.586005 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:19:09.603078 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:19:09.621781 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:19:09.629485 systemd-logind[1497]: New seat seat0. 
Jan 17 00:19:09.630788 systemd-logind[1497]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 00:19:09.630806 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:19:09.631722 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:19:09.660397 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:19:09.660576 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:19:09.663905 locksmithd[1528]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:19:09.673540 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:19:09.689482 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:19:09.701716 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:19:09.712925 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:19:09.714780 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:19:09.730876 bash[1560]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:19:09.732933 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:19:09.740091 systemd[1]: Starting sshkeys.service... Jan 17 00:19:09.750818 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:19:09.760727 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 17 00:19:09.778030 containerd[1520]: time="2026-01-17T00:19:09.777548094Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:19:09.783241 coreos-metadata[1577]: Jan 17 00:19:09.782 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 17 00:19:09.784139 coreos-metadata[1577]: Jan 17 00:19:09.783 INFO Fetch successful Jan 17 00:19:09.788444 unknown[1577]: wrote ssh authorized keys file for user: core Jan 17 00:19:09.810986 containerd[1520]: time="2026-01-17T00:19:09.810009841Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:09.813064 containerd[1520]: time="2026-01-17T00:19:09.813010014Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:09.813064 containerd[1520]: time="2026-01-17T00:19:09.813036784Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:19:09.813613 containerd[1520]: time="2026-01-17T00:19:09.813050004Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:19:09.814076 containerd[1520]: time="2026-01-17T00:19:09.813727735Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:19:09.814076 containerd[1520]: time="2026-01-17T00:19:09.813755325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:09.814076 containerd[1520]: time="2026-01-17T00:19:09.813817485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:09.814076 containerd[1520]: time="2026-01-17T00:19:09.813826575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:09.814076 containerd[1520]: time="2026-01-17T00:19:09.814036625Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:09.814076 containerd[1520]: time="2026-01-17T00:19:09.814048455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:09.814076 containerd[1520]: time="2026-01-17T00:19:09.814057955Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:09.814076 containerd[1520]: time="2026-01-17T00:19:09.814064795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:09.814217 containerd[1520]: time="2026-01-17T00:19:09.814140865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:09.814356 containerd[1520]: time="2026-01-17T00:19:09.814335745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:09.814573 containerd[1520]: time="2026-01-17T00:19:09.814465475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:09.814573 containerd[1520]: time="2026-01-17T00:19:09.814478585Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:19:09.814573 containerd[1520]: time="2026-01-17T00:19:09.814550575Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:19:09.814620 containerd[1520]: time="2026-01-17T00:19:09.814598105Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:19:09.824884 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Jan 17 00:19:09.847341 extend-filesystems[1507]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 17 00:19:09.847341 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 17 00:19:09.847341 extend-filesystems[1507]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. Jan 17 00:19:09.852406 extend-filesystems[1483]: Resized filesystem in /dev/sda9 Jan 17 00:19:09.852406 extend-filesystems[1483]: Found sr0 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.850897936Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.850948206Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.850962326Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.850982806Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.850994376Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.851137546Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.851303836Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.851389816Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.851401176Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.851411766Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.851423466Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.851432966Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.851442216Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:19:09.858018 containerd[1520]: time="2026-01-17T00:19:09.851452316Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:19:09.848709 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851462766Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851472476Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851482046Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851490786Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851511016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851521176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851530326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851543856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851552706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851561716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851570316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851580016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851589166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858439 containerd[1520]: time="2026-01-17T00:19:09.851600836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.848916 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851609576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851617466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851625946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851637626Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851652976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851661586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851669056Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851708996Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851722836Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851730246Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851738506Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851745216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851756736Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:19:09.858660 containerd[1520]: time="2026-01-17T00:19:09.851769506Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:19:09.858833 containerd[1520]: time="2026-01-17T00:19:09.851777426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.853615788Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.853742818Z" level=info msg="Connect containerd service" Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.853775378Z" level=info msg="using legacy CRI server" Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.853781398Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.854103388Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.855609979Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.855882520Z" level=info msg="Start subscribing containerd event" Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.855927740Z" level=info msg="Start recovering state" Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.855988440Z" level=info msg="Start event monitor" Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.855995860Z" level=info msg="Start 
snapshots syncer" Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.856002640Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.856013840Z" level=info msg="Start streaming server" Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.858254242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:19:09.859047 containerd[1520]: time="2026-01-17T00:19:09.858353012Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:19:09.859548 containerd[1520]: time="2026-01-17T00:19:09.859519533Z" level=info msg="containerd successfully booted in 0.083421s" Jan 17 00:19:09.859679 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:19:09.859893 update-ssh-keys[1583]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:19:09.862753 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:19:09.866250 systemd[1]: Finished sshkeys.service. Jan 17 00:19:10.096966 tar[1511]: linux-amd64/README.md Jan 17 00:19:10.111619 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:19:10.762227 systemd-networkd[1395]: eth1: Gained IPv6LL Jan 17 00:19:10.763657 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection. Jan 17 00:19:10.768379 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:19:10.777469 systemd[1]: Started sshd@0-46.62.250.181:22-20.161.92.111:47426.service - OpenSSH per-connection server daemon (20.161.92.111:47426). Jan 17 00:19:10.782270 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:19:10.787513 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:19:10.801939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 00:19:10.810250 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:19:10.860271 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:19:10.954573 systemd-networkd[1395]: eth0: Gained IPv6LL Jan 17 00:19:10.955421 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection. Jan 17 00:19:11.556631 sshd[1594]: Accepted publickey for core from 20.161.92.111 port 47426 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:19:11.559653 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:11.578651 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:19:11.591474 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:19:11.600764 systemd-logind[1497]: New session 1 of user core. Jan 17 00:19:11.627173 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:19:11.641465 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:19:11.660111 (systemd)[1609]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:19:11.797070 systemd[1609]: Queued start job for default target default.target. Jan 17 00:19:11.805279 systemd[1609]: Created slice app.slice - User Application Slice. Jan 17 00:19:11.805303 systemd[1609]: Reached target paths.target - Paths. Jan 17 00:19:11.805314 systemd[1609]: Reached target timers.target - Timers. Jan 17 00:19:11.808997 systemd[1609]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:19:11.818442 systemd[1609]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:19:11.819112 systemd[1609]: Reached target sockets.target - Sockets. Jan 17 00:19:11.819594 systemd[1609]: Reached target basic.target - Basic System. Jan 17 00:19:11.819680 systemd[1609]: Reached target default.target - Main User Target. 
Jan 17 00:19:11.819766 systemd[1609]: Startup finished in 148ms. Jan 17 00:19:11.820064 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:19:11.835167 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:19:12.176305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:12.182832 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:19:12.185350 systemd[1]: Startup finished in 1.473s (kernel) + 11.054s (initrd) + 5.407s (userspace) = 17.935s. Jan 17 00:19:12.191754 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:19:12.386186 systemd[1]: Started sshd@1-46.62.250.181:22-20.161.92.111:38790.service - OpenSSH per-connection server daemon (20.161.92.111:38790). Jan 17 00:19:12.857326 kubelet[1624]: E0117 00:19:12.857210 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:19:12.863502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:19:12.863961 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:19:12.864549 systemd[1]: kubelet.service: Consumed 1.538s CPU time. Jan 17 00:19:13.138118 sshd[1634]: Accepted publickey for core from 20.161.92.111 port 38790 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:19:13.141013 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:13.149719 systemd-logind[1497]: New session 2 of user core. Jan 17 00:19:13.159122 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 17 00:19:13.676234 sshd[1634]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:13.681014 systemd[1]: sshd@1-46.62.250.181:22-20.161.92.111:38790.service: Deactivated successfully.
Jan 17 00:19:13.684532 systemd[1]: session-2.scope: Deactivated successfully.
Jan 17 00:19:13.686714 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit.
Jan 17 00:19:13.688478 systemd-logind[1497]: Removed session 2.
Jan 17 00:19:13.817312 systemd[1]: Started sshd@2-46.62.250.181:22-20.161.92.111:38792.service - OpenSSH per-connection server daemon (20.161.92.111:38792).
Jan 17 00:19:14.589094 sshd[1643]: Accepted publickey for core from 20.161.92.111 port 38792 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:19:14.592324 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:14.599639 systemd-logind[1497]: New session 3 of user core.
Jan 17 00:19:14.610117 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 17 00:19:15.119778 sshd[1643]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:15.126163 systemd[1]: sshd@2-46.62.250.181:22-20.161.92.111:38792.service: Deactivated successfully.
Jan 17 00:19:15.129463 systemd[1]: session-3.scope: Deactivated successfully.
Jan 17 00:19:15.130701 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit.
Jan 17 00:19:15.132474 systemd-logind[1497]: Removed session 3.
Jan 17 00:19:15.257268 systemd[1]: Started sshd@3-46.62.250.181:22-20.161.92.111:38796.service - OpenSSH per-connection server daemon (20.161.92.111:38796).
Jan 17 00:19:16.031823 sshd[1650]: Accepted publickey for core from 20.161.92.111 port 38796 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:19:16.034656 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:16.042794 systemd-logind[1497]: New session 4 of user core.
Jan 17 00:19:16.055116 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 17 00:19:16.568753 sshd[1650]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:16.575552 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit.
Jan 17 00:19:16.577172 systemd[1]: sshd@3-46.62.250.181:22-20.161.92.111:38796.service: Deactivated successfully.
Jan 17 00:19:16.580196 systemd[1]: session-4.scope: Deactivated successfully.
Jan 17 00:19:16.581524 systemd-logind[1497]: Removed session 4.
Jan 17 00:19:16.709332 systemd[1]: Started sshd@4-46.62.250.181:22-20.161.92.111:38812.service - OpenSSH per-connection server daemon (20.161.92.111:38812).
Jan 17 00:19:17.468977 sshd[1657]: Accepted publickey for core from 20.161.92.111 port 38812 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:19:17.472245 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:17.480594 systemd-logind[1497]: New session 5 of user core.
Jan 17 00:19:17.490257 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 17 00:19:17.895162 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 17 00:19:17.895870 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:19:17.917262 sudo[1660]: pam_unix(sudo:session): session closed for user root
Jan 17 00:19:18.040489 sshd[1657]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:18.045794 systemd[1]: sshd@4-46.62.250.181:22-20.161.92.111:38812.service: Deactivated successfully.
Jan 17 00:19:18.049474 systemd[1]: session-5.scope: Deactivated successfully.
Jan 17 00:19:18.052143 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit.
Jan 17 00:19:18.053768 systemd-logind[1497]: Removed session 5.
Jan 17 00:19:18.178218 systemd[1]: Started sshd@5-46.62.250.181:22-20.161.92.111:38814.service - OpenSSH per-connection server daemon (20.161.92.111:38814).
Jan 17 00:19:18.946717 sshd[1665]: Accepted publickey for core from 20.161.92.111 port 38814 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:19:18.949507 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:18.956945 systemd-logind[1497]: New session 6 of user core.
Jan 17 00:19:18.968119 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 00:19:19.363339 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 17 00:19:19.364047 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:19:19.370601 sudo[1669]: pam_unix(sudo:session): session closed for user root
Jan 17 00:19:19.382599 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 17 00:19:19.383324 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:19:19.404227 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 17 00:19:19.418406 auditctl[1672]: No rules
Jan 17 00:19:19.419260 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 17 00:19:19.419693 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 17 00:19:19.427537 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:19:19.487271 augenrules[1690]: No rules
Jan 17 00:19:19.489958 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:19:19.492294 sudo[1668]: pam_unix(sudo:session): session closed for user root
Jan 17 00:19:19.615626 sshd[1665]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:19.620627 systemd[1]: sshd@5-46.62.250.181:22-20.161.92.111:38814.service: Deactivated successfully.
Jan 17 00:19:19.624335 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 00:19:19.626767 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit.
Jan 17 00:19:19.628663 systemd-logind[1497]: Removed session 6.
Jan 17 00:19:19.755273 systemd[1]: Started sshd@6-46.62.250.181:22-20.161.92.111:38824.service - OpenSSH per-connection server daemon (20.161.92.111:38824).
Jan 17 00:19:20.515892 sshd[1698]: Accepted publickey for core from 20.161.92.111 port 38824 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:19:20.518671 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:20.526992 systemd-logind[1497]: New session 7 of user core.
Jan 17 00:19:20.539129 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 00:19:20.932016 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 00:19:20.932795 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:19:21.380251 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 17 00:19:21.391627 (dockerd)[1717]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 17 00:19:21.832002 dockerd[1717]: time="2026-01-17T00:19:21.831894657Z" level=info msg="Starting up"
Jan 17 00:19:21.997296 dockerd[1717]: time="2026-01-17T00:19:21.996950884Z" level=info msg="Loading containers: start."
Jan 17 00:19:22.188077 kernel: Initializing XFRM netlink socket
Jan 17 00:19:22.235208 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection.
Jan 17 00:19:22.333363 systemd-networkd[1395]: docker0: Link UP
Jan 17 00:19:22.361891 dockerd[1717]: time="2026-01-17T00:19:22.361834988Z" level=info msg="Loading containers: done."
Jan 17 00:19:22.380879 dockerd[1717]: time="2026-01-17T00:19:22.380614634Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 00:19:22.380879 dockerd[1717]: time="2026-01-17T00:19:22.380719084Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 00:19:22.380879 dockerd[1717]: time="2026-01-17T00:19:22.380827434Z" level=info msg="Daemon has completed initialization"
Jan 17 00:19:22.900248 systemd-resolved[1396]: Clock change detected. Flushing caches.
Jan 17 00:19:22.900636 systemd-timesyncd[1451]: Contacted time server 80.153.195.191:123 (2.flatcar.pool.ntp.org).
Jan 17 00:19:22.900733 systemd-timesyncd[1451]: Initial clock synchronization to Sat 2026-01-17 00:19:22.900016 UTC.
Jan 17 00:19:22.911105 dockerd[1717]: time="2026-01-17T00:19:22.911046664Z" level=info msg="API listen on /run/docker.sock"
Jan 17 00:19:22.911390 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 17 00:19:23.480684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:19:23.487658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:19:23.655395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:19:23.659352 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:19:23.693497 kubelet[1863]: E0117 00:19:23.693407 1863 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:19:23.701391 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:19:23.701573 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:19:24.260411 containerd[1520]: time="2026-01-17T00:19:24.260349518Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 17 00:19:24.865583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013626557.mount: Deactivated successfully.
Jan 17 00:19:26.534507 containerd[1520]: time="2026-01-17T00:19:26.534426652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:26.535545 containerd[1520]: time="2026-01-17T00:19:26.535409063Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114812"
Jan 17 00:19:26.537562 containerd[1520]: time="2026-01-17T00:19:26.536371974Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:26.538680 containerd[1520]: time="2026-01-17T00:19:26.538406486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:26.539472 containerd[1520]: time="2026-01-17T00:19:26.539092246Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.278695748s"
Jan 17 00:19:26.539472 containerd[1520]: time="2026-01-17T00:19:26.539116106Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Jan 17 00:19:26.539653 containerd[1520]: time="2026-01-17T00:19:26.539629537Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 17 00:19:28.200603 containerd[1520]: time="2026-01-17T00:19:28.200552460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:28.201521 containerd[1520]: time="2026-01-17T00:19:28.201489141Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016803"
Jan 17 00:19:28.202647 containerd[1520]: time="2026-01-17T00:19:28.202627742Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:28.204767 containerd[1520]: time="2026-01-17T00:19:28.204748754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:28.205542 containerd[1520]: time="2026-01-17T00:19:28.205400994Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.665737037s"
Jan 17 00:19:28.205542 containerd[1520]: time="2026-01-17T00:19:28.205427894Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 17 00:19:28.206085 containerd[1520]: time="2026-01-17T00:19:28.205845345Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 17 00:19:29.445753 containerd[1520]: time="2026-01-17T00:19:29.445702227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:29.446925 containerd[1520]: time="2026-01-17T00:19:29.446763568Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158124"
Jan 17 00:19:29.448015 containerd[1520]: time="2026-01-17T00:19:29.447835589Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:29.449956 containerd[1520]: time="2026-01-17T00:19:29.449939131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:29.450645 containerd[1520]: time="2026-01-17T00:19:29.450627432Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.244763227s"
Jan 17 00:19:29.450706 containerd[1520]: time="2026-01-17T00:19:29.450696462Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Jan 17 00:19:29.451164 containerd[1520]: time="2026-01-17T00:19:29.451133892Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 17 00:19:30.755045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365097419.mount: Deactivated successfully.
Jan 17 00:19:31.186028 containerd[1520]: time="2026-01-17T00:19:31.185753487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:31.187615 containerd[1520]: time="2026-01-17T00:19:31.187420908Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930124"
Jan 17 00:19:31.189129 containerd[1520]: time="2026-01-17T00:19:31.188362099Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:31.190240 containerd[1520]: time="2026-01-17T00:19:31.190014061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:31.190831 containerd[1520]: time="2026-01-17T00:19:31.190405201Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.739248539s"
Jan 17 00:19:31.190831 containerd[1520]: time="2026-01-17T00:19:31.190439121Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 17 00:19:31.191144 containerd[1520]: time="2026-01-17T00:19:31.191130562Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 17 00:19:31.715591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760905693.mount: Deactivated successfully.
Jan 17 00:19:32.623639 containerd[1520]: time="2026-01-17T00:19:32.623554695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:32.625239 containerd[1520]: time="2026-01-17T00:19:32.624949886Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332"
Jan 17 00:19:32.626291 containerd[1520]: time="2026-01-17T00:19:32.626256067Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:32.632105 containerd[1520]: time="2026-01-17T00:19:32.632068232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:32.633835 containerd[1520]: time="2026-01-17T00:19:32.633294773Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.442083611s"
Jan 17 00:19:32.633835 containerd[1520]: time="2026-01-17T00:19:32.633349483Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 17 00:19:32.633933 containerd[1520]: time="2026-01-17T00:19:32.633907853Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 17 00:19:33.122124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189751110.mount: Deactivated successfully.
Jan 17 00:19:33.127870 containerd[1520]: time="2026-01-17T00:19:33.127731955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:33.129105 containerd[1520]: time="2026-01-17T00:19:33.129029796Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Jan 17 00:19:33.131248 containerd[1520]: time="2026-01-17T00:19:33.129935517Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:33.133948 containerd[1520]: time="2026-01-17T00:19:33.133875700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:33.135341 containerd[1520]: time="2026-01-17T00:19:33.135287151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 501.345377ms"
Jan 17 00:19:33.135539 containerd[1520]: time="2026-01-17T00:19:33.135511251Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 17 00:19:33.136351 containerd[1520]: time="2026-01-17T00:19:33.136285222Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 17 00:19:33.670529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766819225.mount: Deactivated successfully.
Jan 17 00:19:33.731020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 17 00:19:33.742342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:19:33.988823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:19:33.989561 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:19:34.022163 kubelet[2020]: E0117 00:19:34.022125 2020 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:19:34.025274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:19:34.025444 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:19:35.160970 containerd[1520]: time="2026-01-17T00:19:35.160922179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:35.161862 containerd[1520]: time="2026-01-17T00:19:35.161755559Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926291"
Jan 17 00:19:35.163516 containerd[1520]: time="2026-01-17T00:19:35.162664780Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:35.164517 containerd[1520]: time="2026-01-17T00:19:35.164488862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:19:35.165312 containerd[1520]: time="2026-01-17T00:19:35.165288692Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.02895886s"
Jan 17 00:19:35.165345 containerd[1520]: time="2026-01-17T00:19:35.165314912Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jan 17 00:19:40.264077 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:19:40.272967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:19:40.299885 systemd[1]: Reloading requested from client PID 2098 ('systemctl') (unit session-7.scope)...
Jan 17 00:19:40.299903 systemd[1]: Reloading...
Jan 17 00:19:40.419257 zram_generator::config[2141]: No configuration found.
Jan 17 00:19:40.503789 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:19:40.565538 systemd[1]: Reloading finished in 265 ms.
Jan 17 00:19:40.605966 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 17 00:19:40.606050 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 17 00:19:40.606557 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:19:40.613410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:19:40.761102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:19:40.765504 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 00:19:40.795381 kubelet[2191]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:19:40.795381 kubelet[2191]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 17 00:19:40.795381 kubelet[2191]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:19:40.796413 kubelet[2191]: I0117 00:19:40.796327 2191 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:19:41.074392 kubelet[2191]: I0117 00:19:41.074339 2191 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 17 00:19:41.074392 kubelet[2191]: I0117 00:19:41.074383 2191 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:19:41.074738 kubelet[2191]: I0117 00:19:41.074713 2191 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 17 00:19:41.096683 kubelet[2191]: E0117 00:19:41.096635 2191 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://46.62.250.181:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.62.250.181:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 17 00:19:41.097241 kubelet[2191]: I0117 00:19:41.097115 2191 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:19:41.103070 kubelet[2191]: E0117 00:19:41.103027 2191 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:19:41.103114 kubelet[2191]: I0117 00:19:41.103072 2191 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:19:41.110366 kubelet[2191]: I0117 00:19:41.110317 2191 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 00:19:41.110596 kubelet[2191]: I0117 00:19:41.110569 2191 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:19:41.110730 kubelet[2191]: I0117 00:19:41.110588 2191 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-9d03cc5a8b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 17 00:19:41.110730 kubelet[2191]: I0117 00:19:41.110719 2191 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:19:41.110730 kubelet[2191]: I0117 00:19:41.110726 2191 container_manager_linux.go:303] "Creating device plugin manager"
Jan 17 00:19:41.111453 kubelet[2191]: I0117 00:19:41.111423 2191 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:19:41.113255 kubelet[2191]: I0117 00:19:41.113191 2191 kubelet.go:480] "Attempting to sync node with API server"
Jan 17 00:19:41.113255 kubelet[2191]: I0117 00:19:41.113221 2191 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:19:41.113255 kubelet[2191]: I0117 00:19:41.113240 2191 kubelet.go:386] "Adding apiserver pod source"
Jan 17 00:19:41.113255 kubelet[2191]: I0117 00:19:41.113250 2191 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:19:41.122524 kubelet[2191]: E0117 00:19:41.122435 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.250.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-9d03cc5a8b&limit=500&resourceVersion=0\": dial tcp 46.62.250.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 17 00:19:41.122733 kubelet[2191]: I0117 00:19:41.122681 2191 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:19:41.124156 kubelet[2191]: I0117 00:19:41.123833 2191 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 17 00:19:41.125397 kubelet[2191]: W0117 00:19:41.125356 2191 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 00:19:41.134851 kubelet[2191]: E0117 00:19:41.134819 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.250.181:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.250.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 17 00:19:41.137775 kubelet[2191]: I0117 00:19:41.137742 2191 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 17 00:19:41.137858 kubelet[2191]: I0117 00:19:41.137838 2191 server.go:1289] "Started kubelet"
Jan 17 00:19:41.140248 kubelet[2191]: I0117 00:19:41.139186 2191 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:19:41.140248 kubelet[2191]: I0117 00:19:41.139879 2191 server.go:317] "Adding debug handlers to kubelet server"
Jan 17 00:19:41.144748 kubelet[2191]: I0117 00:19:41.144704 2191 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:19:41.147262 kubelet[2191]: I0117 00:19:41.146449 2191 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:19:41.147262 kubelet[2191]: I0117 00:19:41.146828 2191 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:19:41.149184 kubelet[2191]: E0117 00:19:41.147077 2191 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.62.250.181:6443/api/v1/namespaces/default/events\": dial tcp 46.62.250.181:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-9d03cc5a8b.188b5cae043651f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-9d03cc5a8b,UID:ci-4081-3-6-n-9d03cc5a8b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-9d03cc5a8b,},FirstTimestamp:2026-01-17 00:19:41.137785328 +0000 UTC m=+0.369050959,LastTimestamp:2026-01-17 00:19:41.137785328 +0000 UTC m=+0.369050959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-9d03cc5a8b,}"
Jan 17 00:19:41.150096 kubelet[2191]: I0117 00:19:41.150068 2191 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:19:41.152422 kubelet[2191]: I0117 00:19:41.152333 2191 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 17 00:19:41.155743 kubelet[2191]: I0117 00:19:41.155711 2191 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 17 00:19:41.155925 kubelet[2191]: E0117 00:19:41.155894 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found"
Jan 17 00:19:41.156159 kubelet[2191]: I0117 00:19:41.156130 2191 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 17 00:19:41.156776 kubelet[2191]: I0117 00:19:41.156744 2191 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 00:19:41.156888 kubelet[2191]: E0117 00:19:41.156862 2191 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 00:19:41.157043 kubelet[2191]: E0117 00:19:41.157013 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.250.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-9d03cc5a8b?timeout=10s\": dial tcp 46.62.250.181:6443: connect: connection refused" interval="200ms"
Jan 17 00:19:41.157206 kubelet[2191]: I0117 00:19:41.157176 2191 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 00:19:41.158315 kubelet[2191]: I0117 00:19:41.158291 2191 factory.go:223] Registration of the containerd container factory successfully
Jan 17 00:19:41.158315 kubelet[2191]: I0117 00:19:41.158304 2191 factory.go:223] Registration of the systemd container factory successfully
Jan 17 00:19:41.179771 kubelet[2191]: E0117 00:19:41.179723 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.250.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.250.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 17 00:19:41.185738 kubelet[2191]: I0117 00:19:41.184696 2191 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 17 00:19:41.185738 kubelet[2191]: I0117 00:19:41.184730 2191 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 17 00:19:41.185738 kubelet[2191]: I0117 00:19:41.184759 2191 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 17 00:19:41.185738 kubelet[2191]: I0117 00:19:41.184771 2191 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:19:41.185738 kubelet[2191]: E0117 00:19:41.184845 2191 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:19:41.185738 kubelet[2191]: E0117 00:19:41.185632 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.250.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.250.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:19:41.188100 kubelet[2191]: I0117 00:19:41.188065 2191 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:19:41.188100 kubelet[2191]: I0117 00:19:41.188082 2191 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:19:41.188100 kubelet[2191]: I0117 00:19:41.188107 2191 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:19:41.192300 kubelet[2191]: I0117 00:19:41.192269 2191 policy_none.go:49] "None policy: Start" Jan 17 00:19:41.192300 kubelet[2191]: I0117 00:19:41.192288 2191 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:19:41.192300 kubelet[2191]: I0117 00:19:41.192297 2191 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:19:41.200436 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:19:41.214794 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:19:41.220412 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 17 00:19:41.228774 kubelet[2191]: E0117 00:19:41.228738 2191 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:19:41.229027 kubelet[2191]: I0117 00:19:41.228924 2191 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:19:41.229027 kubelet[2191]: I0117 00:19:41.228936 2191 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:19:41.229330 kubelet[2191]: I0117 00:19:41.229298 2191 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:19:41.230513 kubelet[2191]: E0117 00:19:41.230489 2191 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:19:41.231492 kubelet[2191]: E0117 00:19:41.230526 2191 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-9d03cc5a8b\" not found" Jan 17 00:19:41.304706 systemd[1]: Created slice kubepods-burstable-pod64e04ecaef7511b3cae71e6f0c888a69.slice - libcontainer container kubepods-burstable-pod64e04ecaef7511b3cae71e6f0c888a69.slice. Jan 17 00:19:41.314292 kubelet[2191]: E0117 00:19:41.314026 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.319034 systemd[1]: Created slice kubepods-burstable-poda30811914ccf212d5bd374a66fe9846b.slice - libcontainer container kubepods-burstable-poda30811914ccf212d5bd374a66fe9846b.slice. 
Jan 17 00:19:41.325292 kubelet[2191]: E0117 00:19:41.325018 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.330012 systemd[1]: Created slice kubepods-burstable-pod0ad47e16ec78025bbb12a794cd3b3cd9.slice - libcontainer container kubepods-burstable-pod0ad47e16ec78025bbb12a794cd3b3cd9.slice. Jan 17 00:19:41.333812 kubelet[2191]: I0117 00:19:41.333501 2191 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.333812 kubelet[2191]: E0117 00:19:41.333760 2191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.250.181:6443/api/v1/nodes\": dial tcp 46.62.250.181:6443: connect: connection refused" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.334800 kubelet[2191]: E0117 00:19:41.334760 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.358308 kubelet[2191]: I0117 00:19:41.358117 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64e04ecaef7511b3cae71e6f0c888a69-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"64e04ecaef7511b3cae71e6f0c888a69\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.358308 kubelet[2191]: I0117 00:19:41.358174 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64e04ecaef7511b3cae71e6f0c888a69-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"64e04ecaef7511b3cae71e6f0c888a69\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.358308 
kubelet[2191]: I0117 00:19:41.358207 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a30811914ccf212d5bd374a66fe9846b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"a30811914ccf212d5bd374a66fe9846b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.358308 kubelet[2191]: E0117 00:19:41.358246 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.250.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-9d03cc5a8b?timeout=10s\": dial tcp 46.62.250.181:6443: connect: connection refused" interval="400ms" Jan 17 00:19:41.358308 kubelet[2191]: I0117 00:19:41.358265 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a30811914ccf212d5bd374a66fe9846b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"a30811914ccf212d5bd374a66fe9846b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.358506 kubelet[2191]: I0117 00:19:41.358294 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a30811914ccf212d5bd374a66fe9846b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"a30811914ccf212d5bd374a66fe9846b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.358506 kubelet[2191]: I0117 00:19:41.358356 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a30811914ccf212d5bd374a66fe9846b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"a30811914ccf212d5bd374a66fe9846b\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.358506 kubelet[2191]: I0117 00:19:41.358385 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64e04ecaef7511b3cae71e6f0c888a69-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"64e04ecaef7511b3cae71e6f0c888a69\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.358506 kubelet[2191]: I0117 00:19:41.358410 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a30811914ccf212d5bd374a66fe9846b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"a30811914ccf212d5bd374a66fe9846b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.358506 kubelet[2191]: I0117 00:19:41.358442 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ad47e16ec78025bbb12a794cd3b3cd9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"0ad47e16ec78025bbb12a794cd3b3cd9\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.537394 kubelet[2191]: I0117 00:19:41.537091 2191 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.537994 kubelet[2191]: E0117 00:19:41.537924 2191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.250.181:6443/api/v1/nodes\": dial tcp 46.62.250.181:6443: connect: connection refused" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.616704 containerd[1520]: time="2026-01-17T00:19:41.616482727Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-9d03cc5a8b,Uid:64e04ecaef7511b3cae71e6f0c888a69,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:41.626661 containerd[1520]: time="2026-01-17T00:19:41.626493145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b,Uid:a30811914ccf212d5bd374a66fe9846b,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:41.635853 containerd[1520]: time="2026-01-17T00:19:41.635792793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-9d03cc5a8b,Uid:0ad47e16ec78025bbb12a794cd3b3cd9,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:41.759775 kubelet[2191]: E0117 00:19:41.759712 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.250.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-9d03cc5a8b?timeout=10s\": dial tcp 46.62.250.181:6443: connect: connection refused" interval="800ms" Jan 17 00:19:41.941177 kubelet[2191]: I0117 00:19:41.940988 2191 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:41.941829 kubelet[2191]: E0117 00:19:41.941527 2191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.250.181:6443/api/v1/nodes\": dial tcp 46.62.250.181:6443: connect: connection refused" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:42.053147 kubelet[2191]: E0117 00:19:42.053064 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.250.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-9d03cc5a8b&limit=500&resourceVersion=0\": dial tcp 46.62.250.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:19:42.089236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165197128.mount: Deactivated successfully. 
Jan 17 00:19:42.097702 containerd[1520]: time="2026-01-17T00:19:42.097623687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:42.100098 containerd[1520]: time="2026-01-17T00:19:42.099704619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:19:42.101088 containerd[1520]: time="2026-01-17T00:19:42.100993470Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:42.102410 containerd[1520]: time="2026-01-17T00:19:42.102304011Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:42.103609 containerd[1520]: time="2026-01-17T00:19:42.103481282Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:42.105256 containerd[1520]: time="2026-01-17T00:19:42.104709853Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:19:42.106406 containerd[1520]: time="2026-01-17T00:19:42.106344125Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Jan 17 00:19:42.117262 containerd[1520]: time="2026-01-17T00:19:42.117143484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:42.118905 
containerd[1520]: time="2026-01-17T00:19:42.118615035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.001548ms" Jan 17 00:19:42.123151 containerd[1520]: time="2026-01-17T00:19:42.123088389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.200396ms" Jan 17 00:19:42.125095 containerd[1520]: time="2026-01-17T00:19:42.125027670Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 498.420935ms" Jan 17 00:19:42.296382 containerd[1520]: time="2026-01-17T00:19:42.295754632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:19:42.296382 containerd[1520]: time="2026-01-17T00:19:42.295871903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:19:42.296382 containerd[1520]: time="2026-01-17T00:19:42.295899713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:42.296382 containerd[1520]: time="2026-01-17T00:19:42.296130873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:42.307359 containerd[1520]: time="2026-01-17T00:19:42.305990801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:19:42.307359 containerd[1520]: time="2026-01-17T00:19:42.307315842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:19:42.307534 containerd[1520]: time="2026-01-17T00:19:42.307382852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:42.311259 containerd[1520]: time="2026-01-17T00:19:42.311041645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:19:42.311362 containerd[1520]: time="2026-01-17T00:19:42.311192565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:42.312893 containerd[1520]: time="2026-01-17T00:19:42.312777207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:19:42.312893 containerd[1520]: time="2026-01-17T00:19:42.312801387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:42.313087 containerd[1520]: time="2026-01-17T00:19:42.313024487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:42.330381 systemd[1]: Started cri-containerd-db0a091ad5ccf313493ea096bbb9c5987fbb30ebbd49093311485704d449cd62.scope - libcontainer container db0a091ad5ccf313493ea096bbb9c5987fbb30ebbd49093311485704d449cd62. 
Jan 17 00:19:42.344621 systemd[1]: Started cri-containerd-ff6886cca7473d7d2b521a6a9a9f925faf7babb0300a0c8b2a5faaecb7eb6a9e.scope - libcontainer container ff6886cca7473d7d2b521a6a9a9f925faf7babb0300a0c8b2a5faaecb7eb6a9e. Jan 17 00:19:42.349609 systemd[1]: Started cri-containerd-c7bc3ed56ba63377c906a4aff3119468e665ba0ec4e39ab55196b70f4d5d7f08.scope - libcontainer container c7bc3ed56ba63377c906a4aff3119468e665ba0ec4e39ab55196b70f4d5d7f08. Jan 17 00:19:42.373130 kubelet[2191]: E0117 00:19:42.373075 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.250.181:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.250.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:19:42.394835 containerd[1520]: time="2026-01-17T00:19:42.394792025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-9d03cc5a8b,Uid:0ad47e16ec78025bbb12a794cd3b3cd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff6886cca7473d7d2b521a6a9a9f925faf7babb0300a0c8b2a5faaecb7eb6a9e\"" Jan 17 00:19:42.397937 kubelet[2191]: E0117 00:19:42.397779 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.250.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.250.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:19:42.398202 containerd[1520]: time="2026-01-17T00:19:42.398171798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-9d03cc5a8b,Uid:64e04ecaef7511b3cae71e6f0c888a69,Namespace:kube-system,Attempt:0,} returns sandbox id \"db0a091ad5ccf313493ea096bbb9c5987fbb30ebbd49093311485704d449cd62\"" Jan 17 00:19:42.404101 containerd[1520]: time="2026-01-17T00:19:42.403707702Z" 
level=info msg="CreateContainer within sandbox \"ff6886cca7473d7d2b521a6a9a9f925faf7babb0300a0c8b2a5faaecb7eb6a9e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:19:42.405076 containerd[1520]: time="2026-01-17T00:19:42.405051124Z" level=info msg="CreateContainer within sandbox \"db0a091ad5ccf313493ea096bbb9c5987fbb30ebbd49093311485704d449cd62\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:19:42.413272 containerd[1520]: time="2026-01-17T00:19:42.413223170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b,Uid:a30811914ccf212d5bd374a66fe9846b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7bc3ed56ba63377c906a4aff3119468e665ba0ec4e39ab55196b70f4d5d7f08\"" Jan 17 00:19:42.417233 kubelet[2191]: E0117 00:19:42.417147 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.250.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.250.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:19:42.418630 containerd[1520]: time="2026-01-17T00:19:42.418546675Z" level=info msg="CreateContainer within sandbox \"c7bc3ed56ba63377c906a4aff3119468e665ba0ec4e39ab55196b70f4d5d7f08\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:19:42.421777 containerd[1520]: time="2026-01-17T00:19:42.421635877Z" level=info msg="CreateContainer within sandbox \"ff6886cca7473d7d2b521a6a9a9f925faf7babb0300a0c8b2a5faaecb7eb6a9e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4e478567759390bc2b85929bb6b2af3fd47e03ea855d1f6bfca8a7ea75f0e60\"" Jan 17 00:19:42.422434 containerd[1520]: time="2026-01-17T00:19:42.422407378Z" level=info msg="StartContainer for \"c4e478567759390bc2b85929bb6b2af3fd47e03ea855d1f6bfca8a7ea75f0e60\"" Jan 17 
00:19:42.424631 containerd[1520]: time="2026-01-17T00:19:42.424562520Z" level=info msg="CreateContainer within sandbox \"db0a091ad5ccf313493ea096bbb9c5987fbb30ebbd49093311485704d449cd62\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"340341acad604ada87aa55e9bcce49afbc7cc3698f5f3b17c9459e55de3eea4a\"" Jan 17 00:19:42.424886 containerd[1520]: time="2026-01-17T00:19:42.424873080Z" level=info msg="StartContainer for \"340341acad604ada87aa55e9bcce49afbc7cc3698f5f3b17c9459e55de3eea4a\"" Jan 17 00:19:42.439178 containerd[1520]: time="2026-01-17T00:19:42.439069282Z" level=info msg="CreateContainer within sandbox \"c7bc3ed56ba63377c906a4aff3119468e665ba0ec4e39ab55196b70f4d5d7f08\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b3468057463ba7d53bbe8024efbc3761083e605b6303eeab67266a56c0b8320d\"" Jan 17 00:19:42.439761 containerd[1520]: time="2026-01-17T00:19:42.439659532Z" level=info msg="StartContainer for \"b3468057463ba7d53bbe8024efbc3761083e605b6303eeab67266a56c0b8320d\"" Jan 17 00:19:42.451381 systemd[1]: Started cri-containerd-c4e478567759390bc2b85929bb6b2af3fd47e03ea855d1f6bfca8a7ea75f0e60.scope - libcontainer container c4e478567759390bc2b85929bb6b2af3fd47e03ea855d1f6bfca8a7ea75f0e60. Jan 17 00:19:42.463370 systemd[1]: Started cri-containerd-340341acad604ada87aa55e9bcce49afbc7cc3698f5f3b17c9459e55de3eea4a.scope - libcontainer container 340341acad604ada87aa55e9bcce49afbc7cc3698f5f3b17c9459e55de3eea4a. Jan 17 00:19:42.477647 systemd[1]: Started cri-containerd-b3468057463ba7d53bbe8024efbc3761083e605b6303eeab67266a56c0b8320d.scope - libcontainer container b3468057463ba7d53bbe8024efbc3761083e605b6303eeab67266a56c0b8320d. 
Jan 17 00:19:42.504526 containerd[1520]: time="2026-01-17T00:19:42.504479756Z" level=info msg="StartContainer for \"c4e478567759390bc2b85929bb6b2af3fd47e03ea855d1f6bfca8a7ea75f0e60\" returns successfully" Jan 17 00:19:42.525233 containerd[1520]: time="2026-01-17T00:19:42.524154163Z" level=info msg="StartContainer for \"340341acad604ada87aa55e9bcce49afbc7cc3698f5f3b17c9459e55de3eea4a\" returns successfully" Jan 17 00:19:42.551039 containerd[1520]: time="2026-01-17T00:19:42.550684565Z" level=info msg="StartContainer for \"b3468057463ba7d53bbe8024efbc3761083e605b6303eeab67266a56c0b8320d\" returns successfully" Jan 17 00:19:42.560946 kubelet[2191]: E0117 00:19:42.560896 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.250.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-9d03cc5a8b?timeout=10s\": dial tcp 46.62.250.181:6443: connect: connection refused" interval="1.6s" Jan 17 00:19:42.744901 kubelet[2191]: I0117 00:19:42.744876 2191 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:43.198159 kubelet[2191]: E0117 00:19:43.198115 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:43.203262 kubelet[2191]: E0117 00:19:43.202555 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:43.203865 kubelet[2191]: E0117 00:19:43.203839 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:43.923456 kubelet[2191]: I0117 00:19:43.923324 2191 kubelet_node_status.go:78] "Successfully registered node" 
node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:43.923456 kubelet[2191]: E0117 00:19:43.923412 2191 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-9d03cc5a8b\": node \"ci-4081-3-6-n-9d03cc5a8b\" not found" Jan 17 00:19:43.942944 kubelet[2191]: E0117 00:19:43.942892 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" Jan 17 00:19:44.043560 kubelet[2191]: E0117 00:19:44.043518 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" Jan 17 00:19:44.144138 kubelet[2191]: E0117 00:19:44.144068 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" Jan 17 00:19:44.208448 kubelet[2191]: E0117 00:19:44.207758 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:44.209056 kubelet[2191]: E0117 00:19:44.209013 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:44.244880 kubelet[2191]: E0117 00:19:44.244815 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" Jan 17 00:19:44.345715 kubelet[2191]: E0117 00:19:44.345645 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" Jan 17 00:19:44.446777 kubelet[2191]: E0117 00:19:44.446692 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" Jan 17 00:19:44.547686 kubelet[2191]: E0117 00:19:44.547611 2191 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" Jan 17 00:19:44.648069 kubelet[2191]: E0117 00:19:44.647996 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" Jan 17 00:19:44.748689 kubelet[2191]: E0117 00:19:44.748633 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9d03cc5a8b\" not found" Jan 17 00:19:44.858282 kubelet[2191]: I0117 00:19:44.857712 2191 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:44.868609 kubelet[2191]: I0117 00:19:44.868473 2191 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:44.873050 kubelet[2191]: I0117 00:19:44.873034 2191 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:45.134991 kubelet[2191]: I0117 00:19:45.134557 2191 apiserver.go:52] "Watching apiserver" Jan 17 00:19:45.156541 kubelet[2191]: I0117 00:19:45.156461 2191 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:19:45.744466 systemd[1]: Reloading requested from client PID 2479 ('systemctl') (unit session-7.scope)... Jan 17 00:19:45.744494 systemd[1]: Reloading... Jan 17 00:19:45.907250 zram_generator::config[2522]: No configuration found. Jan 17 00:19:45.996872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:19:46.068631 systemd[1]: Reloading finished in 323 ms. Jan 17 00:19:46.120116 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:46.144384 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 17 00:19:46.144617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:46.150653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:46.286107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:46.290045 (kubelet)[2570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:19:46.335495 kubelet[2570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:19:46.336129 kubelet[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:19:46.336274 kubelet[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 00:19:46.336358 kubelet[2570]: I0117 00:19:46.336334 2570 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:19:46.340704 kubelet[2570]: I0117 00:19:46.340685 2570 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:19:46.340777 kubelet[2570]: I0117 00:19:46.340770 2570 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:19:46.340945 kubelet[2570]: I0117 00:19:46.340936 2570 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:19:46.341786 kubelet[2570]: I0117 00:19:46.341773 2570 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:19:46.343982 kubelet[2570]: I0117 00:19:46.343971 2570 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:19:46.349089 kubelet[2570]: E0117 00:19:46.349068 2570 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:19:46.349234 kubelet[2570]: I0117 00:19:46.349205 2570 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:19:46.356172 kubelet[2570]: I0117 00:19:46.356151 2570 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:19:46.356514 kubelet[2570]: I0117 00:19:46.356492 2570 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:19:46.357947 kubelet[2570]: I0117 00:19:46.357827 2570 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-9d03cc5a8b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:19:46.358049 kubelet[2570]: I0117 00:19:46.358041 2570 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 
00:19:46.358094 kubelet[2570]: I0117 00:19:46.358088 2570 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:19:46.358157 kubelet[2570]: I0117 00:19:46.358151 2570 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:19:46.358342 kubelet[2570]: I0117 00:19:46.358335 2570 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:19:46.358986 kubelet[2570]: I0117 00:19:46.358976 2570 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:19:46.359054 kubelet[2570]: I0117 00:19:46.359048 2570 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:19:46.361239 kubelet[2570]: I0117 00:19:46.359089 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:19:46.362458 kubelet[2570]: I0117 00:19:46.362446 2570 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:19:46.362892 kubelet[2570]: I0117 00:19:46.362882 2570 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:19:46.366867 kubelet[2570]: I0117 00:19:46.366855 2570 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:19:46.366965 kubelet[2570]: I0117 00:19:46.366958 2570 server.go:1289] "Started kubelet" Jan 17 00:19:46.370302 kubelet[2570]: I0117 00:19:46.370281 2570 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:19:46.370957 kubelet[2570]: I0117 00:19:46.370945 2570 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:19:46.377631 kubelet[2570]: I0117 00:19:46.368059 2570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:19:46.382849 kubelet[2570]: I0117 00:19:46.368126 2570 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 
00:19:46.387245 kubelet[2570]: I0117 00:19:46.386422 2570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:19:46.387245 kubelet[2570]: I0117 00:19:46.386613 2570 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:19:46.387245 kubelet[2570]: I0117 00:19:46.386651 2570 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:19:46.387245 kubelet[2570]: I0117 00:19:46.386714 2570 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:19:46.387245 kubelet[2570]: I0117 00:19:46.386792 2570 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:19:46.388204 kubelet[2570]: I0117 00:19:46.388186 2570 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:19:46.389011 kubelet[2570]: E0117 00:19:46.388996 2570 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:19:46.390838 kubelet[2570]: I0117 00:19:46.389779 2570 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:19:46.390913 kubelet[2570]: I0117 00:19:46.390902 2570 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:19:46.407393 kubelet[2570]: I0117 00:19:46.407337 2570 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:19:46.410142 kubelet[2570]: I0117 00:19:46.410109 2570 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 17 00:19:46.410142 kubelet[2570]: I0117 00:19:46.410139 2570 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:19:46.410301 kubelet[2570]: I0117 00:19:46.410165 2570 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:19:46.410301 kubelet[2570]: I0117 00:19:46.410176 2570 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:19:46.410301 kubelet[2570]: E0117 00:19:46.410266 2570 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:19:46.436524 kubelet[2570]: I0117 00:19:46.436423 2570 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:19:46.436651 kubelet[2570]: I0117 00:19:46.436547 2570 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:19:46.436651 kubelet[2570]: I0117 00:19:46.436573 2570 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:19:46.436790 kubelet[2570]: I0117 00:19:46.436768 2570 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:19:46.436809 kubelet[2570]: I0117 00:19:46.436787 2570 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:19:46.436809 kubelet[2570]: I0117 00:19:46.436806 2570 policy_none.go:49] "None policy: Start" Jan 17 00:19:46.436841 kubelet[2570]: I0117 00:19:46.436819 2570 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:19:46.436841 kubelet[2570]: I0117 00:19:46.436833 2570 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:19:46.436948 kubelet[2570]: I0117 00:19:46.436932 2570 state_mem.go:75] "Updated machine memory state" Jan 17 00:19:46.442777 kubelet[2570]: E0117 00:19:46.442743 2570 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:19:46.443225 kubelet[2570]: I0117 
00:19:46.442966 2570 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:19:46.443225 kubelet[2570]: I0117 00:19:46.442986 2570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:19:46.443225 kubelet[2570]: I0117 00:19:46.443167 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:19:46.446311 kubelet[2570]: E0117 00:19:46.446286 2570 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:19:46.513274 kubelet[2570]: I0117 00:19:46.512330 2570 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.513274 kubelet[2570]: I0117 00:19:46.512415 2570 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.513274 kubelet[2570]: I0117 00:19:46.512761 2570 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.522138 kubelet[2570]: E0117 00:19:46.522080 2570 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-9d03cc5a8b\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.523647 kubelet[2570]: E0117 00:19:46.523407 2570 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-9d03cc5a8b\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.524304 kubelet[2570]: E0117 00:19:46.524255 2570 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.552978 kubelet[2570]: I0117 00:19:46.552821 2570 kubelet_node_status.go:75] "Attempting to 
register node" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.565868 kubelet[2570]: I0117 00:19:46.565797 2570 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.566037 kubelet[2570]: I0117 00:19:46.565919 2570 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.588562 kubelet[2570]: I0117 00:19:46.588475 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64e04ecaef7511b3cae71e6f0c888a69-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"64e04ecaef7511b3cae71e6f0c888a69\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.588562 kubelet[2570]: I0117 00:19:46.588528 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64e04ecaef7511b3cae71e6f0c888a69-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"64e04ecaef7511b3cae71e6f0c888a69\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.588562 kubelet[2570]: I0117 00:19:46.588557 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a30811914ccf212d5bd374a66fe9846b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"a30811914ccf212d5bd374a66fe9846b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.588829 kubelet[2570]: I0117 00:19:46.588600 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a30811914ccf212d5bd374a66fe9846b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b\" (UID: 
\"a30811914ccf212d5bd374a66fe9846b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.588829 kubelet[2570]: I0117 00:19:46.588624 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a30811914ccf212d5bd374a66fe9846b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"a30811914ccf212d5bd374a66fe9846b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.588829 kubelet[2570]: I0117 00:19:46.588659 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a30811914ccf212d5bd374a66fe9846b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"a30811914ccf212d5bd374a66fe9846b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.588829 kubelet[2570]: I0117 00:19:46.588683 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a30811914ccf212d5bd374a66fe9846b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"a30811914ccf212d5bd374a66fe9846b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.588829 kubelet[2570]: I0117 00:19:46.588705 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ad47e16ec78025bbb12a794cd3b3cd9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"0ad47e16ec78025bbb12a794cd3b3cd9\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.589164 kubelet[2570]: I0117 00:19:46.588727 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64e04ecaef7511b3cae71e6f0c888a69-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-9d03cc5a8b\" (UID: \"64e04ecaef7511b3cae71e6f0c888a69\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:46.752107 sudo[2608]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 00:19:46.752875 sudo[2608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 00:19:47.286435 sudo[2608]: pam_unix(sudo:session): session closed for user root Jan 17 00:19:47.363379 kubelet[2570]: I0117 00:19:47.363319 2570 apiserver.go:52] "Watching apiserver" Jan 17 00:19:47.387575 kubelet[2570]: I0117 00:19:47.387517 2570 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:19:47.437697 kubelet[2570]: I0117 00:19:47.437650 2570 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:47.453866 kubelet[2570]: E0117 00:19:47.453734 2570 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-9d03cc5a8b\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-9d03cc5a8b" Jan 17 00:19:47.475231 kubelet[2570]: I0117 00:19:47.474702 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9d03cc5a8b" podStartSLOduration=3.474686287 podStartE2EDuration="3.474686287s" podCreationTimestamp="2026-01-17 00:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:19:47.46616659 +0000 UTC m=+1.172536768" watchObservedRunningTime="2026-01-17 00:19:47.474686287 +0000 UTC m=+1.181056465" Jan 17 00:19:47.482309 kubelet[2570]: I0117 00:19:47.482256 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4081-3-6-n-9d03cc5a8b" podStartSLOduration=3.482241503 podStartE2EDuration="3.482241503s" podCreationTimestamp="2026-01-17 00:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:19:47.475128047 +0000 UTC m=+1.181498225" watchObservedRunningTime="2026-01-17 00:19:47.482241503 +0000 UTC m=+1.188611681" Jan 17 00:19:47.494225 kubelet[2570]: I0117 00:19:47.491719 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-9d03cc5a8b" podStartSLOduration=3.491705371 podStartE2EDuration="3.491705371s" podCreationTimestamp="2026-01-17 00:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:19:47.482447933 +0000 UTC m=+1.188818111" watchObservedRunningTime="2026-01-17 00:19:47.491705371 +0000 UTC m=+1.198075549" Jan 17 00:19:49.037570 sudo[1701]: pam_unix(sudo:session): session closed for user root Jan 17 00:19:49.160850 sshd[1698]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:49.168724 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:19:49.168803 systemd[1]: sshd@6-46.62.250.181:22-20.161.92.111:38824.service: Deactivated successfully. Jan 17 00:19:49.175522 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:19:49.175984 systemd[1]: session-7.scope: Consumed 7.506s CPU time, 158.7M memory peak, 0B memory swap peak. Jan 17 00:19:49.177910 systemd-logind[1497]: Removed session 7. 
Jan 17 00:19:52.449817 kubelet[2570]: I0117 00:19:52.449777 2570 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:19:52.450580 containerd[1520]: time="2026-01-17T00:19:52.450457712Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:19:52.451253 kubelet[2570]: I0117 00:19:52.450916 2570 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:19:53.491061 systemd[1]: Created slice kubepods-besteffort-pode9ed8c3b_fdc5_470a_87d0_54a967104c5e.slice - libcontainer container kubepods-besteffort-pode9ed8c3b_fdc5_470a_87d0_54a967104c5e.slice. Jan 17 00:19:53.521380 systemd[1]: Created slice kubepods-burstable-podb4f1c98e_013a_4c46_b67f_e4940d22534d.slice - libcontainer container kubepods-burstable-podb4f1c98e_013a_4c46_b67f_e4940d22534d.slice. Jan 17 00:19:53.532525 kubelet[2570]: I0117 00:19:53.532454 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-host-proc-sys-kernel\") pod \"cilium-r5lq5\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.532525 kubelet[2570]: I0117 00:19:53.532487 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9ed8c3b-fdc5-470a-87d0-54a967104c5e-lib-modules\") pod \"kube-proxy-ht2vm\" (UID: \"e9ed8c3b-fdc5-470a-87d0-54a967104c5e\") " pod="kube-system/kube-proxy-ht2vm" Jan 17 00:19:53.532525 kubelet[2570]: I0117 00:19:53.532500 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-cilium-run\") pod \"cilium-r5lq5\" (UID: 
\"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.532525 kubelet[2570]: I0117 00:19:53.532510 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-bpf-maps\") pod \"cilium-r5lq5\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.532525 kubelet[2570]: I0117 00:19:53.532519 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-cni-path\") pod \"cilium-r5lq5\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.532525 kubelet[2570]: I0117 00:19:53.532531 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9ed8c3b-fdc5-470a-87d0-54a967104c5e-xtables-lock\") pod \"kube-proxy-ht2vm\" (UID: \"e9ed8c3b-fdc5-470a-87d0-54a967104c5e\") " pod="kube-system/kube-proxy-ht2vm" Jan 17 00:19:53.533550 kubelet[2570]: I0117 00:19:53.532541 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-hostproc\") pod \"cilium-r5lq5\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.533550 kubelet[2570]: I0117 00:19:53.532553 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4f1c98e-013a-4c46-b67f-e4940d22534d-clustermesh-secrets\") pod \"cilium-r5lq5\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.533550 kubelet[2570]: I0117 00:19:53.532583 2570 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-host-proc-sys-net\") pod \"cilium-r5lq5\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.533550 kubelet[2570]: I0117 00:19:53.532594 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4f1c98e-013a-4c46-b67f-e4940d22534d-hubble-tls\") pod \"cilium-r5lq5\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.533550 kubelet[2570]: I0117 00:19:53.532611 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e9ed8c3b-fdc5-470a-87d0-54a967104c5e-kube-proxy\") pod \"kube-proxy-ht2vm\" (UID: \"e9ed8c3b-fdc5-470a-87d0-54a967104c5e\") " pod="kube-system/kube-proxy-ht2vm" Jan 17 00:19:53.533550 kubelet[2570]: I0117 00:19:53.532662 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-lib-modules\") pod \"cilium-r5lq5\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.533752 kubelet[2570]: I0117 00:19:53.532729 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4f1c98e-013a-4c46-b67f-e4940d22534d-cilium-config-path\") pod \"cilium-r5lq5\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.533752 kubelet[2570]: I0117 00:19:53.532754 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hcnp\" 
(UniqueName: \"kubernetes.io/projected/b4f1c98e-013a-4c46-b67f-e4940d22534d-kube-api-access-7hcnp\") pod \"cilium-r5lq5\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.533752 kubelet[2570]: I0117 00:19:53.532819 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7b88\" (UniqueName: \"kubernetes.io/projected/e9ed8c3b-fdc5-470a-87d0-54a967104c5e-kube-api-access-z7b88\") pod \"kube-proxy-ht2vm\" (UID: \"e9ed8c3b-fdc5-470a-87d0-54a967104c5e\") " pod="kube-system/kube-proxy-ht2vm" Jan 17 00:19:53.533752 kubelet[2570]: I0117 00:19:53.532856 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-cilium-cgroup\") pod \"cilium-r5lq5\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.533752 kubelet[2570]: I0117 00:19:53.532865 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-etc-cni-netd\") pod \"cilium-r5lq5\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.533913 kubelet[2570]: I0117 00:19:53.532876 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-xtables-lock\") pod \"cilium-r5lq5\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") " pod="kube-system/cilium-r5lq5" Jan 17 00:19:53.591318 systemd[1]: Created slice kubepods-besteffort-pod556258f1_69a5_46e0_83a7_afb9985e4a03.slice - libcontainer container kubepods-besteffort-pod556258f1_69a5_46e0_83a7_afb9985e4a03.slice. 
Jan 17 00:19:53.633690 kubelet[2570]: I0117 00:19:53.633628 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/556258f1-69a5-46e0-83a7-afb9985e4a03-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vcw9w\" (UID: \"556258f1-69a5-46e0-83a7-afb9985e4a03\") " pod="kube-system/cilium-operator-6c4d7847fc-vcw9w" Jan 17 00:19:53.633861 kubelet[2570]: I0117 00:19:53.633751 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjg4q\" (UniqueName: \"kubernetes.io/projected/556258f1-69a5-46e0-83a7-afb9985e4a03-kube-api-access-cjg4q\") pod \"cilium-operator-6c4d7847fc-vcw9w\" (UID: \"556258f1-69a5-46e0-83a7-afb9985e4a03\") " pod="kube-system/cilium-operator-6c4d7847fc-vcw9w" Jan 17 00:19:53.813012 containerd[1520]: time="2026-01-17T00:19:53.812828637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ht2vm,Uid:e9ed8c3b-fdc5-470a-87d0-54a967104c5e,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:53.828440 containerd[1520]: time="2026-01-17T00:19:53.827932850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r5lq5,Uid:b4f1c98e-013a-4c46-b67f-e4940d22534d,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:53.851719 containerd[1520]: time="2026-01-17T00:19:53.851576669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:19:53.851881 containerd[1520]: time="2026-01-17T00:19:53.851781239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:19:53.851881 containerd[1520]: time="2026-01-17T00:19:53.851837069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:19:53.852094 containerd[1520]: time="2026-01-17T00:19:53.852031450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:19:53.874238 containerd[1520]: time="2026-01-17T00:19:53.873923598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:19:53.877878 containerd[1520]: time="2026-01-17T00:19:53.877301781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:19:53.877878 containerd[1520]: time="2026-01-17T00:19:53.877335301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:19:53.877878 containerd[1520]: time="2026-01-17T00:19:53.877630151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:19:53.890462 systemd[1]: Started cri-containerd-9a269213201a0207818967541c202eff4f63d2f145dd2d2cb357d9f657e914b7.scope - libcontainer container 9a269213201a0207818967541c202eff4f63d2f145dd2d2cb357d9f657e914b7.
Jan 17 00:19:53.895877 containerd[1520]: time="2026-01-17T00:19:53.895828236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vcw9w,Uid:556258f1-69a5-46e0-83a7-afb9985e4a03,Namespace:kube-system,Attempt:0,}"
Jan 17 00:19:53.921459 systemd[1]: Started cri-containerd-4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1.scope - libcontainer container 4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1.
Jan 17 00:19:53.994242 containerd[1520]: time="2026-01-17T00:19:53.986824342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:19:53.994242 containerd[1520]: time="2026-01-17T00:19:53.986893882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:19:53.994242 containerd[1520]: time="2026-01-17T00:19:53.987032122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:19:53.994242 containerd[1520]: time="2026-01-17T00:19:53.987177212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:19:53.995265 containerd[1520]: time="2026-01-17T00:19:53.995175459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r5lq5,Uid:b4f1c98e-013a-4c46-b67f-e4940d22534d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\""
Jan 17 00:19:54.000826 containerd[1520]: time="2026-01-17T00:19:54.000519543Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 17 00:19:54.003103 containerd[1520]: time="2026-01-17T00:19:54.003066895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ht2vm,Uid:e9ed8c3b-fdc5-470a-87d0-54a967104c5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a269213201a0207818967541c202eff4f63d2f145dd2d2cb357d9f657e914b7\""
Jan 17 00:19:54.013037 containerd[1520]: time="2026-01-17T00:19:54.012996454Z" level=info msg="CreateContainer within sandbox \"9a269213201a0207818967541c202eff4f63d2f145dd2d2cb357d9f657e914b7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 00:19:54.024436 systemd[1]: Started cri-containerd-59416fa972822b30beca5c5f4c059ffd57d0186a526dbe06d683876ad2036bc7.scope - libcontainer container 59416fa972822b30beca5c5f4c059ffd57d0186a526dbe06d683876ad2036bc7.
Jan 17 00:19:54.037891 containerd[1520]: time="2026-01-17T00:19:54.037816174Z" level=info msg="CreateContainer within sandbox \"9a269213201a0207818967541c202eff4f63d2f145dd2d2cb357d9f657e914b7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8cf7408d17dcd0cd0f178291c0c1ec3d9453294d4513050ee88c0863504329d8\""
Jan 17 00:19:54.039492 containerd[1520]: time="2026-01-17T00:19:54.039300556Z" level=info msg="StartContainer for \"8cf7408d17dcd0cd0f178291c0c1ec3d9453294d4513050ee88c0863504329d8\""
Jan 17 00:19:54.077342 systemd[1]: Started cri-containerd-8cf7408d17dcd0cd0f178291c0c1ec3d9453294d4513050ee88c0863504329d8.scope - libcontainer container 8cf7408d17dcd0cd0f178291c0c1ec3d9453294d4513050ee88c0863504329d8.
Jan 17 00:19:54.078783 containerd[1520]: time="2026-01-17T00:19:54.078552298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vcw9w,Uid:556258f1-69a5-46e0-83a7-afb9985e4a03,Namespace:kube-system,Attempt:0,} returns sandbox id \"59416fa972822b30beca5c5f4c059ffd57d0186a526dbe06d683876ad2036bc7\""
Jan 17 00:19:54.102136 containerd[1520]: time="2026-01-17T00:19:54.102099768Z" level=info msg="StartContainer for \"8cf7408d17dcd0cd0f178291c0c1ec3d9453294d4513050ee88c0863504329d8\" returns successfully"
Jan 17 00:19:54.464305 kubelet[2570]: I0117 00:19:54.462756 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ht2vm" podStartSLOduration=1.462733548 podStartE2EDuration="1.462733548s" podCreationTimestamp="2026-01-17 00:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:19:54.462401518 +0000 UTC m=+8.168771736" watchObservedRunningTime="2026-01-17 00:19:54.462733548 +0000 UTC m=+8.169103766"
Jan 17 00:19:54.739460 update_engine[1500]: I20260117 00:19:54.739362 1500 update_attempter.cc:509] Updating boot flags...
Jan 17 00:19:54.829324 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2945)
Jan 17 00:19:54.914424 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2947)
Jan 17 00:19:54.957311 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2947)
Jan 17 00:19:58.900664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3957169692.mount: Deactivated successfully.
Jan 17 00:20:00.181645 containerd[1520]: time="2026-01-17T00:20:00.181582599Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:00.182575 containerd[1520]: time="2026-01-17T00:20:00.182475402Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 17 00:20:00.183413 containerd[1520]: time="2026-01-17T00:20:00.183244057Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:00.184300 containerd[1520]: time="2026-01-17T00:20:00.184282699Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.183725496s"
Jan 17 00:20:00.184416 containerd[1520]: time="2026-01-17T00:20:00.184348999Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 17 00:20:00.185496 containerd[1520]: time="2026-01-17T00:20:00.185414491Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 17 00:20:00.187699 containerd[1520]: time="2026-01-17T00:20:00.187676005Z" level=info msg="CreateContainer within sandbox \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 00:20:00.204412 containerd[1520]: time="2026-01-17T00:20:00.204370786Z" level=info msg="CreateContainer within sandbox \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17\""
Jan 17 00:20:00.205479 containerd[1520]: time="2026-01-17T00:20:00.205449407Z" level=info msg="StartContainer for \"2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17\""
Jan 17 00:20:00.235335 systemd[1]: Started cri-containerd-2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17.scope - libcontainer container 2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17.
Jan 17 00:20:00.254727 containerd[1520]: time="2026-01-17T00:20:00.254691074Z" level=info msg="StartContainer for \"2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17\" returns successfully"
Jan 17 00:20:00.266146 systemd[1]: cri-containerd-2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17.scope: Deactivated successfully.
Jan 17 00:20:00.389420 containerd[1520]: time="2026-01-17T00:20:00.389068311Z" level=info msg="shim disconnected" id=2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17 namespace=k8s.io
Jan 17 00:20:00.389420 containerd[1520]: time="2026-01-17T00:20:00.389130941Z" level=warning msg="cleaning up after shim disconnected" id=2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17 namespace=k8s.io
Jan 17 00:20:00.389420 containerd[1520]: time="2026-01-17T00:20:00.389144001Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:20:00.477631 containerd[1520]: time="2026-01-17T00:20:00.477479867Z" level=info msg="CreateContainer within sandbox \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 00:20:00.503717 containerd[1520]: time="2026-01-17T00:20:00.503611731Z" level=info msg="CreateContainer within sandbox \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194\""
Jan 17 00:20:00.506321 containerd[1520]: time="2026-01-17T00:20:00.505530586Z" level=info msg="StartContainer for \"3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194\""
Jan 17 00:20:00.552466 systemd[1]: Started cri-containerd-3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194.scope - libcontainer container 3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194.
Jan 17 00:20:00.618497 containerd[1520]: time="2026-01-17T00:20:00.618394627Z" level=info msg="StartContainer for \"3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194\" returns successfully"
Jan 17 00:20:00.643595 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:20:00.644093 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:20:00.644332 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:20:00.655784 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:20:00.656156 systemd[1]: cri-containerd-3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194.scope: Deactivated successfully.
Jan 17 00:20:00.690264 containerd[1520]: time="2026-01-17T00:20:00.690010994Z" level=info msg="shim disconnected" id=3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194 namespace=k8s.io
Jan 17 00:20:00.690264 containerd[1520]: time="2026-01-17T00:20:00.690067854Z" level=warning msg="cleaning up after shim disconnected" id=3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194 namespace=k8s.io
Jan 17 00:20:00.690264 containerd[1520]: time="2026-01-17T00:20:00.690074514Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:20:00.696736 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:20:00.708927 containerd[1520]: time="2026-01-17T00:20:00.708783460Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:20:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 00:20:01.196438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17-rootfs.mount: Deactivated successfully.
Jan 17 00:20:01.480580 containerd[1520]: time="2026-01-17T00:20:01.480350780Z" level=info msg="CreateContainer within sandbox \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 00:20:01.511540 containerd[1520]: time="2026-01-17T00:20:01.511347593Z" level=info msg="CreateContainer within sandbox \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e\""
Jan 17 00:20:01.515087 containerd[1520]: time="2026-01-17T00:20:01.512921223Z" level=info msg="StartContainer for \"cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e\""
Jan 17 00:20:01.577454 systemd[1]: Started cri-containerd-cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e.scope - libcontainer container cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e.
Jan 17 00:20:01.633398 containerd[1520]: time="2026-01-17T00:20:01.633311130Z" level=info msg="StartContainer for \"cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e\" returns successfully"
Jan 17 00:20:01.643390 systemd[1]: cri-containerd-cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e.scope: Deactivated successfully.
Jan 17 00:20:01.684457 containerd[1520]: time="2026-01-17T00:20:01.684141911Z" level=info msg="shim disconnected" id=cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e namespace=k8s.io
Jan 17 00:20:01.684457 containerd[1520]: time="2026-01-17T00:20:01.684206860Z" level=warning msg="cleaning up after shim disconnected" id=cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e namespace=k8s.io
Jan 17 00:20:01.684457 containerd[1520]: time="2026-01-17T00:20:01.684260920Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:20:02.197572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e-rootfs.mount: Deactivated successfully.
Jan 17 00:20:02.495455 containerd[1520]: time="2026-01-17T00:20:02.495113785Z" level=info msg="CreateContainer within sandbox \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 00:20:02.550358 containerd[1520]: time="2026-01-17T00:20:02.550294292Z" level=info msg="CreateContainer within sandbox \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04\""
Jan 17 00:20:02.551108 containerd[1520]: time="2026-01-17T00:20:02.550924798Z" level=info msg="StartContainer for \"910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04\""
Jan 17 00:20:02.600556 systemd[1]: Started cri-containerd-910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04.scope - libcontainer container 910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04.
Jan 17 00:20:02.633402 systemd[1]: cri-containerd-910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04.scope: Deactivated successfully.
Jan 17 00:20:02.638490 containerd[1520]: time="2026-01-17T00:20:02.638439386Z" level=info msg="StartContainer for \"910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04\" returns successfully"
Jan 17 00:20:02.641003 containerd[1520]: time="2026-01-17T00:20:02.637371532Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4f1c98e_013a_4c46_b67f_e4940d22534d.slice/cri-containerd-910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04.scope/memory.events\": no such file or directory"
Jan 17 00:20:02.670917 containerd[1520]: time="2026-01-17T00:20:02.670866935Z" level=info msg="shim disconnected" id=910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04 namespace=k8s.io
Jan 17 00:20:02.671227 containerd[1520]: time="2026-01-17T00:20:02.671157024Z" level=warning msg="cleaning up after shim disconnected" id=910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04 namespace=k8s.io
Jan 17 00:20:02.671227 containerd[1520]: time="2026-01-17T00:20:02.671167224Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:20:02.684245 containerd[1520]: time="2026-01-17T00:20:02.684039173Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:20:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 00:20:02.957865 containerd[1520]: time="2026-01-17T00:20:02.957507098Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:02.958389 containerd[1520]: time="2026-01-17T00:20:02.958342923Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 17 00:20:02.959237 containerd[1520]: time="2026-01-17T00:20:02.959068968Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:02.960394 containerd[1520]: time="2026-01-17T00:20:02.960005003Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.774571442s"
Jan 17 00:20:02.960394 containerd[1520]: time="2026-01-17T00:20:02.960034812Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 17 00:20:02.963589 containerd[1520]: time="2026-01-17T00:20:02.963556731Z" level=info msg="CreateContainer within sandbox \"59416fa972822b30beca5c5f4c059ffd57d0186a526dbe06d683876ad2036bc7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 17 00:20:02.973510 containerd[1520]: time="2026-01-17T00:20:02.973420219Z" level=info msg="CreateContainer within sandbox \"59416fa972822b30beca5c5f4c059ffd57d0186a526dbe06d683876ad2036bc7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\""
Jan 17 00:20:02.974249 containerd[1520]: time="2026-01-17T00:20:02.974173745Z" level=info msg="StartContainer for \"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\""
Jan 17 00:20:02.996357 systemd[1]: Started cri-containerd-a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86.scope - libcontainer container a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86.
Jan 17 00:20:03.017882 containerd[1520]: time="2026-01-17T00:20:03.017839701Z" level=info msg="StartContainer for \"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\" returns successfully"
Jan 17 00:20:03.197184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416855081.mount: Deactivated successfully.
Jan 17 00:20:03.199330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04-rootfs.mount: Deactivated successfully.
Jan 17 00:20:03.507409 kubelet[2570]: I0117 00:20:03.505801 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vcw9w" podStartSLOduration=1.625128361 podStartE2EDuration="10.505782432s" podCreationTimestamp="2026-01-17 00:19:53 +0000 UTC" firstStartedPulling="2026-01-17 00:19:54.079829929 +0000 UTC m=+7.786200107" lastFinishedPulling="2026-01-17 00:20:02.96048399 +0000 UTC m=+16.666854178" observedRunningTime="2026-01-17 00:20:03.504157632 +0000 UTC m=+17.210527850" watchObservedRunningTime="2026-01-17 00:20:03.505782432 +0000 UTC m=+17.212152650"
Jan 17 00:20:03.512190 containerd[1520]: time="2026-01-17T00:20:03.512099745Z" level=info msg="CreateContainer within sandbox \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 00:20:03.556095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907141949.mount: Deactivated successfully.
Jan 17 00:20:03.560191 containerd[1520]: time="2026-01-17T00:20:03.560132859Z" level=info msg="CreateContainer within sandbox \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\""
Jan 17 00:20:03.561408 containerd[1520]: time="2026-01-17T00:20:03.561362101Z" level=info msg="StartContainer for \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\""
Jan 17 00:20:03.630286 systemd[1]: Started cri-containerd-9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8.scope - libcontainer container 9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8.
Jan 17 00:20:03.705753 containerd[1520]: time="2026-01-17T00:20:03.705656900Z" level=info msg="StartContainer for \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\" returns successfully"
Jan 17 00:20:03.880455 kubelet[2570]: I0117 00:20:03.880035 2570 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 17 00:20:03.910796 systemd[1]: Created slice kubepods-burstable-podb221c8ba_9f8e_4e6c_86fe_2d0b3f3c7efc.slice - libcontainer container kubepods-burstable-podb221c8ba_9f8e_4e6c_86fe_2d0b3f3c7efc.slice.
Jan 17 00:20:03.922447 systemd[1]: Created slice kubepods-burstable-podd71a858e_7706_40fb_8fec_5d61fc44e6c1.slice - libcontainer container kubepods-burstable-podd71a858e_7706_40fb_8fec_5d61fc44e6c1.slice.
Jan 17 00:20:04.012550 kubelet[2570]: I0117 00:20:04.012482 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b221c8ba-9f8e-4e6c-86fe-2d0b3f3c7efc-config-volume\") pod \"coredns-674b8bbfcf-d6dmd\" (UID: \"b221c8ba-9f8e-4e6c-86fe-2d0b3f3c7efc\") " pod="kube-system/coredns-674b8bbfcf-d6dmd"
Jan 17 00:20:04.012550 kubelet[2570]: I0117 00:20:04.012558 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbbzf\" (UniqueName: \"kubernetes.io/projected/b221c8ba-9f8e-4e6c-86fe-2d0b3f3c7efc-kube-api-access-gbbzf\") pod \"coredns-674b8bbfcf-d6dmd\" (UID: \"b221c8ba-9f8e-4e6c-86fe-2d0b3f3c7efc\") " pod="kube-system/coredns-674b8bbfcf-d6dmd"
Jan 17 00:20:04.012856 kubelet[2570]: I0117 00:20:04.012576 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d71a858e-7706-40fb-8fec-5d61fc44e6c1-config-volume\") pod \"coredns-674b8bbfcf-94t7l\" (UID: \"d71a858e-7706-40fb-8fec-5d61fc44e6c1\") " pod="kube-system/coredns-674b8bbfcf-94t7l"
Jan 17 00:20:04.012856 kubelet[2570]: I0117 00:20:04.012589 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rjrw\" (UniqueName: \"kubernetes.io/projected/d71a858e-7706-40fb-8fec-5d61fc44e6c1-kube-api-access-8rjrw\") pod \"coredns-674b8bbfcf-94t7l\" (UID: \"d71a858e-7706-40fb-8fec-5d61fc44e6c1\") " pod="kube-system/coredns-674b8bbfcf-94t7l"
Jan 17 00:20:04.218507 containerd[1520]: time="2026-01-17T00:20:04.218407798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d6dmd,Uid:b221c8ba-9f8e-4e6c-86fe-2d0b3f3c7efc,Namespace:kube-system,Attempt:0,}"
Jan 17 00:20:04.227730 containerd[1520]: time="2026-01-17T00:20:04.227486729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-94t7l,Uid:d71a858e-7706-40fb-8fec-5d61fc44e6c1,Namespace:kube-system,Attempt:0,}"
Jan 17 00:20:04.531581 kubelet[2570]: I0117 00:20:04.531482 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r5lq5" podStartSLOduration=5.345982983 podStartE2EDuration="11.531459593s" podCreationTimestamp="2026-01-17 00:19:53 +0000 UTC" firstStartedPulling="2026-01-17 00:19:53.999570783 +0000 UTC m=+7.705940991" lastFinishedPulling="2026-01-17 00:20:00.185047423 +0000 UTC m=+13.891417601" observedRunningTime="2026-01-17 00:20:04.530345349 +0000 UTC m=+18.236715567" watchObservedRunningTime="2026-01-17 00:20:04.531459593 +0000 UTC m=+18.237829811"
Jan 17 00:20:06.734517 systemd-networkd[1395]: cilium_host: Link UP
Jan 17 00:20:06.734913 systemd-networkd[1395]: cilium_net: Link UP
Jan 17 00:20:06.734922 systemd-networkd[1395]: cilium_net: Gained carrier
Jan 17 00:20:06.738652 systemd-networkd[1395]: cilium_host: Gained carrier
Jan 17 00:20:06.966586 systemd-networkd[1395]: cilium_vxlan: Link UP
Jan 17 00:20:06.966612 systemd-networkd[1395]: cilium_vxlan: Gained carrier
Jan 17 00:20:07.225299 kernel: NET: Registered PF_ALG protocol family
Jan 17 00:20:07.378489 systemd-networkd[1395]: cilium_host: Gained IPv6LL
Jan 17 00:20:07.507448 systemd-networkd[1395]: cilium_net: Gained IPv6LL
Jan 17 00:20:08.079483 systemd-networkd[1395]: lxc_health: Link UP
Jan 17 00:20:08.095917 systemd-networkd[1395]: lxc_health: Gained carrier
Jan 17 00:20:08.146372 systemd-networkd[1395]: cilium_vxlan: Gained IPv6LL
Jan 17 00:20:08.270808 systemd-networkd[1395]: lxc2ccf81dccda2: Link UP
Jan 17 00:20:08.276200 kernel: eth0: renamed from tmp64c5d
Jan 17 00:20:08.287053 systemd-networkd[1395]: lxc63c80e3b180b: Link UP
Jan 17 00:20:08.296278 kernel: eth0: renamed from tmp9ca17
Jan 17 00:20:08.307745 systemd-networkd[1395]: lxc2ccf81dccda2: Gained carrier
Jan 17 00:20:08.308088 systemd-networkd[1395]: lxc63c80e3b180b: Gained carrier
Jan 17 00:20:09.490526 systemd-networkd[1395]: lxc2ccf81dccda2: Gained IPv6LL
Jan 17 00:20:09.874398 systemd-networkd[1395]: lxc_health: Gained IPv6LL
Jan 17 00:20:10.002368 systemd-networkd[1395]: lxc63c80e3b180b: Gained IPv6LL
Jan 17 00:20:10.715750 containerd[1520]: time="2026-01-17T00:20:10.714665101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:20:10.715750 containerd[1520]: time="2026-01-17T00:20:10.714734161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:20:10.715750 containerd[1520]: time="2026-01-17T00:20:10.714755071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:10.715750 containerd[1520]: time="2026-01-17T00:20:10.714819311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:10.739336 systemd[1]: Started cri-containerd-64c5d09063c732879586a3897d81c44fd0e3decdff1e7e8222da32d47eaaadee.scope - libcontainer container 64c5d09063c732879586a3897d81c44fd0e3decdff1e7e8222da32d47eaaadee.
Jan 17 00:20:10.777276 containerd[1520]: time="2026-01-17T00:20:10.777071581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:20:10.777389 containerd[1520]: time="2026-01-17T00:20:10.777126801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:20:10.777389 containerd[1520]: time="2026-01-17T00:20:10.777281591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:10.777467 containerd[1520]: time="2026-01-17T00:20:10.777443971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:10.798341 systemd[1]: Started cri-containerd-9ca17f983e00b46716b9c24f8116258592968fbe42253efba4b431d4bf88ce3a.scope - libcontainer container 9ca17f983e00b46716b9c24f8116258592968fbe42253efba4b431d4bf88ce3a.
Jan 17 00:20:10.800553 containerd[1520]: time="2026-01-17T00:20:10.800517243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d6dmd,Uid:b221c8ba-9f8e-4e6c-86fe-2d0b3f3c7efc,Namespace:kube-system,Attempt:0,} returns sandbox id \"64c5d09063c732879586a3897d81c44fd0e3decdff1e7e8222da32d47eaaadee\""
Jan 17 00:20:10.805548 containerd[1520]: time="2026-01-17T00:20:10.805360546Z" level=info msg="CreateContainer within sandbox \"64c5d09063c732879586a3897d81c44fd0e3decdff1e7e8222da32d47eaaadee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 00:20:10.820895 containerd[1520]: time="2026-01-17T00:20:10.820871004Z" level=info msg="CreateContainer within sandbox \"64c5d09063c732879586a3897d81c44fd0e3decdff1e7e8222da32d47eaaadee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c0ac902d48b1cf6d20ba58eea229986d2089bc10f849a91b6343f543d420fb4\""
Jan 17 00:20:10.822242 containerd[1520]: time="2026-01-17T00:20:10.821635062Z" level=info msg="StartContainer for \"2c0ac902d48b1cf6d20ba58eea229986d2089bc10f849a91b6343f543d420fb4\""
Jan 17 00:20:10.857371 containerd[1520]: time="2026-01-17T00:20:10.857080792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-94t7l,Uid:d71a858e-7706-40fb-8fec-5d61fc44e6c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ca17f983e00b46716b9c24f8116258592968fbe42253efba4b431d4bf88ce3a\""
Jan 17 00:20:10.867472 containerd[1520]: time="2026-01-17T00:20:10.867416298Z" level=info msg="CreateContainer within sandbox \"9ca17f983e00b46716b9c24f8116258592968fbe42253efba4b431d4bf88ce3a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 00:20:10.870334 systemd[1]: Started cri-containerd-2c0ac902d48b1cf6d20ba58eea229986d2089bc10f849a91b6343f543d420fb4.scope - libcontainer container 2c0ac902d48b1cf6d20ba58eea229986d2089bc10f849a91b6343f543d420fb4.
Jan 17 00:20:10.885169 containerd[1520]: time="2026-01-17T00:20:10.885144388Z" level=info msg="CreateContainer within sandbox \"9ca17f983e00b46716b9c24f8116258592968fbe42253efba4b431d4bf88ce3a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a1cb1637d0d7fd4f31eff2e7a47d73860d4ad6ad989ded2ec37a2e07661cad2\""
Jan 17 00:20:10.886477 containerd[1520]: time="2026-01-17T00:20:10.886375674Z" level=info msg="StartContainer for \"6a1cb1637d0d7fd4f31eff2e7a47d73860d4ad6ad989ded2ec37a2e07661cad2\""
Jan 17 00:20:10.915822 containerd[1520]: time="2026-01-17T00:20:10.915788635Z" level=info msg="StartContainer for \"2c0ac902d48b1cf6d20ba58eea229986d2089bc10f849a91b6343f543d420fb4\" returns successfully"
Jan 17 00:20:10.925333 systemd[1]: Started cri-containerd-6a1cb1637d0d7fd4f31eff2e7a47d73860d4ad6ad989ded2ec37a2e07661cad2.scope - libcontainer container 6a1cb1637d0d7fd4f31eff2e7a47d73860d4ad6ad989ded2ec37a2e07661cad2.
Jan 17 00:20:10.954383 containerd[1520]: time="2026-01-17T00:20:10.953903337Z" level=info msg="StartContainer for \"6a1cb1637d0d7fd4f31eff2e7a47d73860d4ad6ad989ded2ec37a2e07661cad2\" returns successfully"
Jan 17 00:20:11.550343 kubelet[2570]: I0117 00:20:11.550139 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-94t7l" podStartSLOduration=18.550119626 podStartE2EDuration="18.550119626s" podCreationTimestamp="2026-01-17 00:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:11.549954527 +0000 UTC m=+25.256324745" watchObservedRunningTime="2026-01-17 00:20:11.550119626 +0000 UTC m=+25.256489844"
Jan 17 00:20:11.573681 kubelet[2570]: I0117 00:20:11.571308 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d6dmd" podStartSLOduration=18.571287831 podStartE2EDuration="18.571287831s" podCreationTimestamp="2026-01-17 00:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:11.565542508 +0000 UTC m=+25.271912726" watchObservedRunningTime="2026-01-17 00:20:11.571287831 +0000 UTC m=+25.277658049"
Jan 17 00:20:11.727162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1879679499.mount: Deactivated successfully.
Jan 17 00:21:21.700621 systemd[1]: Started sshd@7-46.62.250.181:22-20.161.92.111:60818.service - OpenSSH per-connection server daemon (20.161.92.111:60818).
Jan 17 00:21:22.468236 sshd[3971]: Accepted publickey for core from 20.161.92.111 port 60818 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:21:22.470541 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:22.477048 systemd-logind[1497]: New session 8 of user core.
Jan 17 00:21:22.483459 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 00:21:23.063876 sshd[3971]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:23.068895 systemd[1]: sshd@7-46.62.250.181:22-20.161.92.111:60818.service: Deactivated successfully.
Jan 17 00:21:23.072400 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 00:21:23.073724 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit.
Jan 17 00:21:23.074824 systemd-logind[1497]: Removed session 8.
Jan 17 00:21:28.211643 systemd[1]: Started sshd@8-46.62.250.181:22-20.161.92.111:53740.service - OpenSSH per-connection server daemon (20.161.92.111:53740).
Jan 17 00:21:28.975764 sshd[3987]: Accepted publickey for core from 20.161.92.111 port 53740 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:21:28.977201 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:28.984992 systemd-logind[1497]: New session 9 of user core.
Jan 17 00:21:28.992436 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 00:21:29.600416 sshd[3987]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:29.606708 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit.
Jan 17 00:21:29.608383 systemd[1]: sshd@8-46.62.250.181:22-20.161.92.111:53740.service: Deactivated successfully.
Jan 17 00:21:29.612203 systemd[1]: session-9.scope: Deactivated successfully.
Jan 17 00:21:29.614201 systemd-logind[1497]: Removed session 9.
Jan 17 00:21:34.741700 systemd[1]: Started sshd@9-46.62.250.181:22-20.161.92.111:42444.service - OpenSSH per-connection server daemon (20.161.92.111:42444).
Jan 17 00:21:35.514203 sshd[4000]: Accepted publickey for core from 20.161.92.111 port 42444 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:21:35.516295 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:35.523649 systemd-logind[1497]: New session 10 of user core.
Jan 17 00:21:35.531435 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 00:21:36.141411 sshd[4000]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:36.148709 systemd[1]: sshd@9-46.62.250.181:22-20.161.92.111:42444.service: Deactivated successfully.
Jan 17 00:21:36.153730 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 00:21:36.155190 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit.
Jan 17 00:21:36.157207 systemd-logind[1497]: Removed session 10.
Jan 17 00:21:41.279492 systemd[1]: Started sshd@10-46.62.250.181:22-20.161.92.111:42448.service - OpenSSH per-connection server daemon (20.161.92.111:42448).
Jan 17 00:21:42.057698 sshd[4015]: Accepted publickey for core from 20.161.92.111 port 42448 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:21:42.060983 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:42.071306 systemd-logind[1497]: New session 11 of user core.
Jan 17 00:21:42.076466 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 00:21:42.690571 sshd[4015]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:42.695780 systemd[1]: sshd@10-46.62.250.181:22-20.161.92.111:42448.service: Deactivated successfully.
Jan 17 00:21:42.699735 systemd[1]: session-11.scope: Deactivated successfully.
Jan 17 00:21:42.702571 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit.
Jan 17 00:21:42.705760 systemd-logind[1497]: Removed session 11.
Jan 17 00:21:47.839741 systemd[1]: Started sshd@11-46.62.250.181:22-20.161.92.111:47956.service - OpenSSH per-connection server daemon (20.161.92.111:47956).
Jan 17 00:21:48.607074 sshd[4031]: Accepted publickey for core from 20.161.92.111 port 47956 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:21:48.609921 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:48.618283 systemd-logind[1497]: New session 12 of user core.
Jan 17 00:21:48.631450 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 00:21:49.238032 sshd[4031]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:49.243112 systemd[1]: sshd@11-46.62.250.181:22-20.161.92.111:47956.service: Deactivated successfully.
Jan 17 00:21:49.246877 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 00:21:49.249773 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit.
Jan 17 00:21:49.252423 systemd-logind[1497]: Removed session 12.
Jan 17 00:21:49.377624 systemd[1]: Started sshd@12-46.62.250.181:22-20.161.92.111:47964.service - OpenSSH per-connection server daemon (20.161.92.111:47964).
Jan 17 00:21:50.150539 sshd[4045]: Accepted publickey for core from 20.161.92.111 port 47964 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:21:50.153331 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:50.162244 systemd-logind[1497]: New session 13 of user core.
Jan 17 00:21:50.173545 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 00:21:50.847430 sshd[4045]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:50.853806 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit.
Jan 17 00:21:50.855189 systemd[1]: sshd@12-46.62.250.181:22-20.161.92.111:47964.service: Deactivated successfully.
Jan 17 00:21:50.859013 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 00:21:50.860721 systemd-logind[1497]: Removed session 13.
Jan 17 00:21:50.984605 systemd[1]: Started sshd@13-46.62.250.181:22-20.161.92.111:47972.service - OpenSSH per-connection server daemon (20.161.92.111:47972).
Jan 17 00:21:51.748489 sshd[4055]: Accepted publickey for core from 20.161.92.111 port 47972 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:21:51.750764 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:51.757810 systemd-logind[1497]: New session 14 of user core.
Jan 17 00:21:51.769454 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 00:21:52.375144 sshd[4055]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:52.380263 systemd[1]: sshd@13-46.62.250.181:22-20.161.92.111:47972.service: Deactivated successfully.
Jan 17 00:21:52.385169 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 00:21:52.386686 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit.
Jan 17 00:21:52.387907 systemd-logind[1497]: Removed session 14.
Jan 17 00:21:57.510941 systemd[1]: Started sshd@14-46.62.250.181:22-20.161.92.111:45004.service - OpenSSH per-connection server daemon (20.161.92.111:45004).
Jan 17 00:21:58.286255 sshd[4071]: Accepted publickey for core from 20.161.92.111 port 45004 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:21:58.288834 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:58.298375 systemd-logind[1497]: New session 15 of user core.
Jan 17 00:21:58.310452 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 00:21:58.919787 sshd[4071]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:58.926801 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit.
Jan 17 00:21:58.927951 systemd[1]: sshd@14-46.62.250.181:22-20.161.92.111:45004.service: Deactivated successfully.
Jan 17 00:21:58.932781 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 00:21:58.934674 systemd-logind[1497]: Removed session 15.
Jan 17 00:21:59.059688 systemd[1]: Started sshd@15-46.62.250.181:22-20.161.92.111:45018.service - OpenSSH per-connection server daemon (20.161.92.111:45018).
Jan 17 00:21:59.832790 sshd[4084]: Accepted publickey for core from 20.161.92.111 port 45018 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:21:59.835586 sshd[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:59.842670 systemd-logind[1497]: New session 16 of user core.
Jan 17 00:21:59.850446 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 00:22:00.490989 sshd[4084]: pam_unix(sshd:session): session closed for user core
Jan 17 00:22:00.497612 systemd[1]: sshd@15-46.62.250.181:22-20.161.92.111:45018.service: Deactivated successfully.
Jan 17 00:22:00.502297 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 00:22:00.503508 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit.
Jan 17 00:22:00.505400 systemd-logind[1497]: Removed session 16.
Jan 17 00:22:00.629630 systemd[1]: Started sshd@16-46.62.250.181:22-20.161.92.111:45024.service - OpenSSH per-connection server daemon (20.161.92.111:45024).
Jan 17 00:22:01.399606 sshd[4095]: Accepted publickey for core from 20.161.92.111 port 45024 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:22:01.401820 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:22:01.409298 systemd-logind[1497]: New session 17 of user core.
Jan 17 00:22:01.414432 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 00:22:02.705117 sshd[4095]: pam_unix(sshd:session): session closed for user core
Jan 17 00:22:02.711869 systemd[1]: sshd@16-46.62.250.181:22-20.161.92.111:45024.service: Deactivated successfully.
Jan 17 00:22:02.717306 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 00:22:02.718813 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit.
Jan 17 00:22:02.720687 systemd-logind[1497]: Removed session 17.
Jan 17 00:22:02.844680 systemd[1]: Started sshd@17-46.62.250.181:22-20.161.92.111:35300.service - OpenSSH per-connection server daemon (20.161.92.111:35300).
Jan 17 00:22:03.612137 sshd[4113]: Accepted publickey for core from 20.161.92.111 port 35300 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:22:03.615002 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:22:03.623205 systemd-logind[1497]: New session 18 of user core.
Jan 17 00:22:03.632460 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 00:22:04.403848 sshd[4113]: pam_unix(sshd:session): session closed for user core
Jan 17 00:22:04.410037 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit.
Jan 17 00:22:04.411461 systemd[1]: sshd@17-46.62.250.181:22-20.161.92.111:35300.service: Deactivated successfully.
Jan 17 00:22:04.418982 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 00:22:04.421946 systemd-logind[1497]: Removed session 18.
Jan 17 00:22:04.541619 systemd[1]: Started sshd@18-46.62.250.181:22-20.161.92.111:35304.service - OpenSSH per-connection server daemon (20.161.92.111:35304).
Jan 17 00:22:05.314153 sshd[4124]: Accepted publickey for core from 20.161.92.111 port 35304 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:22:05.317198 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:22:05.326032 systemd-logind[1497]: New session 19 of user core.
Jan 17 00:22:05.331448 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:22:05.949042 sshd[4124]: pam_unix(sshd:session): session closed for user core
Jan 17 00:22:05.957145 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:22:05.958693 systemd[1]: sshd@18-46.62.250.181:22-20.161.92.111:35304.service: Deactivated successfully.
Jan 17 00:22:05.964502 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 00:22:05.966772 systemd-logind[1497]: Removed session 19.
Jan 17 00:22:11.090619 systemd[1]: Started sshd@19-46.62.250.181:22-20.161.92.111:35314.service - OpenSSH per-connection server daemon (20.161.92.111:35314).
Jan 17 00:22:11.838844 sshd[4139]: Accepted publickey for core from 20.161.92.111 port 35314 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:22:11.841946 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:22:11.851319 systemd-logind[1497]: New session 20 of user core.
Jan 17 00:22:11.855449 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 00:22:12.471791 sshd[4139]: pam_unix(sshd:session): session closed for user core
Jan 17 00:22:12.478434 systemd[1]: sshd@19-46.62.250.181:22-20.161.92.111:35314.service: Deactivated successfully.
Jan 17 00:22:12.483574 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 00:22:12.484917 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit.
Jan 17 00:22:12.486785 systemd-logind[1497]: Removed session 20.
Jan 17 00:22:17.609667 systemd[1]: Started sshd@20-46.62.250.181:22-20.161.92.111:42924.service - OpenSSH per-connection server daemon (20.161.92.111:42924).
Jan 17 00:22:18.382157 sshd[4152]: Accepted publickey for core from 20.161.92.111 port 42924 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:22:18.384956 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:22:18.393669 systemd-logind[1497]: New session 21 of user core.
Jan 17 00:22:18.402432 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 00:22:19.015192 sshd[4152]: pam_unix(sshd:session): session closed for user core
Jan 17 00:22:19.021650 systemd[1]: sshd@20-46.62.250.181:22-20.161.92.111:42924.service: Deactivated successfully.
Jan 17 00:22:19.025822 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 00:22:19.026977 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit.
Jan 17 00:22:19.028886 systemd-logind[1497]: Removed session 21.
Jan 17 00:22:19.152634 systemd[1]: Started sshd@21-46.62.250.181:22-20.161.92.111:42934.service - OpenSSH per-connection server daemon (20.161.92.111:42934).
Jan 17 00:22:19.920841 sshd[4166]: Accepted publickey for core from 20.161.92.111 port 42934 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:22:19.923721 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:22:19.932152 systemd-logind[1497]: New session 22 of user core.
Jan 17 00:22:19.942484 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 00:22:21.655296 containerd[1520]: time="2026-01-17T00:22:21.654885712Z" level=info msg="StopContainer for \"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\" with timeout 30 (s)"
Jan 17 00:22:21.659842 containerd[1520]: time="2026-01-17T00:22:21.658862191Z" level=info msg="Stop container \"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\" with signal terminated"
Jan 17 00:22:21.697131 systemd[1]: cri-containerd-a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86.scope: Deactivated successfully.
Jan 17 00:22:21.714600 containerd[1520]: time="2026-01-17T00:22:21.714548757Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 00:22:21.729610 containerd[1520]: time="2026-01-17T00:22:21.729539426Z" level=info msg="StopContainer for \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\" with timeout 2 (s)"
Jan 17 00:22:21.730306 containerd[1520]: time="2026-01-17T00:22:21.730172013Z" level=info msg="Stop container \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\" with signal terminated"
Jan 17 00:22:21.744063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86-rootfs.mount: Deactivated successfully.
Jan 17 00:22:21.750655 systemd-networkd[1395]: lxc_health: Link DOWN
Jan 17 00:22:21.750669 systemd-networkd[1395]: lxc_health: Lost carrier
Jan 17 00:22:21.781399 containerd[1520]: time="2026-01-17T00:22:21.781098500Z" level=info msg="shim disconnected" id=a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86 namespace=k8s.io
Jan 17 00:22:21.781399 containerd[1520]: time="2026-01-17T00:22:21.781293552Z" level=warning msg="cleaning up after shim disconnected" id=a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86 namespace=k8s.io
Jan 17 00:22:21.781399 containerd[1520]: time="2026-01-17T00:22:21.781310262Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:22:21.785860 systemd[1]: cri-containerd-9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8.scope: Deactivated successfully.
Jan 17 00:22:21.786336 systemd[1]: cri-containerd-9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8.scope: Consumed 5.985s CPU time.
Jan 17 00:22:21.825631 containerd[1520]: time="2026-01-17T00:22:21.825297691Z" level=info msg="StopContainer for \"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\" returns successfully"
Jan 17 00:22:21.827166 containerd[1520]: time="2026-01-17T00:22:21.827080608Z" level=info msg="StopPodSandbox for \"59416fa972822b30beca5c5f4c059ffd57d0186a526dbe06d683876ad2036bc7\""
Jan 17 00:22:21.827166 containerd[1520]: time="2026-01-17T00:22:21.827135499Z" level=info msg="Container to stop \"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:22:21.832992 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59416fa972822b30beca5c5f4c059ffd57d0186a526dbe06d683876ad2036bc7-shm.mount: Deactivated successfully.
Jan 17 00:22:21.840320 systemd[1]: cri-containerd-59416fa972822b30beca5c5f4c059ffd57d0186a526dbe06d683876ad2036bc7.scope: Deactivated successfully.
Jan 17 00:22:21.856383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8-rootfs.mount: Deactivated successfully.
Jan 17 00:22:21.866084 containerd[1520]: time="2026-01-17T00:22:21.865849346Z" level=info msg="shim disconnected" id=9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8 namespace=k8s.io
Jan 17 00:22:21.866084 containerd[1520]: time="2026-01-17T00:22:21.865900436Z" level=warning msg="cleaning up after shim disconnected" id=9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8 namespace=k8s.io
Jan 17 00:22:21.866084 containerd[1520]: time="2026-01-17T00:22:21.865914076Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:22:21.885193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59416fa972822b30beca5c5f4c059ffd57d0186a526dbe06d683876ad2036bc7-rootfs.mount: Deactivated successfully.
Jan 17 00:22:21.889493 containerd[1520]: time="2026-01-17T00:22:21.889188568Z" level=info msg="shim disconnected" id=59416fa972822b30beca5c5f4c059ffd57d0186a526dbe06d683876ad2036bc7 namespace=k8s.io
Jan 17 00:22:21.889493 containerd[1520]: time="2026-01-17T00:22:21.889312049Z" level=warning msg="cleaning up after shim disconnected" id=59416fa972822b30beca5c5f4c059ffd57d0186a526dbe06d683876ad2036bc7 namespace=k8s.io
Jan 17 00:22:21.889493 containerd[1520]: time="2026-01-17T00:22:21.889325919Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:22:21.903049 containerd[1520]: time="2026-01-17T00:22:21.902996636Z" level=info msg="StopContainer for \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\" returns successfully"
Jan 17 00:22:21.903626 containerd[1520]: time="2026-01-17T00:22:21.903584051Z" level=info msg="StopPodSandbox for \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\""
Jan 17 00:22:21.903733 containerd[1520]: time="2026-01-17T00:22:21.903638002Z" level=info msg="Container to stop \"2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:22:21.903733 containerd[1520]: time="2026-01-17T00:22:21.903655582Z" level=info msg="Container to stop \"cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:22:21.903733 containerd[1520]: time="2026-01-17T00:22:21.903672902Z" level=info msg="Container to stop \"910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:22:21.903733 containerd[1520]: time="2026-01-17T00:22:21.903694162Z" level=info msg="Container to stop \"3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:22:21.903733 containerd[1520]: time="2026-01-17T00:22:21.903716803Z" level=info msg="Container to stop \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:22:21.913720 systemd[1]: cri-containerd-4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1.scope: Deactivated successfully.
Jan 17 00:22:21.920628 containerd[1520]: time="2026-01-17T00:22:21.920387279Z" level=info msg="TearDown network for sandbox \"59416fa972822b30beca5c5f4c059ffd57d0186a526dbe06d683876ad2036bc7\" successfully"
Jan 17 00:22:21.920628 containerd[1520]: time="2026-01-17T00:22:21.920421729Z" level=info msg="StopPodSandbox for \"59416fa972822b30beca5c5f4c059ffd57d0186a526dbe06d683876ad2036bc7\" returns successfully"
Jan 17 00:22:21.959175 containerd[1520]: time="2026-01-17T00:22:21.958877883Z" level=info msg="shim disconnected" id=4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1 namespace=k8s.io
Jan 17 00:22:21.959175 containerd[1520]: time="2026-01-17T00:22:21.958948323Z" level=warning msg="cleaning up after shim disconnected" id=4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1 namespace=k8s.io
Jan 17 00:22:21.959175 containerd[1520]: time="2026-01-17T00:22:21.958960883Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:22:21.983357 containerd[1520]: time="2026-01-17T00:22:21.983298867Z" level=info msg="TearDown network for sandbox \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\" successfully"
Jan 17 00:22:21.983477 containerd[1520]: time="2026-01-17T00:22:21.983366308Z" level=info msg="StopPodSandbox for \"4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1\" returns successfully"
Jan 17 00:22:22.041300 kubelet[2570]: I0117 00:22:22.040779 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjg4q\" (UniqueName: \"kubernetes.io/projected/556258f1-69a5-46e0-83a7-afb9985e4a03-kube-api-access-cjg4q\") pod \"556258f1-69a5-46e0-83a7-afb9985e4a03\" (UID: \"556258f1-69a5-46e0-83a7-afb9985e4a03\") "
Jan 17 00:22:22.041300 kubelet[2570]: I0117 00:22:22.040842 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/556258f1-69a5-46e0-83a7-afb9985e4a03-cilium-config-path\") pod \"556258f1-69a5-46e0-83a7-afb9985e4a03\" (UID: \"556258f1-69a5-46e0-83a7-afb9985e4a03\") "
Jan 17 00:22:22.050642 kubelet[2570]: I0117 00:22:22.049529 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/556258f1-69a5-46e0-83a7-afb9985e4a03-kube-api-access-cjg4q" (OuterVolumeSpecName: "kube-api-access-cjg4q") pod "556258f1-69a5-46e0-83a7-afb9985e4a03" (UID: "556258f1-69a5-46e0-83a7-afb9985e4a03"). InnerVolumeSpecName "kube-api-access-cjg4q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 17 00:22:22.051165 kubelet[2570]: I0117 00:22:22.051047 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/556258f1-69a5-46e0-83a7-afb9985e4a03-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "556258f1-69a5-46e0-83a7-afb9985e4a03" (UID: "556258f1-69a5-46e0-83a7-afb9985e4a03"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 17 00:22:22.142565 kubelet[2570]: I0117 00:22:22.141691 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-cilium-run\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.142565 kubelet[2570]: I0117 00:22:22.141756 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4f1c98e-013a-4c46-b67f-e4940d22534d-cilium-config-path\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.142565 kubelet[2570]: I0117 00:22:22.141782 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-xtables-lock\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.142565 kubelet[2570]: I0117 00:22:22.141807 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-host-proc-sys-net\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.142565 kubelet[2570]: I0117 00:22:22.141833 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4f1c98e-013a-4c46-b67f-e4940d22534d-hubble-tls\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.142565 kubelet[2570]: I0117 00:22:22.141854 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-bpf-maps\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.142937 kubelet[2570]: I0117 00:22:22.141886 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-cni-path\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.142937 kubelet[2570]: I0117 00:22:22.141910 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-etc-cni-netd\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.147187 kubelet[2570]: I0117 00:22:22.141833 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:22:22.148259 kubelet[2570]: I0117 00:22:22.141880 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:22:22.148259 kubelet[2570]: I0117 00:22:22.141963 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:22:22.148259 kubelet[2570]: I0117 00:22:22.141985 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:22:22.148259 kubelet[2570]: I0117 00:22:22.147145 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:22:22.148259 kubelet[2570]: I0117 00:22:22.147330 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-cni-path" (OuterVolumeSpecName: "cni-path") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:22:22.148553 kubelet[2570]: I0117 00:22:22.147418 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:22:22.148553 kubelet[2570]: I0117 00:22:22.147910 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f1c98e-013a-4c46-b67f-e4940d22534d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 17 00:22:22.148553 kubelet[2570]: I0117 00:22:22.147396 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-lib-modules\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.148553 kubelet[2570]: I0117 00:22:22.148093 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hcnp\" (UniqueName: \"kubernetes.io/projected/b4f1c98e-013a-4c46-b67f-e4940d22534d-kube-api-access-7hcnp\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.148553 kubelet[2570]: I0117 00:22:22.148121 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-cilium-cgroup\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.148553 kubelet[2570]: I0117 00:22:22.148146 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-hostproc\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.149674 kubelet[2570]: I0117 00:22:22.149617 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4f1c98e-013a-4c46-b67f-e4940d22534d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 17 00:22:22.149674 kubelet[2570]: I0117 00:22:22.149682 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:22:22.151236 kubelet[2570]: I0117 00:22:22.150182 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4f1c98e-013a-4c46-b67f-e4940d22534d-clustermesh-secrets\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.151236 kubelet[2570]: I0117 00:22:22.150303 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-host-proc-sys-kernel\") pod \"b4f1c98e-013a-4c46-b67f-e4940d22534d\" (UID: \"b4f1c98e-013a-4c46-b67f-e4940d22534d\") "
Jan 17 00:22:22.151236 kubelet[2570]: I0117 00:22:22.150370 2570 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-cni-path\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\""
Jan 17 00:22:22.151236 kubelet[2570]: I0117 00:22:22.150387 2570 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-etc-cni-netd\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\""
Jan 17 00:22:22.151236 kubelet[2570]: I0117 00:22:22.150405 2570 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-lib-modules\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\""
Jan 17 00:22:22.151236 kubelet[2570]: I0117 00:22:22.150420 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-cilium-cgroup\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\""
Jan 17 00:22:22.151236 kubelet[2570]: I0117 00:22:22.150486 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-cilium-run\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\""
Jan 17 00:22:22.151583 kubelet[2570]: I0117 00:22:22.150501 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cjg4q\" (UniqueName: \"kubernetes.io/projected/556258f1-69a5-46e0-83a7-afb9985e4a03-kube-api-access-cjg4q\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\""
Jan 17 00:22:22.151583 kubelet[2570]: I0117 00:22:22.150515 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4f1c98e-013a-4c46-b67f-e4940d22534d-cilium-config-path\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\""
Jan 17 00:22:22.151583 kubelet[2570]: I0117 00:22:22.150532 2570 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-xtables-lock\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\""
Jan 17 00:22:22.151583 kubelet[2570]: I0117 00:22:22.150590 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/556258f1-69a5-46e0-83a7-afb9985e4a03-cilium-config-path\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\""
Jan 17 00:22:22.151583 kubelet[2570]: I0117 00:22:22.150606 2570 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-host-proc-sys-net\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\""
Jan 17 00:22:22.151583 kubelet[2570]: I0117 00:22:22.150621 2570 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4f1c98e-013a-4c46-b67f-e4940d22534d-hubble-tls\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\""
Jan 17 00:22:22.151583 kubelet[2570]: I0117 00:22:22.150674 2570 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-bpf-maps\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\""
Jan 17 00:22:22.151847 kubelet[2570]: I0117 00:22:22.150712 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:22:22.151847 kubelet[2570]: I0117 00:22:22.150780 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-hostproc" (OuterVolumeSpecName: "hostproc") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:22:22.156540 kubelet[2570]: I0117 00:22:22.156455 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4f1c98e-013a-4c46-b67f-e4940d22534d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 17 00:22:22.157180 kubelet[2570]: I0117 00:22:22.157131 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f1c98e-013a-4c46-b67f-e4940d22534d-kube-api-access-7hcnp" (OuterVolumeSpecName: "kube-api-access-7hcnp") pod "b4f1c98e-013a-4c46-b67f-e4940d22534d" (UID: "b4f1c98e-013a-4c46-b67f-e4940d22534d"). InnerVolumeSpecName "kube-api-access-7hcnp".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:22:22.251924 kubelet[2570]: I0117 00:22:22.251808 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7hcnp\" (UniqueName: \"kubernetes.io/projected/b4f1c98e-013a-4c46-b67f-e4940d22534d-kube-api-access-7hcnp\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\"" Jan 17 00:22:22.251924 kubelet[2570]: I0117 00:22:22.251854 2570 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-hostproc\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\"" Jan 17 00:22:22.251924 kubelet[2570]: I0117 00:22:22.251873 2570 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4f1c98e-013a-4c46-b67f-e4940d22534d-clustermesh-secrets\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\"" Jan 17 00:22:22.251924 kubelet[2570]: I0117 00:22:22.251889 2570 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4f1c98e-013a-4c46-b67f-e4940d22534d-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-9d03cc5a8b\" DevicePath \"\"" Jan 17 00:22:22.425062 systemd[1]: Removed slice kubepods-burstable-podb4f1c98e_013a_4c46_b67f_e4940d22534d.slice - libcontainer container kubepods-burstable-podb4f1c98e_013a_4c46_b67f_e4940d22534d.slice. Jan 17 00:22:22.425252 systemd[1]: kubepods-burstable-podb4f1c98e_013a_4c46_b67f_e4940d22534d.slice: Consumed 6.113s CPU time. Jan 17 00:22:22.429038 systemd[1]: Removed slice kubepods-besteffort-pod556258f1_69a5_46e0_83a7_afb9985e4a03.slice - libcontainer container kubepods-besteffort-pod556258f1_69a5_46e0_83a7_afb9985e4a03.slice. Jan 17 00:22:22.686236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1-rootfs.mount: Deactivated successfully. 
Jan 17 00:22:22.686441 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e011ac23ebf766669808a6637896ac7e41c117bbd44489d17e4f6824dde72d1-shm.mount: Deactivated successfully. Jan 17 00:22:22.686584 systemd[1]: var-lib-kubelet-pods-556258f1\x2d69a5\x2d46e0\x2d83a7\x2dafb9985e4a03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcjg4q.mount: Deactivated successfully. Jan 17 00:22:22.686723 systemd[1]: var-lib-kubelet-pods-b4f1c98e\x2d013a\x2d4c46\x2db67f\x2de4940d22534d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7hcnp.mount: Deactivated successfully. Jan 17 00:22:22.686882 systemd[1]: var-lib-kubelet-pods-b4f1c98e\x2d013a\x2d4c46\x2db67f\x2de4940d22534d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:22:22.687023 systemd[1]: var-lib-kubelet-pods-b4f1c98e\x2d013a\x2d4c46\x2db67f\x2de4940d22534d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 00:22:22.814901 kubelet[2570]: I0117 00:22:22.814777 2570 scope.go:117] "RemoveContainer" containerID="a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86" Jan 17 00:22:22.823392 containerd[1520]: time="2026-01-17T00:22:22.823314433Z" level=info msg="RemoveContainer for \"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\"" Jan 17 00:22:22.834170 containerd[1520]: time="2026-01-17T00:22:22.833898618Z" level=info msg="RemoveContainer for \"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\" returns successfully" Jan 17 00:22:22.834965 kubelet[2570]: I0117 00:22:22.834917 2570 scope.go:117] "RemoveContainer" containerID="a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86" Jan 17 00:22:22.835369 containerd[1520]: time="2026-01-17T00:22:22.835260861Z" level=error msg="ContainerStatus for \"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\": not found" Jan 17 00:22:22.837123 kubelet[2570]: E0117 00:22:22.836828 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\": not found" containerID="a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86" Jan 17 00:22:22.837123 kubelet[2570]: I0117 00:22:22.836880 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86"} err="failed to get container status \"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\": rpc error: code = NotFound desc = an error occurred when try to find container \"a87ddeaf360ca7c2ae2b5450a0b227b7c6e3a024326fc8d56c4030359f7a3f86\": not found" Jan 17 00:22:22.837123 kubelet[2570]: I0117 00:22:22.836926 2570 scope.go:117] "RemoveContainer" containerID="9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8" Jan 17 00:22:22.841691 containerd[1520]: time="2026-01-17T00:22:22.841604324Z" level=info msg="RemoveContainer for \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\"" Jan 17 00:22:22.848575 containerd[1520]: time="2026-01-17T00:22:22.848487911Z" level=info msg="RemoveContainer for \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\" returns successfully" Jan 17 00:22:22.848860 kubelet[2570]: I0117 00:22:22.848809 2570 scope.go:117] "RemoveContainer" containerID="910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04" Jan 17 00:22:22.851491 containerd[1520]: time="2026-01-17T00:22:22.850962756Z" level=info msg="RemoveContainer for \"910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04\"" Jan 17 00:22:22.860088 containerd[1520]: time="2026-01-17T00:22:22.859988325Z" level=info msg="RemoveContainer for 
\"910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04\" returns successfully" Jan 17 00:22:22.860441 kubelet[2570]: I0117 00:22:22.860410 2570 scope.go:117] "RemoveContainer" containerID="cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e" Jan 17 00:22:22.862594 containerd[1520]: time="2026-01-17T00:22:22.862121327Z" level=info msg="RemoveContainer for \"cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e\"" Jan 17 00:22:22.866949 containerd[1520]: time="2026-01-17T00:22:22.866907424Z" level=info msg="RemoveContainer for \"cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e\" returns successfully" Jan 17 00:22:22.867668 kubelet[2570]: I0117 00:22:22.867484 2570 scope.go:117] "RemoveContainer" containerID="3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194" Jan 17 00:22:22.869631 containerd[1520]: time="2026-01-17T00:22:22.869302827Z" level=info msg="RemoveContainer for \"3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194\"" Jan 17 00:22:22.873962 containerd[1520]: time="2026-01-17T00:22:22.873870882Z" level=info msg="RemoveContainer for \"3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194\" returns successfully" Jan 17 00:22:22.874401 kubelet[2570]: I0117 00:22:22.874266 2570 scope.go:117] "RemoveContainer" containerID="2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17" Jan 17 00:22:22.876832 containerd[1520]: time="2026-01-17T00:22:22.876526959Z" level=info msg="RemoveContainer for \"2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17\"" Jan 17 00:22:22.881044 containerd[1520]: time="2026-01-17T00:22:22.880958532Z" level=info msg="RemoveContainer for \"2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17\" returns successfully" Jan 17 00:22:22.881520 kubelet[2570]: I0117 00:22:22.881253 2570 scope.go:117] "RemoveContainer" containerID="9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8" Jan 17 00:22:22.882016 
containerd[1520]: time="2026-01-17T00:22:22.881946102Z" level=error msg="ContainerStatus for \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\": not found" Jan 17 00:22:22.882431 kubelet[2570]: E0117 00:22:22.882207 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\": not found" containerID="9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8" Jan 17 00:22:22.882431 kubelet[2570]: I0117 00:22:22.882306 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8"} err="failed to get container status \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\": rpc error: code = NotFound desc = an error occurred when try to find container \"9afa0ca61d44c5f2bda537bd898d80c0b21d7d2a900b8a627a958d78fdddbfa8\": not found" Jan 17 00:22:22.882431 kubelet[2570]: I0117 00:22:22.882334 2570 scope.go:117] "RemoveContainer" containerID="910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04" Jan 17 00:22:22.882873 containerd[1520]: time="2026-01-17T00:22:22.882788560Z" level=error msg="ContainerStatus for \"910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04\": not found" Jan 17 00:22:22.883055 kubelet[2570]: E0117 00:22:22.882985 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04\": not found" containerID="910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04" Jan 17 00:22:22.883055 kubelet[2570]: I0117 00:22:22.883015 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04"} err="failed to get container status \"910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04\": rpc error: code = NotFound desc = an error occurred when try to find container \"910995765a512104353c07f8196bf18833717e30336e4dd1f08dfd5496827b04\": not found" Jan 17 00:22:22.883055 kubelet[2570]: I0117 00:22:22.883036 2570 scope.go:117] "RemoveContainer" containerID="cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e" Jan 17 00:22:22.883496 containerd[1520]: time="2026-01-17T00:22:22.883431876Z" level=error msg="ContainerStatus for \"cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e\": not found" Jan 17 00:22:22.883794 kubelet[2570]: E0117 00:22:22.883660 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e\": not found" containerID="cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e" Jan 17 00:22:22.883794 kubelet[2570]: I0117 00:22:22.883695 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e"} err="failed to get container status \"cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"cc6afdd3117610a9d2bdd1f529145d2b0c5d8115008b86bfe58f78b641f2342e\": not found" Jan 17 00:22:22.883794 kubelet[2570]: I0117 00:22:22.883716 2570 scope.go:117] "RemoveContainer" containerID="3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194" Jan 17 00:22:22.884421 containerd[1520]: time="2026-01-17T00:22:22.884311035Z" level=error msg="ContainerStatus for \"3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194\": not found" Jan 17 00:22:22.884815 kubelet[2570]: E0117 00:22:22.884641 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194\": not found" containerID="3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194" Jan 17 00:22:22.884815 kubelet[2570]: I0117 00:22:22.884708 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194"} err="failed to get container status \"3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fd64095aca7ea8056fcc03f6f2ac42502c10855c7c25213331e52dd1051d194\": not found" Jan 17 00:22:22.884815 kubelet[2570]: I0117 00:22:22.884730 2570 scope.go:117] "RemoveContainer" containerID="2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17" Jan 17 00:22:22.886326 containerd[1520]: time="2026-01-17T00:22:22.885160514Z" level=error msg="ContainerStatus for \"2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17\": not found" Jan 17 00:22:22.886451 kubelet[2570]: E0117 00:22:22.885458 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17\": not found" containerID="2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17" Jan 17 00:22:22.886451 kubelet[2570]: I0117 00:22:22.885696 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17"} err="failed to get container status \"2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17\": rpc error: code = NotFound desc = an error occurred when try to find container \"2318ab93ec5ca2d809058d9d692a41ef9ca0879fb00ce72a227fd2f86b869f17\": not found" Jan 17 00:22:23.718362 sshd[4166]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:23.725899 systemd[1]: sshd@21-46.62.250.181:22-20.161.92.111:42934.service: Deactivated successfully. Jan 17 00:22:23.730514 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:22:23.731985 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:22:23.733796 systemd-logind[1497]: Removed session 22. Jan 17 00:22:23.870507 systemd[1]: Started sshd@22-46.62.250.181:22-20.161.92.111:57980.service - OpenSSH per-connection server daemon (20.161.92.111:57980). 
Jan 17 00:22:24.415032 kubelet[2570]: I0117 00:22:24.414960 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="556258f1-69a5-46e0-83a7-afb9985e4a03" path="/var/lib/kubelet/pods/556258f1-69a5-46e0-83a7-afb9985e4a03/volumes" Jan 17 00:22:24.416090 kubelet[2570]: I0117 00:22:24.416042 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4f1c98e-013a-4c46-b67f-e4940d22534d" path="/var/lib/kubelet/pods/b4f1c98e-013a-4c46-b67f-e4940d22534d/volumes" Jan 17 00:22:24.631411 sshd[4325]: Accepted publickey for core from 20.161.92.111 port 57980 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:22:24.634161 sshd[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:24.642419 systemd-logind[1497]: New session 23 of user core. Jan 17 00:22:24.649492 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:22:25.572194 systemd[1]: Created slice kubepods-burstable-pod4d041f84_db3a_4d06_a136_ddac28029fbe.slice - libcontainer container kubepods-burstable-pod4d041f84_db3a_4d06_a136_ddac28029fbe.slice. Jan 17 00:22:25.665800 sshd[4325]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:25.672584 systemd[1]: sshd@22-46.62.250.181:22-20.161.92.111:57980.service: Deactivated successfully. 
Jan 17 00:22:25.675694 kubelet[2570]: I0117 00:22:25.675641 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d041f84-db3a-4d06-a136-ddac28029fbe-bpf-maps\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676433 kubelet[2570]: I0117 00:22:25.675696 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d041f84-db3a-4d06-a136-ddac28029fbe-hostproc\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676433 kubelet[2570]: I0117 00:22:25.675726 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d041f84-db3a-4d06-a136-ddac28029fbe-cilium-config-path\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676433 kubelet[2570]: I0117 00:22:25.675751 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d041f84-db3a-4d06-a136-ddac28029fbe-cilium-cgroup\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676433 kubelet[2570]: I0117 00:22:25.675772 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d041f84-db3a-4d06-a136-ddac28029fbe-xtables-lock\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676433 kubelet[2570]: I0117 00:22:25.675794 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4d041f84-db3a-4d06-a136-ddac28029fbe-cilium-ipsec-secrets\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676433 kubelet[2570]: I0117 00:22:25.675816 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d041f84-db3a-4d06-a136-ddac28029fbe-hubble-tls\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676675 kubelet[2570]: I0117 00:22:25.675837 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d041f84-db3a-4d06-a136-ddac28029fbe-cni-path\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676675 kubelet[2570]: I0117 00:22:25.675870 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d041f84-db3a-4d06-a136-ddac28029fbe-host-proc-sys-kernel\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676675 kubelet[2570]: I0117 00:22:25.675912 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d041f84-db3a-4d06-a136-ddac28029fbe-cilium-run\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676675 kubelet[2570]: I0117 00:22:25.675946 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d041f84-db3a-4d06-a136-ddac28029fbe-etc-cni-netd\") pod 
\"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676675 kubelet[2570]: I0117 00:22:25.675981 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d041f84-db3a-4d06-a136-ddac28029fbe-clustermesh-secrets\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676675 kubelet[2570]: I0117 00:22:25.676020 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d041f84-db3a-4d06-a136-ddac28029fbe-host-proc-sys-net\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676894 kubelet[2570]: I0117 00:22:25.676047 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27kcd\" (UniqueName: \"kubernetes.io/projected/4d041f84-db3a-4d06-a136-ddac28029fbe-kube-api-access-27kcd\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.676894 kubelet[2570]: I0117 00:22:25.676080 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d041f84-db3a-4d06-a136-ddac28029fbe-lib-modules\") pod \"cilium-ljqw5\" (UID: \"4d041f84-db3a-4d06-a136-ddac28029fbe\") " pod="kube-system/cilium-ljqw5" Jan 17 00:22:25.678149 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:22:25.679827 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:22:25.682300 systemd-logind[1497]: Removed session 23. 
Jan 17 00:22:25.834386 systemd[1]: Started sshd@23-46.62.250.181:22-20.161.92.111:57990.service - OpenSSH per-connection server daemon (20.161.92.111:57990). Jan 17 00:22:25.884136 containerd[1520]: time="2026-01-17T00:22:25.884095333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ljqw5,Uid:4d041f84-db3a-4d06-a136-ddac28029fbe,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:25.920544 containerd[1520]: time="2026-01-17T00:22:25.920382689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:25.920544 containerd[1520]: time="2026-01-17T00:22:25.920461880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:25.920544 containerd[1520]: time="2026-01-17T00:22:25.920498000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:25.920959 containerd[1520]: time="2026-01-17T00:22:25.920662973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:25.965511 systemd[1]: Started cri-containerd-3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e.scope - libcontainer container 3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e. 
Jan 17 00:22:26.010536 containerd[1520]: time="2026-01-17T00:22:26.010472009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ljqw5,Uid:4d041f84-db3a-4d06-a136-ddac28029fbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e\"" Jan 17 00:22:26.019321 containerd[1520]: time="2026-01-17T00:22:26.019263733Z" level=info msg="CreateContainer within sandbox \"3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:22:26.035368 containerd[1520]: time="2026-01-17T00:22:26.035288384Z" level=info msg="CreateContainer within sandbox \"3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"db2d2a935db4dd7e06c639a669c5feff7a5663e39a7fe7b89336f9dc376fa56a\"" Jan 17 00:22:26.037156 containerd[1520]: time="2026-01-17T00:22:26.037115682Z" level=info msg="StartContainer for \"db2d2a935db4dd7e06c639a669c5feff7a5663e39a7fe7b89336f9dc376fa56a\"" Jan 17 00:22:26.080452 systemd[1]: Started cri-containerd-db2d2a935db4dd7e06c639a669c5feff7a5663e39a7fe7b89336f9dc376fa56a.scope - libcontainer container db2d2a935db4dd7e06c639a669c5feff7a5663e39a7fe7b89336f9dc376fa56a. Jan 17 00:22:26.135193 containerd[1520]: time="2026-01-17T00:22:26.135030808Z" level=info msg="StartContainer for \"db2d2a935db4dd7e06c639a669c5feff7a5663e39a7fe7b89336f9dc376fa56a\" returns successfully" Jan 17 00:22:26.153434 systemd[1]: cri-containerd-db2d2a935db4dd7e06c639a669c5feff7a5663e39a7fe7b89336f9dc376fa56a.scope: Deactivated successfully. 
Jan 17 00:22:26.208332 containerd[1520]: time="2026-01-17T00:22:26.208187441Z" level=info msg="shim disconnected" id=db2d2a935db4dd7e06c639a669c5feff7a5663e39a7fe7b89336f9dc376fa56a namespace=k8s.io Jan 17 00:22:26.208332 containerd[1520]: time="2026-01-17T00:22:26.208293001Z" level=warning msg="cleaning up after shim disconnected" id=db2d2a935db4dd7e06c639a669c5feff7a5663e39a7fe7b89336f9dc376fa56a namespace=k8s.io Jan 17 00:22:26.208332 containerd[1520]: time="2026-01-17T00:22:26.208308662Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:22:26.494826 kubelet[2570]: E0117 00:22:26.494762 2570 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:22:26.616404 sshd[4344]: Accepted publickey for core from 20.161.92.111 port 57990 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:22:26.619118 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:26.626311 systemd-logind[1497]: New session 24 of user core. Jan 17 00:22:26.636432 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:22:26.808003 systemd[1]: run-containerd-runc-k8s.io-3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e-runc.VnTmCY.mount: Deactivated successfully. 
Jan 17 00:22:26.849090 containerd[1520]: time="2026-01-17T00:22:26.849039535Z" level=info msg="CreateContainer within sandbox \"3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:22:26.872334 containerd[1520]: time="2026-01-17T00:22:26.872268385Z" level=info msg="CreateContainer within sandbox \"3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"492c8d8b801429f38ef7c0e3fcad3e21d55bf9b9aaf41a91b5118059316072d5\"" Jan 17 00:22:26.877080 containerd[1520]: time="2026-01-17T00:22:26.873508796Z" level=info msg="StartContainer for \"492c8d8b801429f38ef7c0e3fcad3e21d55bf9b9aaf41a91b5118059316072d5\"" Jan 17 00:22:26.932539 systemd[1]: Started cri-containerd-492c8d8b801429f38ef7c0e3fcad3e21d55bf9b9aaf41a91b5118059316072d5.scope - libcontainer container 492c8d8b801429f38ef7c0e3fcad3e21d55bf9b9aaf41a91b5118059316072d5. Jan 17 00:22:26.989741 containerd[1520]: time="2026-01-17T00:22:26.988189312Z" level=info msg="StartContainer for \"492c8d8b801429f38ef7c0e3fcad3e21d55bf9b9aaf41a91b5118059316072d5\" returns successfully" Jan 17 00:22:27.010111 systemd[1]: cri-containerd-492c8d8b801429f38ef7c0e3fcad3e21d55bf9b9aaf41a91b5118059316072d5.scope: Deactivated successfully. 
Jan 17 00:22:27.035396 kubelet[2570]: E0117 00:22:27.035199 2570 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d041f84_db3a_4d06_a136_ddac28029fbe.slice/cri-containerd-492c8d8b801429f38ef7c0e3fcad3e21d55bf9b9aaf41a91b5118059316072d5.scope\": RecentStats: unable to find data in memory cache]"
Jan 17 00:22:27.051934 containerd[1520]: time="2026-01-17T00:22:27.051588947Z" level=info msg="shim disconnected" id=492c8d8b801429f38ef7c0e3fcad3e21d55bf9b9aaf41a91b5118059316072d5 namespace=k8s.io
Jan 17 00:22:27.051934 containerd[1520]: time="2026-01-17T00:22:27.051646788Z" level=warning msg="cleaning up after shim disconnected" id=492c8d8b801429f38ef7c0e3fcad3e21d55bf9b9aaf41a91b5118059316072d5 namespace=k8s.io
Jan 17 00:22:27.051934 containerd[1520]: time="2026-01-17T00:22:27.051659738Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:22:27.154430 sshd[4344]: pam_unix(sshd:session): session closed for user core
Jan 17 00:22:27.160330 systemd[1]: sshd@23-46.62.250.181:22-20.161.92.111:57990.service: Deactivated successfully.
Jan 17 00:22:27.164087 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 00:22:27.167449 systemd-logind[1497]: Session 24 logged out. Waiting for processes to exit.
Jan 17 00:22:27.170126 systemd-logind[1497]: Removed session 24.
Jan 17 00:22:27.297691 systemd[1]: Started sshd@24-46.62.250.181:22-20.161.92.111:57996.service - OpenSSH per-connection server daemon (20.161.92.111:57996).
Jan 17 00:22:27.808606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-492c8d8b801429f38ef7c0e3fcad3e21d55bf9b9aaf41a91b5118059316072d5-rootfs.mount: Deactivated successfully.
Jan 17 00:22:27.857331 containerd[1520]: time="2026-01-17T00:22:27.857159195Z" level=info msg="CreateContainer within sandbox \"3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 00:22:27.884839 containerd[1520]: time="2026-01-17T00:22:27.884786654Z" level=info msg="CreateContainer within sandbox \"3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"85a5523363392452645cc3f7770bf2c3ced512498e668dafac2eeff18c1fa406\""
Jan 17 00:22:27.884991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1367136122.mount: Deactivated successfully.
Jan 17 00:22:27.886360 containerd[1520]: time="2026-01-17T00:22:27.886078356Z" level=info msg="StartContainer for \"85a5523363392452645cc3f7770bf2c3ced512498e668dafac2eeff18c1fa406\""
Jan 17 00:22:27.945528 systemd[1]: Started cri-containerd-85a5523363392452645cc3f7770bf2c3ced512498e668dafac2eeff18c1fa406.scope - libcontainer container 85a5523363392452645cc3f7770bf2c3ced512498e668dafac2eeff18c1fa406.
Jan 17 00:22:28.015285 systemd[1]: cri-containerd-85a5523363392452645cc3f7770bf2c3ced512498e668dafac2eeff18c1fa406.scope: Deactivated successfully.
Jan 17 00:22:28.018184 containerd[1520]: time="2026-01-17T00:22:28.017765709Z" level=info msg="StartContainer for \"85a5523363392452645cc3f7770bf2c3ced512498e668dafac2eeff18c1fa406\" returns successfully"
Jan 17 00:22:28.063944 containerd[1520]: time="2026-01-17T00:22:28.063756755Z" level=info msg="shim disconnected" id=85a5523363392452645cc3f7770bf2c3ced512498e668dafac2eeff18c1fa406 namespace=k8s.io
Jan 17 00:22:28.063944 containerd[1520]: time="2026-01-17T00:22:28.063816846Z" level=warning msg="cleaning up after shim disconnected" id=85a5523363392452645cc3f7770bf2c3ced512498e668dafac2eeff18c1fa406 namespace=k8s.io
Jan 17 00:22:28.063944 containerd[1520]: time="2026-01-17T00:22:28.063829836Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:22:28.077736 sshd[4516]: Accepted publickey for core from 20.161.92.111 port 57996 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:22:28.081521 sshd[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:22:28.092504 systemd-logind[1497]: New session 25 of user core.
Jan 17 00:22:28.097552 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 00:22:28.808182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85a5523363392452645cc3f7770bf2c3ced512498e668dafac2eeff18c1fa406-rootfs.mount: Deactivated successfully.
Jan 17 00:22:28.865409 containerd[1520]: time="2026-01-17T00:22:28.865197393Z" level=info msg="CreateContainer within sandbox \"3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 00:22:28.896152 containerd[1520]: time="2026-01-17T00:22:28.895946748Z" level=info msg="CreateContainer within sandbox \"3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"512aaaada405eb10eacfb78562b8122c5fb739b3b65675fbb5dde4aa18216e7c\""
Jan 17 00:22:28.897755 containerd[1520]: time="2026-01-17T00:22:28.897165940Z" level=info msg="StartContainer for \"512aaaada405eb10eacfb78562b8122c5fb739b3b65675fbb5dde4aa18216e7c\""
Jan 17 00:22:28.958386 systemd[1]: Started cri-containerd-512aaaada405eb10eacfb78562b8122c5fb739b3b65675fbb5dde4aa18216e7c.scope - libcontainer container 512aaaada405eb10eacfb78562b8122c5fb739b3b65675fbb5dde4aa18216e7c.
Jan 17 00:22:28.990368 systemd[1]: cri-containerd-512aaaada405eb10eacfb78562b8122c5fb739b3b65675fbb5dde4aa18216e7c.scope: Deactivated successfully.
Jan 17 00:22:28.993802 containerd[1520]: time="2026-01-17T00:22:28.993744696Z" level=info msg="StartContainer for \"512aaaada405eb10eacfb78562b8122c5fb739b3b65675fbb5dde4aa18216e7c\" returns successfully"
Jan 17 00:22:29.025710 containerd[1520]: time="2026-01-17T00:22:29.025568649Z" level=info msg="shim disconnected" id=512aaaada405eb10eacfb78562b8122c5fb739b3b65675fbb5dde4aa18216e7c namespace=k8s.io
Jan 17 00:22:29.025710 containerd[1520]: time="2026-01-17T00:22:29.025681710Z" level=warning msg="cleaning up after shim disconnected" id=512aaaada405eb10eacfb78562b8122c5fb739b3b65675fbb5dde4aa18216e7c namespace=k8s.io
Jan 17 00:22:29.025710 containerd[1520]: time="2026-01-17T00:22:29.025696510Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:22:29.808486 systemd[1]: run-containerd-runc-k8s.io-512aaaada405eb10eacfb78562b8122c5fb739b3b65675fbb5dde4aa18216e7c-runc.WNAcRi.mount: Deactivated successfully.
Jan 17 00:22:29.808741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-512aaaada405eb10eacfb78562b8122c5fb739b3b65675fbb5dde4aa18216e7c-rootfs.mount: Deactivated successfully.
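[Editor's note] The same three-record "shim disconnected / cleaning up after shim disconnected / cleaning up dead shim" pattern recurs above for each short-lived Cilium init container. When auditing a journal dump like this one, the affected container IDs can be extracted with a short script; a minimal sketch, assuming only the `msg="shim disconnected" id=<64-hex>` field layout seen in these containerd lines (the helper name is illustrative, not part of any tool):

```python
import re

# Matches the id=<64 hex chars> field of containerd's "shim disconnected"
# records, as they appear in the journal lines above.
SHIM_ID = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')

def shim_disconnect_ids(lines):
    """Return unique container IDs that hit 'shim disconnected', in log order."""
    seen, out = set(), []
    for line in lines:
        m = SHIM_ID.search(line)
        if m and m.group(1) not in seen:
            seen.add(m.group(1))
            out.append(m.group(1))
    return out

# Sample records copied from the log above; the level=warning "cleaning up"
# lines deliberately do not match, so each shim is counted once.
sample = [
    'time="2026-01-17T00:22:27.051588947Z" level=info msg="shim disconnected" '
    'id=492c8d8b801429f38ef7c0e3fcad3e21d55bf9b9aaf41a91b5118059316072d5 namespace=k8s.io',
    'time="2026-01-17T00:22:27.051646788Z" level=warning msg="cleaning up after shim disconnected" '
    'id=492c8d8b801429f38ef7c0e3fcad3e21d55bf9b9aaf41a91b5118059316072d5 namespace=k8s.io',
]
print(shim_disconnect_ids(sample))
```

Fed the full dump (e.g. `journalctl -u containerd --no-pager`), this yields one ID per init container, which can then be matched against the `CreateContainer ... returns container id` records.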
Jan 17 00:22:29.864700 containerd[1520]: time="2026-01-17T00:22:29.864536994Z" level=info msg="CreateContainer within sandbox \"3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 00:22:29.889262 containerd[1520]: time="2026-01-17T00:22:29.888799986Z" level=info msg="CreateContainer within sandbox \"3ce8e2449b4cbbae219437bd49403a2a4e0831878e2322aeca12c994b3a1363e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"88374a5b95e596e20959eacc6a826080071e7f38bb7db85816f7f6b7a450dce5\""
Jan 17 00:22:29.892520 containerd[1520]: time="2026-01-17T00:22:29.892364739Z" level=info msg="StartContainer for \"88374a5b95e596e20959eacc6a826080071e7f38bb7db85816f7f6b7a450dce5\""
Jan 17 00:22:29.930139 kubelet[2570]: I0117 00:22:29.930057 2570 setters.go:618] "Node became not ready" node="ci-4081-3-6-n-9d03cc5a8b" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-17T00:22:29Z","lastTransitionTime":"2026-01-17T00:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 00:22:29.962451 systemd[1]: Started cri-containerd-88374a5b95e596e20959eacc6a826080071e7f38bb7db85816f7f6b7a450dce5.scope - libcontainer container 88374a5b95e596e20959eacc6a826080071e7f38bb7db85816f7f6b7a450dce5.
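[Editor's note] The "Node became not ready" record above embeds the full NodeCondition as JSON. Whether a transition like this was caused by the uninitialized CNI plugin can be checked mechanically from the condition payload; a sketch assuming only the field shape (`type`/`status`/`reason`/`message`) visible in the kubelet line above, with a hypothetical helper name:

```python
import json

def cni_not_ready(condition_json):
    """True if a kubelet Ready condition reports NotReady due to the CNI plugin.

    Field names follow the condition payload logged by kubelet above;
    this is a log-triage helper, not a Kubernetes API call.
    """
    cond = json.loads(condition_json)
    return (
        cond.get("type") == "Ready"
        and cond.get("status") == "False"
        and cond.get("reason") == "KubeletNotReady"
        and "cni plugin not initialized" in cond.get("message", "")
    )

# Payload copied from the kubelet record above.
payload = (
    '{"type":"Ready","status":"False",'
    '"lastHeartbeatTime":"2026-01-17T00:22:29Z",'
    '"lastTransitionTime":"2026-01-17T00:22:29Z",'
    '"reason":"KubeletNotReady",'
    '"message":"container runtime network not ready: NetworkReady=false '
    'reason:NetworkPluginNotReady message:Network plugin returns error: '
    'cni plugin not initialized"}'
)
print(cni_not_ready(payload))
```

For this node the check returns true until the cilium-agent container brings the CNI up, which matches the `lxc_health: Gained carrier` records that follow.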
Jan 17 00:22:30.016486 containerd[1520]: time="2026-01-17T00:22:30.016347888Z" level=info msg="StartContainer for \"88374a5b95e596e20959eacc6a826080071e7f38bb7db85816f7f6b7a450dce5\" returns successfully"
Jan 17 00:22:30.411990 kubelet[2570]: E0117 00:22:30.411543 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-94t7l" podUID="d71a858e-7706-40fb-8fec-5d61fc44e6c1"
Jan 17 00:22:30.428264 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 00:22:32.814857 systemd[1]: run-containerd-runc-k8s.io-88374a5b95e596e20959eacc6a826080071e7f38bb7db85816f7f6b7a450dce5-runc.jhc0dd.mount: Deactivated successfully.
Jan 17 00:22:33.677482 systemd-networkd[1395]: lxc_health: Link UP
Jan 17 00:22:33.681734 systemd-networkd[1395]: lxc_health: Gained carrier
Jan 17 00:22:33.899195 kubelet[2570]: I0117 00:22:33.899145 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ljqw5" podStartSLOduration=8.899132856 podStartE2EDuration="8.899132856s" podCreationTimestamp="2026-01-17 00:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:30.885552377 +0000 UTC m=+164.591922595" watchObservedRunningTime="2026-01-17 00:22:33.899132856 +0000 UTC m=+167.605503034"
Jan 17 00:22:35.024003 systemd[1]: run-containerd-runc-k8s.io-88374a5b95e596e20959eacc6a826080071e7f38bb7db85816f7f6b7a450dce5-runc.oxkJry.mount: Deactivated successfully.
Jan 17 00:22:35.602467 systemd-networkd[1395]: lxc_health: Gained IPv6LL
Jan 17 00:22:37.252847 kubelet[2570]: E0117 00:22:37.252192 2570 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57218->127.0.0.1:39911: write tcp 127.0.0.1:57218->127.0.0.1:39911: write: connection reset by peer
Jan 17 00:22:39.372982 systemd[1]: run-containerd-runc-k8s.io-88374a5b95e596e20959eacc6a826080071e7f38bb7db85816f7f6b7a450dce5-runc.QU3FGj.mount: Deactivated successfully.
Jan 17 00:22:39.575187 sshd[4516]: pam_unix(sshd:session): session closed for user core
Jan 17 00:22:39.582993 systemd[1]: sshd@24-46.62.250.181:22-20.161.92.111:57996.service: Deactivated successfully.
Jan 17 00:22:39.587876 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 00:22:39.589355 systemd-logind[1497]: Session 25 logged out. Waiting for processes to exit.
Jan 17 00:22:39.591266 systemd-logind[1497]: Removed session 25.
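[Editor's note] The sshd records above bracket two interactive sessions (24 and 25) with matching pam_unix "session opened"/"session closed" pairs keyed by the sshd PID. Session durations can be recovered from a dump like this by pairing those records; a sketch, assuming the `Mon DD HH:MM:SS.ffffff` timestamp prefix and pam_unix message forms seen here (the year is not in the prefix, so it is passed in explicitly):

```python
import re
from datetime import datetime

TS = r'(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+)'
OPEN = re.compile(TS + r' sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened')
CLOSE = re.compile(TS + r' sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed')

def session_durations(lines, year=2026):
    """Pair pam_unix open/close records by sshd PID; return seconds per PID."""
    opened, durations = {}, {}
    for line in lines:
        if (m := OPEN.search(line)):
            opened[m.group(2)] = datetime.strptime(
                f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
        elif (m := CLOSE.search(line)) and m.group(2) in opened:
            end = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
            durations[m.group(2)] = (end - opened.pop(m.group(2))).total_seconds()
    return durations

# Session 25's open/close records, copied from the log above.
sample = [
    "Jan 17 00:22:28.081521 sshd[4516]: pam_unix(sshd:session): "
    "session opened for user core(uid=500) by core(uid=0)",
    "Jan 17 00:22:39.575187 sshd[4516]: pam_unix(sshd:session): "
    "session closed for user core",
]
print(session_durations(sample))
```

On the records above this gives roughly 11.5 s for session 25 (PID 4516), consistent with the systemd-logind "New session 25"/"Removed session 25" records that surround it.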