Jan 23 18:49:18.992406 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 18:49:18.992451 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:49:18.992468 kernel: BIOS-provided physical RAM map:
Jan 23 18:49:18.992480 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 18:49:18.992497 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Jan 23 18:49:18.992508 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Jan 23 18:49:18.992521 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Jan 23 18:49:18.992533 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 23 18:49:18.992544 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 23 18:49:18.992555 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 23 18:49:18.992567 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Jan 23 18:49:18.992579 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Jan 23 18:49:18.992590 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 18:49:18.992607 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 18:49:18.992621 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 18:49:18.992633 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Jan 23 18:49:18.992645 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 18:49:18.992657 kernel: NX (Execute Disable) protection: active
Jan 23 18:49:18.992674 kernel: APIC: Static calls initialized
Jan 23 18:49:18.992686 kernel: e820: update [mem 0x7dfac018-0x7dfb5a57] usable ==> usable
Jan 23 18:49:18.992698 kernel: e820: update [mem 0x7df70018-0x7dfab657] usable ==> usable
Jan 23 18:49:18.992710 kernel: e820: update [mem 0x7df34018-0x7df6f657] usable ==> usable
Jan 23 18:49:18.992722 kernel: extended physical RAM map:
Jan 23 18:49:18.992734 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 18:49:18.992746 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000007df34017] usable
Jan 23 18:49:18.992758 kernel: reserve setup_data: [mem 0x000000007df34018-0x000000007df6f657] usable
Jan 23 18:49:18.992770 kernel: reserve setup_data: [mem 0x000000007df6f658-0x000000007df70017] usable
Jan 23 18:49:18.992782 kernel: reserve setup_data: [mem 0x000000007df70018-0x000000007dfab657] usable
Jan 23 18:49:18.992794 kernel: reserve setup_data: [mem 0x000000007dfab658-0x000000007dfac017] usable
Jan 23 18:49:18.992812 kernel: reserve setup_data: [mem 0x000000007dfac018-0x000000007dfb5a57] usable
Jan 23 18:49:18.992824 kernel: reserve setup_data: [mem 0x000000007dfb5a58-0x000000007ed3efff] usable
Jan 23 18:49:18.992836 kernel: reserve setup_data: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Jan 23 18:49:18.992848 kernel: reserve setup_data: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Jan 23 18:49:18.992906 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 23 18:49:18.992918 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 23 18:49:18.992930 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 23 18:49:18.992942 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Jan 23 18:49:18.992954 kernel: reserve setup_data: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Jan 23 18:49:18.992967 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 18:49:18.992979 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 18:49:18.993003 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 18:49:18.993016 kernel: reserve setup_data: [mem 0x0000000100000000-0x0000000179ffffff] usable
Jan 23 18:49:18.993028 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 18:49:18.993041 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 23 18:49:18.993054 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e845198 RNG=0x7fb73018
Jan 23 18:49:18.993072 kernel: random: crng init done
Jan 23 18:49:18.993085 kernel: efi: Remove mem136: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 23 18:49:18.993097 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 23 18:49:18.993110 kernel: secureboot: Secure boot disabled
Jan 23 18:49:18.993122 kernel: SMBIOS 3.0.0 present.
Jan 23 18:49:18.993135 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 23 18:49:18.993147 kernel: DMI: Memory slots populated: 1/1
Jan 23 18:49:18.993159 kernel: Hypervisor detected: KVM
Jan 23 18:49:18.993172 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Jan 23 18:49:18.993184 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 18:49:18.993198 kernel: kvm-clock: using sched offset of 13468121253 cycles
Jan 23 18:49:18.993217 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 18:49:18.993231 kernel: tsc: Detected 2399.996 MHz processor
Jan 23 18:49:18.994292 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 18:49:18.994314 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 18:49:18.994326 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Jan 23 18:49:18.994337 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 18:49:18.994347 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 18:49:18.994357 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Jan 23 18:49:18.994368 kernel: Using GB pages for direct mapping
Jan 23 18:49:18.994383 kernel: ACPI: Early table checksum verification disabled
Jan 23 18:49:18.994394 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Jan 23 18:49:18.994404 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 23 18:49:18.994414 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:18.994425 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:18.994434 kernel: ACPI: FACS 0x000000007FBDD000 000040
Jan 23 18:49:18.994445 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:18.994455 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:18.994465 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:18.994479 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:18.994490 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 18:49:18.994500 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Jan 23 18:49:18.994510 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Jan 23 18:49:18.994520 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Jan 23 18:49:18.994530 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Jan 23 18:49:18.994540 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Jan 23 18:49:18.994550 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Jan 23 18:49:18.994560 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Jan 23 18:49:18.994574 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Jan 23 18:49:18.994584 kernel: No NUMA configuration found
Jan 23 18:49:18.994595 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Jan 23 18:49:18.994605 kernel: NODE_DATA(0) allocated [mem 0x179ff8dc0-0x179ffffff]
Jan 23 18:49:18.994615 kernel: Zone ranges:
Jan 23 18:49:18.994626 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 18:49:18.994636 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 18:49:18.994646 kernel:   Normal   [mem 0x0000000100000000-0x0000000179ffffff]
Jan 23 18:49:18.994656 kernel:   Device   empty
Jan 23 18:49:18.994670 kernel: Movable zone start for each node
Jan 23 18:49:18.994680 kernel: Early memory node ranges
Jan 23 18:49:18.994690 kernel:   node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 18:49:18.994700 kernel:   node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Jan 23 18:49:18.994711 kernel:   node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Jan 23 18:49:18.994721 kernel:   node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Jan 23 18:49:18.994731 kernel:   node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Jan 23 18:49:18.994741 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Jan 23 18:49:18.994751 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 18:49:18.994761 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 18:49:18.994775 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 23 18:49:18.994785 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 23 18:49:18.994796 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Jan 23 18:49:18.994806 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 23 18:49:18.994816 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 18:49:18.994826 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 18:49:18.994836 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 18:49:18.994846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 18:49:18.994868 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 18:49:18.994882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 18:49:18.994892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 18:49:18.994902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 18:49:18.994912 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 18:49:18.994922 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 18:49:18.994932 kernel: CPU topo: Max. logical packages: 1
Jan 23 18:49:18.994943 kernel: CPU topo: Max. logical dies: 1
Jan 23 18:49:18.994968 kernel: CPU topo: Max. dies per package: 1
Jan 23 18:49:18.994978 kernel: CPU topo: Max. threads per core: 1
Jan 23 18:49:18.994989 kernel: CPU topo: Num. cores per package: 2
Jan 23 18:49:18.994999 kernel: CPU topo: Num. threads per package: 2
Jan 23 18:49:18.995010 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 18:49:18.995024 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 18:49:18.995035 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 23 18:49:18.995045 kernel: Booting paravirtualized kernel on KVM
Jan 23 18:49:18.995056 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 18:49:18.995067 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 18:49:18.995082 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 18:49:18.995093 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 18:49:18.995103 kernel: pcpu-alloc: [0] 0 1
Jan 23 18:49:18.995113 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 23 18:49:18.995125 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:49:18.995136 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 18:49:18.995147 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 18:49:18.995157 kernel: Fallback order for Node 0: 0
Jan 23 18:49:18.995172 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1022792
Jan 23 18:49:18.995183 kernel: Policy zone: Normal
Jan 23 18:49:18.995193 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 18:49:18.995203 kernel: software IO TLB: area num 2.
Jan 23 18:49:18.995214 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 18:49:18.995224 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 18:49:18.995235 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 18:49:18.996299 kernel: Dynamic Preempt: voluntary
Jan 23 18:49:18.996318 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 18:49:18.996331 kernel: rcu: RCU event tracing is enabled.
Jan 23 18:49:18.996348 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 18:49:18.996360 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 18:49:18.996371 kernel: Rude variant of Tasks RCU enabled.
Jan 23 18:49:18.996382 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 18:49:18.996393 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 18:49:18.996403 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 18:49:18.996414 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:49:18.996425 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:49:18.996436 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:49:18.996451 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 18:49:18.996461 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 18:49:18.996472 kernel: Console: colour dummy device 80x25
Jan 23 18:49:18.996483 kernel: printk: legacy console [tty0] enabled
Jan 23 18:49:18.996493 kernel: printk: legacy console [ttyS0] enabled
Jan 23 18:49:18.996504 kernel: ACPI: Core revision 20240827
Jan 23 18:49:18.996514 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 18:49:18.996525 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 18:49:18.996535 kernel: x2apic enabled
Jan 23 18:49:18.996550 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 18:49:18.996560 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 18:49:18.996571 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x229833f6470, max_idle_ns: 440795327230 ns
Jan 23 18:49:18.996582 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399996)
Jan 23 18:49:18.996593 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 18:49:18.996603 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 18:49:18.996614 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 18:49:18.996624 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 18:49:18.996635 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 23 18:49:18.996649 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 18:49:18.996660 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 18:49:18.996671 kernel: active return thunk: srso_alias_return_thunk
Jan 23 18:49:18.996681 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Jan 23 18:49:18.996691 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 18:49:18.996702 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 18:49:18.996713 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 18:49:18.996723 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 18:49:18.996738 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 18:49:18.996748 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 18:49:18.996759 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 18:49:18.996769 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 18:49:18.996780 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 18:49:18.996791 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 18:49:18.996801 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 23 18:49:18.996812 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 23 18:49:18.996822 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 23 18:49:18.996837 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 23 18:49:18.996847 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 23 18:49:18.996868 kernel: Freeing SMP alternatives memory: 32K
Jan 23 18:49:18.996879 kernel: pid_max: default: 32768 minimum: 301
Jan 23 18:49:18.996889 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 18:49:18.996900 kernel: landlock: Up and running.
Jan 23 18:49:18.996910 kernel: SELinux: Initializing.
Jan 23 18:49:18.996921 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:49:18.996931 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:49:18.996946 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Jan 23 18:49:18.996957 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 23 18:49:18.996967 kernel: ... version:               0
Jan 23 18:49:18.996978 kernel: ... bit width:             48
Jan 23 18:49:18.996988 kernel: ... generic registers:     6
Jan 23 18:49:18.996999 kernel: ... value mask:            0000ffffffffffff
Jan 23 18:49:18.997009 kernel: ... max period:            00007fffffffffff
Jan 23 18:49:18.997020 kernel: ... fixed-purpose events:  0
Jan 23 18:49:18.997030 kernel: ... event mask:            000000000000003f
Jan 23 18:49:18.997045 kernel: signal: max sigframe size: 3376
Jan 23 18:49:18.997055 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 18:49:18.997066 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 18:49:18.997077 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 18:49:18.997088 kernel: smp: Bringing up secondary CPUs ...
Jan 23 18:49:18.997098 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 18:49:18.997109 kernel: .... node  #0, CPUs:        #1
Jan 23 18:49:18.997119 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 18:49:18.997130 kernel: smpboot: Total of 2 processors activated (9599.98 BogoMIPS)
Jan 23 18:49:18.997141 kernel: Memory: 3848516K/4091168K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 237016K reserved, 0K cma-reserved)
Jan 23 18:49:18.997156 kernel: devtmpfs: initialized
Jan 23 18:49:18.997166 kernel: x86/mm: Memory block size: 128MB
Jan 23 18:49:18.997177 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Jan 23 18:49:18.997187 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 18:49:18.997198 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 18:49:18.997208 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 18:49:18.997219 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 18:49:18.997230 kernel: audit: initializing netlink subsys (disabled)
Jan 23 18:49:18.997264 kernel: audit: type=2000 audit(1769194155.515:1): state=initialized audit_enabled=0 res=1
Jan 23 18:49:18.997304 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 18:49:18.997324 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 18:49:18.997335 kernel: cpuidle: using governor menu
Jan 23 18:49:18.997345 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 18:49:18.997356 kernel: dca service started, version 1.12.1
Jan 23 18:49:18.997366 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 23 18:49:18.997377 kernel: PCI: Using configuration type 1 for base access
Jan 23 18:49:18.997387 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 18:49:18.997403 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 18:49:18.997414 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 18:49:18.997424 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 18:49:18.997435 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 18:49:18.997445 kernel: ACPI: Added _OSI(Module Device)
Jan 23 18:49:18.997455 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 18:49:18.997466 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 18:49:18.997476 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 18:49:18.997487 kernel: ACPI: Interpreter enabled
Jan 23 18:49:18.997501 kernel: ACPI: PM: (supports S0 S5)
Jan 23 18:49:18.997512 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 18:49:18.997522 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 18:49:18.997533 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 18:49:18.997543 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 18:49:18.997554 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 18:49:18.997835 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 18:49:18.998050 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 18:49:18.998980 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 18:49:18.999000 kernel: PCI host bridge to bus 0000:00
Jan 23 18:49:18.999214 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 18:49:18.999437 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 18:49:18.999612 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 18:49:18.999784 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Jan 23 18:49:18.999971 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 23 18:49:19.000148 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Jan 23 18:49:19.000339 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 18:49:19.000550 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 18:49:19.000757 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 23 18:49:19.000963 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Jan 23 18:49:19.001151 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc060500000-0xc060503fff 64bit pref]
Jan 23 18:49:19.001544 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8138a000-0x8138afff]
Jan 23 18:49:19.001758 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 23 18:49:19.001962 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 18:49:19.002164 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:19.002379 kernel: pci 0000:00:02.0: BAR 0 [mem 0x81389000-0x81389fff]
Jan 23 18:49:19.002566 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 18:49:19.002752 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 23 18:49:19.002986 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 23 18:49:19.003182 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:19.003410 kernel: pci 0000:00:02.1: BAR 0 [mem 0x81388000-0x81388fff]
Jan 23 18:49:19.003597 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 18:49:19.003782 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 23 18:49:19.004015 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:19.004202 kernel: pci 0000:00:02.2: BAR 0 [mem 0x81387000-0x81387fff]
Jan 23 18:49:19.004415 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 18:49:19.004601 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 23 18:49:19.004787 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 23 18:49:19.004993 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:19.005181 kernel: pci 0000:00:02.3: BAR 0 [mem 0x81386000-0x81386fff]
Jan 23 18:49:19.005407 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 18:49:19.005594 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 23 18:49:19.005795 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:19.006000 kernel: pci 0000:00:02.4: BAR 0 [mem 0x81385000-0x81385fff]
Jan 23 18:49:19.006188 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 18:49:19.006397 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 23 18:49:19.008422 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 23 18:49:19.008633 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:19.008821 kernel: pci 0000:00:02.5: BAR 0 [mem 0x81384000-0x81384fff]
Jan 23 18:49:19.009028 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 18:49:19.009215 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 23 18:49:19.011839 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 23 18:49:19.012069 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:19.012961 kernel: pci 0000:00:02.6: BAR 0 [mem 0x81383000-0x81383fff]
Jan 23 18:49:19.015370 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 18:49:19.015578 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 23 18:49:19.015776 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 23 18:49:19.015995 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:19.016185 kernel: pci 0000:00:02.7: BAR 0 [mem 0x81382000-0x81382fff]
Jan 23 18:49:19.016391 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 18:49:19.016636 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 23 18:49:19.016826 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 23 18:49:19.017043 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:19.017229 kernel: pci 0000:00:03.0: BAR 0 [mem 0x81381000-0x81381fff]
Jan 23 18:49:19.018382 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 18:49:19.018486 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 23 18:49:19.018583 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 23 18:49:19.018687 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 18:49:19.018783 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 18:49:19.018896 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 18:49:19.018991 kernel: pci 0000:00:1f.2: BAR 4 [io 0x6040-0x605f]
Jan 23 18:49:19.019086 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x81380000-0x81380fff]
Jan 23 18:49:19.019187 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 18:49:19.020237 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6000-0x603f]
Jan 23 18:49:19.020396 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 18:49:19.020501 kernel: pci 0000:01:00.0: BAR 1 [mem 0x81200000-0x81200fff]
Jan 23 18:49:19.020605 kernel: pci 0000:01:00.0: BAR 4 [mem 0xc060000000-0xc060003fff 64bit pref]
Jan 23 18:49:19.020706 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 18:49:19.020802 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 18:49:19.022138 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 23 18:49:19.022278 kernel: pci 0000:02:00.0: BAR 0 [mem 0x81100000-0x81103fff 64bit]
Jan 23 18:49:19.022380 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 18:49:19.022488 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Jan 23 18:49:19.022594 kernel: pci 0000:03:00.0: BAR 1 [mem 0x81000000-0x81000fff]
Jan 23 18:49:19.022694 kernel: pci 0000:03:00.0: BAR 4 [mem 0xc060100000-0xc060103fff 64bit pref]
Jan 23 18:49:19.022790 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 18:49:19.022906 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 18:49:19.023008 kernel: pci 0000:04:00.0: BAR 4 [mem 0xc060200000-0xc060203fff 64bit pref]
Jan 23 18:49:19.023105 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 18:49:19.023214 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 18:49:19.023352 kernel: pci 0000:05:00.0: BAR 1 [mem 0x80f00000-0x80f00fff]
Jan 23 18:49:19.023453 kernel: pci 0000:05:00.0: BAR 4 [mem 0xc060300000-0xc060303fff 64bit pref]
Jan 23 18:49:19.023548 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 18:49:19.023654 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Jan 23 18:49:19.023755 kernel: pci 0000:06:00.0: BAR 1 [mem 0x80e00000-0x80e00fff]
Jan 23 18:49:19.023879 kernel: pci 0000:06:00.0: BAR 4 [mem 0xc060400000-0xc060403fff 64bit pref]
Jan 23 18:49:19.024024 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 18:49:19.024032 kernel: acpiphp: Slot [0] registered
Jan 23 18:49:19.024146 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 18:49:19.024258 kernel: pci 0000:07:00.0: BAR 1 [mem 0x80c00000-0x80c00fff]
Jan 23 18:49:19.024369 kernel: pci 0000:07:00.0: BAR 4 [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 23 18:49:19.024470 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 18:49:19.024566 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 18:49:19.024576 kernel: acpiphp: Slot [0-2] registered
Jan 23 18:49:19.024671 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 18:49:19.024678 kernel: acpiphp: Slot [0-3] registered
Jan 23 18:49:19.024772 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 18:49:19.024782 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 18:49:19.024801 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 18:49:19.024809 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 18:49:19.024815 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 18:49:19.024822 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 18:49:19.024828 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 18:49:19.024834 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 18:49:19.024839 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 18:49:19.024845 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 18:49:19.024860 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 18:49:19.024866 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 18:49:19.024872 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 18:49:19.024877 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 18:49:19.024885 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 18:49:19.024891 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 18:49:19.024899 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 18:49:19.024904 kernel: iommu: Default domain type: Translated
Jan 23 18:49:19.024910 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 18:49:19.024916 kernel: efivars: Registered efivars operations
Jan 23 18:49:19.024924 kernel: PCI: Using ACPI for IRQ routing
Jan 23 18:49:19.024930 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 18:49:19.024935 kernel: e820: reserve RAM buffer [mem 0x7df34018-0x7fffffff]
Jan 23 18:49:19.024941 kernel: e820: reserve RAM buffer [mem 0x7df70018-0x7fffffff]
Jan 23 18:49:19.024947 kernel: e820: reserve RAM buffer [mem 0x7dfac018-0x7fffffff]
Jan 23 18:49:19.024952 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Jan 23 18:49:19.024958 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Jan 23 18:49:19.024963 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Jan 23 18:49:19.024969 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Jan 23 18:49:19.025069 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 18:49:19.025165 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 18:49:19.025287 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 18:49:19.025295 kernel: vgaarb: loaded
Jan 23 18:49:19.025302 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 18:49:19.025308 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 18:49:19.025313 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 18:49:19.025319 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 18:49:19.025325 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 18:49:19.025334 kernel: pnp: PnP ACPI init
Jan 23 18:49:19.025439 kernel: 
system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved Jan 23 18:49:19.025447 kernel: pnp: PnP ACPI: found 5 devices Jan 23 18:49:19.025453 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 18:49:19.025462 kernel: NET: Registered PF_INET protocol family Jan 23 18:49:19.025467 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 18:49:19.025473 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 18:49:19.025479 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 18:49:19.025487 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 18:49:19.025492 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 18:49:19.025498 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 18:49:19.025504 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:49:19.025510 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:49:19.025515 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 18:49:19.025521 kernel: NET: Registered PF_XDP protocol family Jan 23 18:49:19.025624 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Jan 23 18:49:19.025728 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Jan 23 18:49:19.025824 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 23 18:49:19.025930 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 23 18:49:19.026357 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 23 18:49:19.027056 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned Jan 23 18:49:19.027162 kernel: pci 0000:00:02.7: bridge 
window [io 0x2000-0x2fff]: assigned Jan 23 18:49:19.027279 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned Jan 23 18:49:19.027383 kernel: pci 0000:01:00.0: ROM [mem 0x81280000-0x812fffff pref]: assigned Jan 23 18:49:19.027484 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 23 18:49:19.027580 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff] Jan 23 18:49:19.027676 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref] Jan 23 18:49:19.027771 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 23 18:49:19.027897 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff] Jan 23 18:49:19.027994 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 23 18:49:19.028090 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff] Jan 23 18:49:19.028186 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref] Jan 23 18:49:19.028302 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 23 18:49:19.028403 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref] Jan 23 18:49:19.028499 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 23 18:49:19.028594 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff] Jan 23 18:49:19.028690 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref] Jan 23 18:49:19.028786 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 23 18:49:19.028890 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff] Jan 23 18:49:19.028986 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref] Jan 23 18:49:19.029087 kernel: pci 0000:07:00.0: ROM [mem 0x80c80000-0x80cfffff pref]: assigned Jan 23 18:49:19.029182 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 23 18:49:19.029292 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Jan 23 18:49:19.029388 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff] Jan 23 
18:49:19.029484 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref] Jan 23 18:49:19.029579 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 23 18:49:19.029674 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Jan 23 18:49:19.029774 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff] Jan 23 18:49:19.029877 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref] Jan 23 18:49:19.029973 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 23 18:49:19.030068 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Jan 23 18:49:19.030163 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff] Jan 23 18:49:19.030269 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref] Jan 23 18:49:19.030362 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 18:49:19.030451 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 18:49:19.030539 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 18:49:19.030631 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 23 18:49:19.030721 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 23 18:49:19.030810 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Jan 23 18:49:19.030919 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Jan 23 18:49:19.031014 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Jan 23 18:49:19.031119 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Jan 23 18:49:19.031219 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Jan 23 18:49:19.031338 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Jan 23 18:49:19.031439 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Jan 23 18:49:19.031539 kernel: pci_bus 0000:05: resource 1 [mem 
0x80f00000-0x80ffffff] Jan 23 18:49:19.031633 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Jan 23 18:49:19.031734 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Jan 23 18:49:19.031827 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Jan 23 18:49:19.031939 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Jan 23 18:49:19.032032 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Jan 23 18:49:19.032128 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Jan 23 18:49:19.032228 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Jan 23 18:49:19.032366 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Jan 23 18:49:19.032460 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Jan 23 18:49:19.032562 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Jan 23 18:49:19.032655 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Jan 23 18:49:19.032747 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Jan 23 18:49:19.032755 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 18:49:19.032761 kernel: PCI: CLS 0 bytes, default 64 Jan 23 18:49:19.032767 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 18:49:19.032773 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Jan 23 18:49:19.032779 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x229833f6470, max_idle_ns: 440795327230 ns Jan 23 18:49:19.032787 kernel: Initialise system trusted keyrings Jan 23 18:49:19.032793 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 18:49:19.032798 kernel: Key type asymmetric registered Jan 23 18:49:19.032804 kernel: Asymmetric key parser 'x509' registered Jan 23 18:49:19.032810 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 18:49:19.032816 kernel: io scheduler 
mq-deadline registered Jan 23 18:49:19.032822 kernel: io scheduler kyber registered Jan 23 18:49:19.032827 kernel: io scheduler bfq registered Jan 23 18:49:19.032936 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 23 18:49:19.033036 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 23 18:49:19.033133 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 23 18:49:19.033228 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 23 18:49:19.033350 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 23 18:49:19.033449 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 23 18:49:19.033546 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 23 18:49:19.033642 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 23 18:49:19.033738 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 23 18:49:19.033833 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 23 18:49:19.033941 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 23 18:49:19.034037 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 23 18:49:19.034133 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 23 18:49:19.034229 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 23 18:49:19.034346 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 23 18:49:19.034442 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 23 18:49:19.034452 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 18:49:19.034548 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 23 18:49:19.034646 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 23 18:49:19.034653 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 18:49:19.034659 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 23 18:49:19.034665 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 18:49:19.034671 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 
18:49:19.034676 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 18:49:19.034686 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 18:49:19.034692 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 18:49:19.034697 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 18:49:19.034798 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 23 18:49:19.034898 kernel: rtc_cmos 00:03: registered as rtc0 Jan 23 18:49:19.034991 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T18:49:18 UTC (1769194158) Jan 23 18:49:19.035082 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 23 18:49:19.035092 kernel: amd_pstate: The CPPC feature is supported but currently disabled by the BIOS. Please enable it if your BIOS has the CPPC option. Jan 23 18:49:19.035098 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 23 18:49:19.035104 kernel: efifb: probing for efifb Jan 23 18:49:19.035109 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Jan 23 18:49:19.035115 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 23 18:49:19.035120 kernel: efifb: scrolling: redraw Jan 23 18:49:19.035126 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 18:49:19.035132 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 18:49:19.035138 kernel: fb0: EFI VGA frame buffer device Jan 23 18:49:19.035146 kernel: pstore: Using crash dump compression: deflate Jan 23 18:49:19.035151 kernel: pstore: Registered efi_pstore as persistent store backend Jan 23 18:49:19.035157 kernel: NET: Registered PF_INET6 protocol family Jan 23 18:49:19.035163 kernel: Segment Routing with IPv6 Jan 23 18:49:19.035169 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 18:49:19.035175 kernel: NET: Registered PF_PACKET protocol family Jan 23 18:49:19.035180 kernel: Key type dns_resolver registered Jan 23 18:49:19.035186 
kernel: IPI shorthand broadcast: enabled Jan 23 18:49:19.035192 kernel: sched_clock: Marking stable (2867011256, 233686569)->(3132000125, -31302300) Jan 23 18:49:19.035200 kernel: registered taskstats version 1 Jan 23 18:49:19.035206 kernel: Loading compiled-in X.509 certificates Jan 23 18:49:19.035211 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6' Jan 23 18:49:19.035217 kernel: Demotion targets for Node 0: null Jan 23 18:49:19.035223 kernel: Key type .fscrypt registered Jan 23 18:49:19.035228 kernel: Key type fscrypt-provisioning registered Jan 23 18:49:19.035234 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 18:49:19.035240 kernel: ima: Allocated hash algorithm: sha1 Jan 23 18:49:19.035257 kernel: ima: No architecture policies found Jan 23 18:49:19.035265 kernel: clk: Disabling unused clocks Jan 23 18:49:19.035271 kernel: Warning: unable to open an initial console. Jan 23 18:49:19.035277 kernel: Freeing unused kernel image (initmem) memory: 46200K Jan 23 18:49:19.035282 kernel: Write protecting the kernel read-only data: 40960k Jan 23 18:49:19.035288 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 18:49:19.035294 kernel: Run /init as init process Jan 23 18:49:19.035299 kernel: with arguments: Jan 23 18:49:19.035305 kernel: /init Jan 23 18:49:19.035310 kernel: with environment: Jan 23 18:49:19.035318 kernel: HOME=/ Jan 23 18:49:19.035324 kernel: TERM=linux Jan 23 18:49:19.035330 systemd[1]: Successfully made /usr/ read-only. 
Jan 23 18:49:19.035339 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:49:19.035345 systemd[1]: Detected virtualization kvm. Jan 23 18:49:19.035351 systemd[1]: Detected architecture x86-64. Jan 23 18:49:19.035357 systemd[1]: Running in initrd. Jan 23 18:49:19.035363 systemd[1]: No hostname configured, using default hostname. Jan 23 18:49:19.035372 systemd[1]: Hostname set to . Jan 23 18:49:19.035378 systemd[1]: Initializing machine ID from VM UUID. Jan 23 18:49:19.035384 systemd[1]: Queued start job for default target initrd.target. Jan 23 18:49:19.035390 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:49:19.035396 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:49:19.035403 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 18:49:19.035409 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:49:19.035417 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 18:49:19.035424 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 18:49:19.035431 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 18:49:19.035437 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 18:49:19.035443 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 23 18:49:19.035449 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:49:19.035455 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:49:19.035464 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:49:19.035470 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:49:19.035476 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:49:19.035482 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:49:19.035488 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:49:19.035494 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 18:49:19.035500 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 18:49:19.035507 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:49:19.035513 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:49:19.035522 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:49:19.035528 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:49:19.035534 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 18:49:19.035540 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:49:19.035546 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 18:49:19.035552 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 18:49:19.035558 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 18:49:19.035564 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:49:19.035572 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 18:49:19.035578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:49:19.035584 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 18:49:19.035591 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:49:19.035616 systemd-journald[196]: Collecting audit messages is disabled. Jan 23 18:49:19.035633 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 18:49:19.035640 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 18:49:19.035646 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 18:49:19.035654 kernel: Bridge firewalling registered Jan 23 18:49:19.035660 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:49:19.035666 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:49:19.035673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:49:19.035679 systemd-journald[196]: Journal started Jan 23 18:49:19.035693 systemd-journald[196]: Runtime Journal (/run/log/journal/f8a6c74755e5409e8bb16b971b52cb5d) is 8M, max 76.1M, 68.1M free. Jan 23 18:49:18.990536 systemd-modules-load[198]: Inserted module 'overlay' Jan 23 18:49:19.023314 systemd-modules-load[198]: Inserted module 'br_netfilter' Jan 23 18:49:19.038974 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:49:19.039921 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:49:19.044339 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 18:49:19.046372 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 23 18:49:19.047943 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:49:19.049621 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:49:19.060305 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 18:49:19.065069 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:49:19.067781 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:49:19.069773 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:49:19.070754 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:49:19.073378 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 18:49:19.086309 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 18:49:19.103288 systemd-resolved[234]: Positive Trust Anchors: Jan 23 18:49:19.103298 systemd-resolved[234]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:49:19.103318 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:49:19.107429 systemd-resolved[234]: Defaulting to hostname 'linux'. Jan 23 18:49:19.108295 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:49:19.109912 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:49:19.166281 kernel: SCSI subsystem initialized Jan 23 18:49:19.173271 kernel: Loading iSCSI transport class v2.0-870. Jan 23 18:49:19.190305 kernel: iscsi: registered transport (tcp) Jan 23 18:49:19.214576 kernel: iscsi: registered transport (qla4xxx) Jan 23 18:49:19.214653 kernel: QLogic iSCSI HBA Driver Jan 23 18:49:19.238133 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:49:19.255056 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:49:19.259079 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:49:19.357177 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 18:49:19.360990 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 23 18:49:19.445371 kernel: raid6: avx512x4 gen() 19201 MB/s Jan 23 18:49:19.464355 kernel: raid6: avx512x2 gen() 22683 MB/s Jan 23 18:49:19.482336 kernel: raid6: avx512x1 gen() 28328 MB/s Jan 23 18:49:19.500298 kernel: raid6: avx2x4 gen() 47083 MB/s Jan 23 18:49:19.518294 kernel: raid6: avx2x2 gen() 49712 MB/s Jan 23 18:49:19.537075 kernel: raid6: avx2x1 gen() 39127 MB/s Jan 23 18:49:19.537137 kernel: raid6: using algorithm avx2x2 gen() 49712 MB/s Jan 23 18:49:19.556059 kernel: raid6: .... xor() 36812 MB/s, rmw enabled Jan 23 18:49:19.556126 kernel: raid6: using avx512x2 recovery algorithm Jan 23 18:49:19.572301 kernel: xor: automatically using best checksumming function avx Jan 23 18:49:19.683282 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 18:49:19.694142 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:49:19.696088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:49:19.750687 systemd-udevd[446]: Using default interface naming scheme 'v255'. Jan 23 18:49:19.763327 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:49:19.768452 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 18:49:19.810369 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation Jan 23 18:49:19.864949 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:49:19.869156 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:49:19.987569 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:49:19.993512 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 23 18:49:20.083342 kernel: ACPI: bus type USB registered Jan 23 18:49:20.089307 kernel: usbcore: registered new interface driver usbfs Jan 23 18:49:20.101320 kernel: usbcore: registered new interface driver hub Jan 23 18:49:20.106776 kernel: usbcore: registered new device driver usb Jan 23 18:49:20.106813 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 23 18:49:20.122674 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 18:49:20.140260 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 18:49:20.146303 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 23 18:49:20.146461 kernel: scsi host0: Virtio SCSI HBA Jan 23 18:49:20.156786 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 23 18:49:20.156342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:49:20.156433 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:49:20.157642 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:49:20.163375 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 23 18:49:20.162032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:49:20.162557 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:49:20.173695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:49:20.174163 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:49:20.176324 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 18:49:20.186945 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 18:49:20.187124 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 23 18:49:20.187266 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 23 18:49:20.192344 kernel: AES CTR mode by8 optimization enabled Jan 23 18:49:20.197308 kernel: hub 1-0:1.0: USB hub found Jan 23 18:49:20.197481 kernel: hub 1-0:1.0: 4 ports detected Jan 23 18:49:20.202261 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 23 18:49:20.204105 kernel: sd 0:0:0:0: Power-on or device reset occurred Jan 23 18:49:20.204301 kernel: libata version 3.00 loaded. Jan 23 18:49:20.204311 kernel: hub 2-0:1.0: USB hub found Jan 23 18:49:20.212326 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Jan 23 18:49:20.212926 kernel: hub 2-0:1.0: 4 ports detected Jan 23 18:49:20.213239 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 23 18:49:20.214019 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 23 18:49:20.214158 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 23 18:49:20.228870 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 18:49:20.228897 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 18:49:20.228907 kernel: GPT:17805311 != 160006143 Jan 23 18:49:20.228915 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 18:49:20.235996 kernel: GPT:17805311 != 160006143 Jan 23 18:49:20.236019 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 23 18:49:20.236028 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 18:49:20.239302 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 23 18:49:20.248144 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 18:49:20.248318 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 18:49:20.254035 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 18:49:20.254185 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 18:49:20.254334 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 18:49:20.258103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:49:20.260172 kernel: scsi host1: ahci Jan 23 18:49:20.263294 kernel: scsi host2: ahci Jan 23 18:49:20.268017 kernel: scsi host3: ahci Jan 23 18:49:20.268172 kernel: scsi host4: ahci Jan 23 18:49:20.270887 kernel: scsi host5: ahci Jan 23 18:49:20.271037 kernel: scsi host6: ahci Jan 23 18:49:20.272289 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 51 lpm-pol 1 Jan 23 18:49:20.277275 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 51 lpm-pol 1 Jan 23 18:49:20.277294 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 51 lpm-pol 1 Jan 23 18:49:20.280146 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 51 lpm-pol 1 Jan 23 18:49:20.283436 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 51 lpm-pol 1 Jan 23 18:49:20.288003 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 51 lpm-pol 1 Jan 23 18:49:20.309437 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 23 18:49:20.321506 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 23 18:49:20.327476 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Jan 23 18:49:20.327819 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 23 18:49:20.334450 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 18:49:20.336125 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 18:49:20.374008 disk-uuid[644]: Primary Header is updated. Jan 23 18:49:20.374008 disk-uuid[644]: Secondary Entries is updated. Jan 23 18:49:20.374008 disk-uuid[644]: Secondary Header is updated. Jan 23 18:49:20.375854 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 18:49:20.438265 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 23 18:49:20.587293 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 18:49:20.597283 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 18:49:20.597344 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 18:49:20.605304 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 18:49:20.620308 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 23 18:49:20.620359 kernel: ata1.00: LPM support broken, forcing max_power Jan 23 18:49:20.620382 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 23 18:49:20.620401 kernel: ata1.00: applying bridge limits Jan 23 18:49:20.625745 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 18:49:20.625797 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 23 18:49:20.629317 kernel: ata1.00: LPM support broken, forcing max_power Jan 23 18:49:20.635465 kernel: ata1.00: configured for UDMA/100 Jan 23 18:49:20.642333 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 18:49:20.680178 kernel: usbcore: registered new interface driver usbhid Jan 23 18:49:20.680278 kernel: usbhid: USB HID core driver Jan 23 18:49:20.693629 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 23 
18:49:20.693699 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 23 18:49:20.713305 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 23 18:49:20.713717 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 18:49:20.732295 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jan 23 18:49:21.086823 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 18:49:21.089386 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:49:21.090988 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:49:21.091847 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:49:21.095078 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 18:49:21.137890 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:49:21.401342 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 18:49:21.404338 disk-uuid[645]: The operation has completed successfully. Jan 23 18:49:21.500811 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 18:49:21.500998 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 18:49:21.539436 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 18:49:21.569480 sh[678]: Success Jan 23 18:49:21.605886 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 18:49:21.605928 kernel: device-mapper: uevent: version 1.0.3 Jan 23 18:49:21.609325 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 18:49:21.633323 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 23 18:49:21.705039 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Jan 23 18:49:21.712380 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 18:49:21.726665 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 18:49:21.747302 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (691) Jan 23 18:49:21.747376 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841 Jan 23 18:49:21.753805 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:49:21.772490 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 18:49:21.772544 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 18:49:21.776890 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 18:49:21.782543 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 18:49:21.784391 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:49:21.785829 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 18:49:21.787193 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 18:49:21.793444 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 18:49:21.838306 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (724) Jan 23 18:49:21.850031 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:49:21.850079 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:49:21.866796 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 18:49:21.866847 kernel: BTRFS info (device sda6): turning on async discard Jan 23 18:49:21.866870 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 18:49:21.880605 kernel: BTRFS info (device sda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:49:21.882900 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 18:49:21.887453 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 18:49:22.050057 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:49:22.054439 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:49:22.061995 ignition[791]: Ignition 2.22.0 Jan 23 18:49:22.062011 ignition[791]: Stage: fetch-offline Jan 23 18:49:22.062057 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:22.062072 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:22.062193 ignition[791]: parsed url from cmdline: "" Jan 23 18:49:22.062199 ignition[791]: no config URL provided Jan 23 18:49:22.062215 ignition[791]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:49:22.062228 ignition[791]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:49:22.062236 ignition[791]: failed to fetch config: resource requires networking Jan 23 18:49:22.063458 ignition[791]: Ignition finished successfully Jan 23 18:49:22.072853 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 23 18:49:22.115630 systemd-networkd[864]: lo: Link UP Jan 23 18:49:22.115646 systemd-networkd[864]: lo: Gained carrier Jan 23 18:49:22.120136 systemd-networkd[864]: Enumeration completed Jan 23 18:49:22.120320 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:49:22.121639 systemd-networkd[864]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:49:22.121646 systemd-networkd[864]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:49:22.122637 systemd[1]: Reached target network.target - Network. Jan 23 18:49:22.123228 systemd-networkd[864]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:49:22.123238 systemd-networkd[864]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:49:22.123971 systemd-networkd[864]: eth0: Link UP Jan 23 18:49:22.126425 systemd-networkd[864]: eth1: Link UP Jan 23 18:49:22.126832 systemd-networkd[864]: eth0: Gained carrier Jan 23 18:49:22.126850 systemd-networkd[864]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:49:22.127766 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 18:49:22.134000 systemd-networkd[864]: eth1: Gained carrier Jan 23 18:49:22.134020 systemd-networkd[864]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 18:49:22.164449 systemd-networkd[864]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 23 18:49:22.170607 ignition[868]: Ignition 2.22.0 Jan 23 18:49:22.171132 ignition[868]: Stage: fetch Jan 23 18:49:22.171572 ignition[868]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:22.171909 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:22.171977 ignition[868]: parsed url from cmdline: "" Jan 23 18:49:22.171980 ignition[868]: no config URL provided Jan 23 18:49:22.171985 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:49:22.171992 ignition[868]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:49:22.172012 ignition[868]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 23 18:49:22.172655 ignition[868]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 23 18:49:22.189330 systemd-networkd[864]: eth0: DHCPv4 address 77.42.79.158/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 23 18:49:22.372971 ignition[868]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 23 18:49:22.382398 ignition[868]: GET result: OK Jan 23 18:49:22.382511 ignition[868]: parsing config with SHA512: eaae2ce7b939c347ff0dc432eb0c391f5582f129148bbc88354abb5283b2d3ed046411649db524bd568cc5e9dc6a527306e71f3048a4be86657ec4e582a24f46 Jan 23 18:49:22.387662 unknown[868]: fetched base config from "system" Jan 23 18:49:22.387680 unknown[868]: fetched base config from "system" Jan 23 18:49:22.388552 ignition[868]: fetch: fetch complete Jan 23 18:49:22.387691 unknown[868]: fetched user config from "hetzner" Jan 23 18:49:22.388570 ignition[868]: fetch: fetch passed Jan 23 18:49:22.388650 ignition[868]: Ignition finished successfully Jan 23 18:49:22.394893 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 18:49:22.398418 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 23 18:49:22.455215 ignition[875]: Ignition 2.22.0 Jan 23 18:49:22.455239 ignition[875]: Stage: kargs Jan 23 18:49:22.455727 ignition[875]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:22.455747 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:22.456799 ignition[875]: kargs: kargs passed Jan 23 18:49:22.456873 ignition[875]: Ignition finished successfully Jan 23 18:49:22.460219 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 18:49:22.465086 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 18:49:22.517555 ignition[881]: Ignition 2.22.0 Jan 23 18:49:22.517578 ignition[881]: Stage: disks Jan 23 18:49:22.517782 ignition[881]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:22.517801 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:22.518862 ignition[881]: disks: disks passed Jan 23 18:49:22.518949 ignition[881]: Ignition finished successfully Jan 23 18:49:22.522331 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 18:49:22.524076 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 18:49:22.525455 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 18:49:22.526958 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:49:22.528512 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:49:22.530141 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:49:22.533320 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 18:49:22.580911 systemd-fsck[890]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 18:49:22.586774 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 18:49:22.590902 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 23 18:49:22.737293 kernel: EXT4-fs (sda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none. Jan 23 18:49:22.739322 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 18:49:22.740917 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 18:49:22.744348 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:49:22.747572 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 18:49:22.753440 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 18:49:22.757285 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 18:49:22.757336 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:49:22.781746 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (898) Jan 23 18:49:22.782335 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 18:49:22.798369 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:49:22.798400 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:49:22.800459 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 18:49:22.813304 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 18:49:22.813372 kernel: BTRFS info (device sda6): turning on async discard Jan 23 18:49:22.816906 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 18:49:22.827484 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 18:49:22.860790 coreos-metadata[900]: Jan 23 18:49:22.858 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 23 18:49:22.860790 coreos-metadata[900]: Jan 23 18:49:22.860 INFO Fetch successful Jan 23 18:49:22.860790 coreos-metadata[900]: Jan 23 18:49:22.860 INFO wrote hostname ci-4459-2-3-7-efa5270b02 to /sysroot/etc/hostname Jan 23 18:49:22.863875 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 18:49:22.889814 initrd-setup-root[926]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 18:49:22.900056 initrd-setup-root[933]: cut: /sysroot/etc/group: No such file or directory Jan 23 18:49:22.908287 initrd-setup-root[940]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 18:49:22.916838 initrd-setup-root[947]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 18:49:23.088566 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 18:49:23.092330 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 18:49:23.095543 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 18:49:23.119335 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:49:23.125754 kernel: BTRFS info (device sda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:49:23.158061 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 18:49:23.178684 ignition[1015]: INFO : Ignition 2.22.0 Jan 23 18:49:23.180006 ignition[1015]: INFO : Stage: mount Jan 23 18:49:23.180006 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:23.180006 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:23.182496 ignition[1015]: INFO : mount: mount passed Jan 23 18:49:23.182496 ignition[1015]: INFO : Ignition finished successfully Jan 23 18:49:23.184493 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 23 18:49:23.187794 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 18:49:23.218348 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:49:23.255314 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1026) Jan 23 18:49:23.255381 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:49:23.260577 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:49:23.274313 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 18:49:23.274362 kernel: BTRFS info (device sda6): turning on async discard Jan 23 18:49:23.281288 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 18:49:23.285608 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:49:23.338939 ignition[1043]: INFO : Ignition 2.22.0 Jan 23 18:49:23.338939 ignition[1043]: INFO : Stage: files Jan 23 18:49:23.340904 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:23.340904 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:23.340904 ignition[1043]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:49:23.343334 ignition[1043]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:49:23.343334 ignition[1043]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:49:23.345698 ignition[1043]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:49:23.346833 ignition[1043]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:49:23.347621 ignition[1043]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:49:23.347365 unknown[1043]: wrote ssh authorized keys file for user: core Jan 23 18:49:23.352186 ignition[1043]: INFO : files: createFilesystemsFiles: 
createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:49:23.353632 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 18:49:23.450582 systemd-networkd[864]: eth1: Gained IPv6LL Jan 23 18:49:23.633978 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 18:49:23.939640 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:49:23.939640 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 18:49:23.942804 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 23 18:49:23.962517 systemd-networkd[864]: eth0: Gained IPv6LL Jan 23 18:49:24.237420 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 23 18:49:24.349159 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 18:49:24.350599 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:49:24.350599 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:49:24.350599 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:49:24.350599 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:49:24.350599 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): 
[started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:49:24.350599 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:49:24.350599 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:49:24.350599 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:49:24.357429 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:49:24.357429 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:49:24.357429 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:49:24.357429 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:49:24.357429 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:49:24.357429 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 23 18:49:24.673854 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 23 18:49:25.223522 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:49:25.223522 ignition[1043]: INFO : 
files: op(c): [started] processing unit "prepare-helm.service" Jan 23 18:49:25.226664 ignition[1043]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:49:25.235938 ignition[1043]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:49:25.235938 ignition[1043]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 23 18:49:25.235938 ignition[1043]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 23 18:49:25.241232 ignition[1043]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 18:49:25.241232 ignition[1043]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 18:49:25.241232 ignition[1043]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 23 18:49:25.241232 ignition[1043]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 23 18:49:25.241232 ignition[1043]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 18:49:25.241232 ignition[1043]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:49:25.241232 ignition[1043]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:49:25.241232 ignition[1043]: INFO : files: files passed Jan 23 18:49:25.241232 ignition[1043]: INFO : Ignition finished successfully Jan 23 18:49:25.243595 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 23 18:49:25.249493 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 18:49:25.254501 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:49:25.274010 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:49:25.275617 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 18:49:25.287283 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:49:25.287283 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:49:25.290849 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:49:25.294379 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:49:25.295836 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:49:25.298947 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:49:25.383525 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:49:25.383747 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:49:25.386825 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:49:25.389081 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:49:25.390328 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:49:25.391792 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:49:25.427575 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:49:25.431528 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jan 23 18:49:25.465223 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:49:25.467630 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:49:25.468917 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:49:25.470883 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:49:25.471161 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:49:25.473729 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:49:25.475763 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:49:25.477437 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:49:25.479554 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:49:25.481333 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:49:25.483188 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:49:25.484965 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:49:25.486859 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:49:25.488833 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:49:25.490923 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:49:25.492700 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:49:25.494701 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:49:25.494898 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:49:25.497530 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:49:25.499469 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:49:25.501130 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 23 18:49:25.502862 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:49:25.503966 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:49:25.504179 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:49:25.506967 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:49:25.507241 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:49:25.508950 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:49:25.509188 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:49:25.510720 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 18:49:25.511039 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 18:49:25.515554 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:49:25.518360 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:49:25.518727 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:49:25.525548 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:49:25.527529 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:49:25.528831 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:49:25.531384 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:49:25.532585 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:49:25.545973 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:49:25.546183 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 23 18:49:25.569393 ignition[1097]: INFO : Ignition 2.22.0 Jan 23 18:49:25.569393 ignition[1097]: INFO : Stage: umount Jan 23 18:49:25.572175 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:25.572175 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:25.572175 ignition[1097]: INFO : umount: umount passed Jan 23 18:49:25.572175 ignition[1097]: INFO : Ignition finished successfully Jan 23 18:49:25.573810 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 18:49:25.574017 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:49:25.576477 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:49:25.576611 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:49:25.577777 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:49:25.577854 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:49:25.578896 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 18:49:25.578993 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 18:49:25.580455 systemd[1]: Stopped target network.target - Network. Jan 23 18:49:25.582982 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 18:49:25.583067 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:49:25.586929 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:49:25.588600 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:49:25.593372 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:49:25.596154 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:49:25.596969 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:49:25.598634 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 23 18:49:25.598769 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:49:25.600050 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:49:25.600137 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:49:25.601380 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:49:25.601481 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:49:25.602750 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:49:25.602836 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 18:49:25.604389 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:49:25.605710 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:49:25.611616 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:49:25.612774 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:49:25.613040 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:49:25.620568 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 18:49:25.621128 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:49:25.621399 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:49:25.624938 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 18:49:25.625465 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:49:25.625662 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:49:25.629835 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:49:25.630849 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:49:25.630968 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:49:25.632371 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jan 23 18:49:25.632484 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 18:49:25.635299 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 18:49:25.637822 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 18:49:25.637969 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 18:49:25.638865 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 18:49:25.638971 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:49:25.640074 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 18:49:25.640157 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:49:25.641538 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 18:49:25.641622 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:49:25.643387 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:49:25.650014 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 18:49:25.650136 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:49:25.664733 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 18:49:25.664995 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:49:25.668678 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 18:49:25.668777 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:49:25.673463 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 18:49:25.673535 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:49:25.674219 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 18:49:25.674325 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 18:49:25.677849 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 18:49:25.677947 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 18:49:25.682438 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 18:49:25.682519 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 18:49:25.685720 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 18:49:25.686479 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 18:49:25.686557 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:49:25.688396 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 18:49:25.688470 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:49:25.690403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 18:49:25.690474 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:49:25.694677 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 18:49:25.694808 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 18:49:25.694958 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:49:25.695796 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 18:49:25.696045 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 18:49:25.714551 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 18:49:25.714810 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 18:49:25.716985 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 18:49:25.719811 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 18:49:25.746001 systemd[1]: Switching root.
Jan 23 18:49:25.781402 systemd-journald[196]: Journal stopped
Jan 23 18:49:27.472601 systemd-journald[196]: Received SIGTERM from PID 1 (systemd).
Jan 23 18:49:27.472666 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 18:49:27.472681 kernel: SELinux: policy capability open_perms=1
Jan 23 18:49:27.472690 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 18:49:27.472699 kernel: SELinux: policy capability always_check_network=0
Jan 23 18:49:27.472710 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 18:49:27.472730 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 18:49:27.472742 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 18:49:27.472751 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 18:49:27.472759 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 18:49:27.472768 kernel: audit: type=1403 audit(1769194166.123:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 18:49:27.472778 systemd[1]: Successfully loaded SELinux policy in 92.839ms.
Jan 23 18:49:27.472802 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.702ms.
Jan 23 18:49:27.472812 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 18:49:27.472824 systemd[1]: Detected virtualization kvm.
Jan 23 18:49:27.472833 systemd[1]: Detected architecture x86-64.
Jan 23 18:49:27.472843 systemd[1]: Detected first boot.
Jan 23 18:49:27.472852 systemd[1]: Hostname set to .
Jan 23 18:49:27.472861 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 18:49:27.472870 zram_generator::config[1141]: No configuration found.
Jan 23 18:49:27.472881 kernel: Guest personality initialized and is inactive
Jan 23 18:49:27.472890 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 18:49:27.472898 kernel: Initialized host personality
Jan 23 18:49:27.472909 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 18:49:27.472927 systemd[1]: Populated /etc with preset unit settings.
Jan 23 18:49:27.472939 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 18:49:27.472948 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 18:49:27.472957 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 18:49:27.472966 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 18:49:27.472975 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 18:49:27.472986 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 18:49:27.472994 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 18:49:27.473003 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 18:49:27.473012 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 18:49:27.473021 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 18:49:27.473030 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 18:49:27.473039 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 18:49:27.473048 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:49:27.473057 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:49:27.473068 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 18:49:27.473077 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 18:49:27.473086 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 18:49:27.473096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 18:49:27.473105 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 18:49:27.473114 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:49:27.473125 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:49:27.473134 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 18:49:27.473143 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 18:49:27.473152 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 18:49:27.473161 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 18:49:27.473170 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 18:49:27.473183 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 18:49:27.473192 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 18:49:27.473201 systemd[1]: Reached target swap.target - Swaps.
Jan 23 18:49:27.473210 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 18:49:27.473221 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 18:49:27.473231 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 18:49:27.473240 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:49:27.473280 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:49:27.473302 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:49:27.473311 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 18:49:27.473320 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 18:49:27.473329 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 18:49:27.473338 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 18:49:27.473349 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:49:27.473358 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 18:49:27.473366 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 18:49:27.473375 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 18:49:27.473384 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 18:49:27.473394 systemd[1]: Reached target machines.target - Containers.
Jan 23 18:49:27.473404 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 18:49:27.473413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:49:27.473424 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 18:49:27.473433 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 18:49:27.473442 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 18:49:27.473451 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 18:49:27.473460 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 18:49:27.473469 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 18:49:27.473477 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 18:49:27.473486 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 18:49:27.473497 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 18:49:27.473506 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 18:49:27.473515 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 18:49:27.473523 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 18:49:27.473533 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:49:27.473542 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 18:49:27.473553 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 18:49:27.473562 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 18:49:27.473571 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 18:49:27.473580 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 18:49:27.473591 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 18:49:27.473601 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 18:49:27.473610 systemd[1]: Stopped verity-setup.service.
Jan 23 18:49:27.473619 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:49:27.473628 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 18:49:27.473637 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 18:49:27.473646 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 18:49:27.473660 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 18:49:27.473669 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 18:49:27.473680 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 18:49:27.473689 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:49:27.473698 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 18:49:27.473707 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 18:49:27.473716 kernel: loop: module loaded
Jan 23 18:49:27.473725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 18:49:27.473734 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 18:49:27.473743 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 18:49:27.473752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 18:49:27.473763 kernel: fuse: init (API version 7.41)
Jan 23 18:49:27.473772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 18:49:27.473781 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 18:49:27.473790 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 18:49:27.473799 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 18:49:27.473807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 18:49:27.473816 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:49:27.473825 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:49:27.473834 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 18:49:27.473867 systemd-journald[1225]: Collecting audit messages is disabled.
Jan 23 18:49:27.473884 kernel: ACPI: bus type drm_connector registered
Jan 23 18:49:27.473895 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 18:49:27.473906 systemd-journald[1225]: Journal started
Jan 23 18:49:27.473929 systemd-journald[1225]: Runtime Journal (/run/log/journal/f8a6c74755e5409e8bb16b971b52cb5d) is 8M, max 76.1M, 68.1M free.
Jan 23 18:49:27.473961 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 18:49:27.069427 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 18:49:27.096773 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 23 18:49:27.098155 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 18:49:27.479262 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 18:49:27.479288 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 18:49:27.491904 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 18:49:27.494330 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 18:49:27.495618 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 18:49:27.496660 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 18:49:27.496681 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 18:49:27.497786 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 18:49:27.501346 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 18:49:27.501802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:49:27.504343 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 18:49:27.511876 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 18:49:27.512293 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 18:49:27.513336 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 18:49:27.514314 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 18:49:27.515810 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:49:27.518333 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 18:49:27.522629 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 18:49:27.524865 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 18:49:27.525695 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 18:49:27.540110 systemd-journald[1225]: Time spent on flushing to /var/log/journal/f8a6c74755e5409e8bb16b971b52cb5d is 51.358ms for 1248 entries.
Jan 23 18:49:27.540110 systemd-journald[1225]: System Journal (/var/log/journal/f8a6c74755e5409e8bb16b971b52cb5d) is 8M, max 584.8M, 576.8M free.
Jan 23 18:49:27.629874 systemd-journald[1225]: Received client request to flush runtime journal.
Jan 23 18:49:27.629914 kernel: loop0: detected capacity change from 0 to 110984
Jan 23 18:49:27.563339 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 18:49:27.563801 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 18:49:27.565385 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 18:49:27.601681 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:49:27.633356 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 18:49:27.635040 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 18:49:27.645519 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 18:49:27.663206 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 18:49:27.667668 kernel: loop1: detected capacity change from 0 to 219144
Jan 23 18:49:27.667205 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 18:49:27.681474 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 18:49:27.696123 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Jan 23 18:49:27.696428 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Jan 23 18:49:27.704299 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:49:27.705272 kernel: loop2: detected capacity change from 0 to 8
Jan 23 18:49:27.732278 kernel: loop3: detected capacity change from 0 to 128560
Jan 23 18:49:27.762283 kernel: loop4: detected capacity change from 0 to 110984
Jan 23 18:49:27.781273 kernel: loop5: detected capacity change from 0 to 219144
Jan 23 18:49:27.798284 kernel: loop6: detected capacity change from 0 to 8
Jan 23 18:49:27.805272 kernel: loop7: detected capacity change from 0 to 128560
Jan 23 18:49:27.823530 (sd-merge)[1293]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 23 18:49:27.826626 (sd-merge)[1293]: Merged extensions into '/usr'.
Jan 23 18:49:27.832150 systemd[1]: Reload requested from client PID 1266 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 18:49:27.832168 systemd[1]: Reloading...
Jan 23 18:49:27.907909 zram_generator::config[1319]: No configuration found.
Jan 23 18:49:28.008682 ldconfig[1261]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 18:49:28.067790 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 18:49:28.067860 systemd[1]: Reloading finished in 235 ms.
Jan 23 18:49:28.097870 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 18:49:28.098647 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 18:49:28.100634 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 18:49:28.111206 systemd[1]: Starting ensure-sysext.service...
Jan 23 18:49:28.114784 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 18:49:28.119759 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:49:28.137198 systemd[1]: Reload requested from client PID 1363 ('systemctl') (unit ensure-sysext.service)...
Jan 23 18:49:28.137210 systemd[1]: Reloading...
Jan 23 18:49:28.164763 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 18:49:28.165225 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 18:49:28.165901 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 18:49:28.166492 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 18:49:28.170578 systemd-udevd[1365]: Using default interface naming scheme 'v255'.
Jan 23 18:49:28.172625 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 18:49:28.174576 systemd-tmpfiles[1364]: ACLs are not supported, ignoring.
Jan 23 18:49:28.174655 systemd-tmpfiles[1364]: ACLs are not supported, ignoring.
Jan 23 18:49:28.184331 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 18:49:28.184453 systemd-tmpfiles[1364]: Skipping /boot
Jan 23 18:49:28.197535 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 18:49:28.198351 systemd-tmpfiles[1364]: Skipping /boot
Jan 23 18:49:28.235284 zram_generator::config[1392]: No configuration found.
Jan 23 18:49:28.451316 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 18:49:28.454782 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 18:49:28.455308 systemd[1]: Reloading finished in 317 ms.
Jan 23 18:49:28.467281 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 23 18:49:28.466028 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:49:28.474751 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:49:28.483090 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 18:49:28.486388 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 18:49:28.489598 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 18:49:28.494135 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 18:49:28.502583 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 18:49:28.509344 kernel: ACPI: button: Power Button [PWRF]
Jan 23 18:49:28.504185 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 18:49:28.512106 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:49:28.512645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:49:28.514335 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 18:49:28.519811 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 18:49:28.528961 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 18:49:28.529487 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:49:28.530558 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:49:28.535311 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 18:49:28.535624 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:49:28.543302 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:49:28.545506 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 23 18:49:28.545767 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 23 18:49:28.545902 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 18:49:28.544445 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:49:28.544595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:49:28.544678 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:49:28.544759 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:49:28.553733 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:49:28.554114 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:49:28.560097 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 18:49:28.560594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:49:28.560667 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:49:28.560766 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:49:28.563998 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 18:49:28.564985 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 18:49:28.565145 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 18:49:28.577387 systemd[1]: Finished ensure-sysext.service.
Jan 23 18:49:28.578038 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 18:49:28.578208 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 18:49:28.579507 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 18:49:28.582038 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 18:49:28.592580 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 18:49:28.592784 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 18:49:28.593282 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 18:49:28.598611 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 18:49:28.600009 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 18:49:28.600886 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 18:49:28.601563 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 18:49:28.606206 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 18:49:28.606609 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 18:49:28.636268 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 18:49:28.637199 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 18:49:28.639359 augenrules[1522]: No rules
Jan 23 18:49:28.642139 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 18:49:28.642399 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 18:49:28.652644 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 23 18:49:28.652688 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:49:28.652785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:49:28.655267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 18:49:28.657152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 18:49:28.674958 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 18:49:28.675488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:49:28.676299 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:49:28.676326 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 18:49:28.676338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:49:28.678009 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 18:49:28.679921 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 18:49:28.705886 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 18:49:28.706591 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 18:49:28.707773 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 18:49:28.708023 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 18:49:28.713101 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 18:49:28.713214 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 18:49:28.722269 kernel: EDAC MC: Ver: 3.0.0
Jan 23 18:49:28.784767 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 23 18:49:28.795282 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Jan 23 18:49:28.809347 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 18:49:28.812294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:49:28.830978 systemd-networkd[1482]: lo: Link UP
Jan 23 18:49:28.830989 systemd-networkd[1482]: lo: Gained carrier
Jan 23 18:49:28.837581 systemd-networkd[1482]: Enumeration completed
Jan 23 18:49:28.837655 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 18:49:28.840343 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:49:28.840348 systemd-networkd[1482]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 18:49:28.840912 systemd-networkd[1482]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:49:28.840916 systemd-networkd[1482]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 18:49:28.841298 systemd-networkd[1482]: eth0: Link UP
Jan 23 18:49:28.841412 systemd-networkd[1482]: eth0: Gained carrier
Jan 23 18:49:28.841423 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:49:28.842415 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 18:49:28.846351 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 18:49:28.847001 systemd-networkd[1482]: eth1: Link UP
Jan 23 18:49:28.848738 systemd-networkd[1482]: eth1: Gained carrier
Jan 23 18:49:28.848750 systemd-networkd[1482]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:49:28.875479 kernel: Console: switching to colour dummy device 80x25
Jan 23 18:49:28.879404 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Jan 23 18:49:28.879628 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 23 18:49:28.879641 kernel: [drm] features: -context_init
Jan 23 18:49:28.882693 kernel: [drm] number of scanouts: 1
Jan 23 18:49:28.882737 kernel: [drm] number of cap sets: 0
Jan 23 18:49:28.885258 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Jan 23 18:49:28.891849 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 23 18:49:28.891897 kernel: Console: switching to colour frame buffer device 160x50
Jan 23 18:49:28.887285 systemd-networkd[1482]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Jan 23 18:49:28.888661 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 18:49:28.901263 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 23 18:49:28.905493 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 18:49:28.905640 systemd-networkd[1482]: eth0: DHCPv4 address 77.42.79.158/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 23 18:49:28.917016 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 18:49:28.917231 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:49:28.920056 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:49:28.926393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:49:28.951751 systemd-resolved[1483]: Positive Trust Anchors:
Jan 23 18:49:28.951767 systemd-resolved[1483]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 18:49:28.951793 systemd-resolved[1483]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 18:49:28.958984 systemd-resolved[1483]: Using system hostname 'ci-4459-2-3-7-efa5270b02'.
Jan 23 18:49:28.962599 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 18:49:28.964888 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 18:49:28.965940 systemd[1]: Reached target network.target - Network.
Jan 23 18:49:28.965998 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:49:28.966056 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 18:49:28.998861 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:49:28.999578 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 18:49:28.999822 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 18:49:28.999915 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 18:49:29.000025 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 18:49:29.000233 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 18:49:29.002150 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 18:49:29.002721 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 18:49:29.003765 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 18:49:29.003791 systemd[1]: Reached target paths.target - Path Units.
Jan 23 18:49:29.003855 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 18:49:29.005768 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 18:49:29.007240 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 18:49:29.011317 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 18:49:29.013160 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 18:49:29.016014 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 18:49:29.031368 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 18:49:29.033646 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 18:49:29.037421 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 18:49:29.040633 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 18:49:29.043316 systemd[1]: Reached target basic.target - Basic System.
Jan 23 18:49:29.044339 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 18:49:29.044409 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 18:49:29.046707 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 18:49:29.053494 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 18:49:29.057736 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 18:49:29.068159 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 18:49:29.075868 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 18:49:29.082475 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 18:49:29.083790 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 18:49:29.091604 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 18:49:29.100311 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 18:49:29.119986 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 18:49:29.124281 coreos-metadata[1580]: Jan 23 18:49:29.122 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 23 18:49:29.124281 coreos-metadata[1580]: Jan 23 18:49:29.123 INFO Fetch successful
Jan 23 18:49:29.124281 coreos-metadata[1580]: Jan 23 18:49:29.123 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jan 23 18:49:29.132308 coreos-metadata[1580]: Jan 23 18:49:29.124 INFO Fetch successful
Jan 23 18:49:29.126559 oslogin_cache_refresh[1587]: Refreshing passwd entry cache
Jan 23 18:49:29.125074 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 23 18:49:29.132877 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Refreshing passwd entry cache
Jan 23 18:49:29.132877 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Failure getting users, quitting
Jan 23 18:49:29.132877 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 18:49:29.132877 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Refreshing group entry cache
Jan 23 18:49:29.132877 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Failure getting groups, quitting
Jan 23 18:49:29.132877 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 18:49:29.131407 oslogin_cache_refresh[1587]: Failure getting users, quitting
Jan 23 18:49:29.133881 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 18:49:29.151966 extend-filesystems[1586]: Found /dev/sda6
Jan 23 18:49:29.151966 extend-filesystems[1586]: Found /dev/sda9
Jan 23 18:49:29.151966 extend-filesystems[1586]: Checking size of /dev/sda9
Jan 23 18:49:29.227987 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks
Jan 23 18:49:29.131422 oslogin_cache_refresh[1587]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 18:49:29.140598 systemd-timesyncd[1502]: Contacted time server 188.68.34.173:123 (0.flatcar.pool.ntp.org).
Jan 23 18:49:29.233084 jq[1583]: false
Jan 23 18:49:29.233402 extend-filesystems[1586]: Resized partition /dev/sda9
Jan 23 18:49:29.131458 oslogin_cache_refresh[1587]: Refreshing group entry cache
Jan 23 18:49:29.140648 systemd-timesyncd[1502]: Initial clock synchronization to Fri 2026-01-23 18:49:28.964601 UTC.
Jan 23 18:49:29.255790 extend-filesystems[1602]: resize2fs 1.47.3 (8-Jul-2025)
Jan 23 18:49:29.132078 oslogin_cache_refresh[1587]: Failure getting groups, quitting
Jan 23 18:49:29.144564 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 18:49:29.132089 oslogin_cache_refresh[1587]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 18:49:29.173522 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 18:49:29.204676 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 18:49:29.206474 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 18:49:29.212922 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 18:49:29.234380 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 18:49:29.274094 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 18:49:29.279111 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 18:49:29.284045 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 18:49:29.284643 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 23 18:49:29.285304 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 23 18:49:29.288270 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 18:49:29.288951 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 18:49:29.291037 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 18:49:29.291768 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 18:49:29.307071 update_engine[1609]: I20260123 18:49:29.307015 1609 main.cc:92] Flatcar Update Engine starting
Jan 23 18:49:29.308683 jq[1614]: true
Jan 23 18:49:29.341581 (ntainerd)[1626]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 18:49:29.348992 dbus-daemon[1581]: [system] SELinux support is enabled
Jan 23 18:49:29.351543 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 18:49:29.355497 update_engine[1609]: I20260123 18:49:29.355391 1609 update_check_scheduler.cc:74] Next update check in 8m49s
Jan 23 18:49:29.358348 jq[1632]: true
Jan 23 18:49:29.370876 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 18:49:29.370902 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 18:49:29.372297 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 18:49:29.372312 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 18:49:29.374318 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 18:49:29.379949 tar[1622]: linux-amd64/LICENSE
Jan 23 18:49:29.379949 tar[1622]: linux-amd64/helm
Jan 23 18:49:29.393567 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 18:49:29.408065 systemd-logind[1605]: New seat seat0.
Jan 23 18:49:29.412119 systemd-logind[1605]: Watching system buttons on /dev/input/event3 (Power Button)
Jan 23 18:49:29.413386 systemd-logind[1605]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 23 18:49:29.413571 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 18:49:29.448659 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 18:49:29.449438 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 18:49:29.503827 bash[1663]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 18:49:29.505849 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 18:49:29.511339 systemd[1]: Starting sshkeys.service...
Jan 23 18:49:29.526447 locksmithd[1642]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 18:49:29.537857 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 23 18:49:29.540544 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 23 18:49:29.555873 containerd[1626]: time="2026-01-23T18:49:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 23 18:49:29.558341 sshd_keygen[1612]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 18:49:29.558776 containerd[1626]: time="2026-01-23T18:49:29.558758802Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 23 18:49:29.570264 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Jan 23 18:49:29.590755 containerd[1626]: time="2026-01-23T18:49:29.572544740Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.24µs"
Jan 23 18:49:29.590755 containerd[1626]: time="2026-01-23T18:49:29.572566800Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 23 18:49:29.590755 containerd[1626]: time="2026-01-23T18:49:29.572582710Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 23 18:49:29.587067 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 18:49:29.590909 coreos-metadata[1671]: Jan 23 18:49:29.579 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jan 23 18:49:29.590909 coreos-metadata[1671]: Jan 23 18:49:29.580 INFO Fetch successful
Jan 23 18:49:29.592324 containerd[1626]: time="2026-01-23T18:49:29.591523993Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 23 18:49:29.592324 containerd[1626]: time="2026-01-23T18:49:29.591566453Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 23 18:49:29.592324 containerd[1626]: time="2026-01-23T18:49:29.591586673Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 18:49:29.592324 containerd[1626]: time="2026-01-23T18:49:29.591634933Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 18:49:29.592324 containerd[1626]: time="2026-01-23T18:49:29.591641833Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 18:49:29.592324 containerd[1626]: time="2026-01-23T18:49:29.591833654Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 18:49:29.592324 containerd[1626]: time="2026-01-23T18:49:29.591842644Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 18:49:29.592324 containerd[1626]: time="2026-01-23T18:49:29.591849764Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 18:49:29.592324 containerd[1626]: time="2026-01-23T18:49:29.591855994Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 23 18:49:29.592324 containerd[1626]: time="2026-01-23T18:49:29.591930104Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 23 18:49:29.591898 unknown[1671]: wrote ssh authorized keys file for user: core
Jan 23 18:49:29.592063 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 18:49:29.594159 containerd[1626]: time="2026-01-23T18:49:29.593686836Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 18:49:29.594159 containerd[1626]: time="2026-01-23T18:49:29.593814426Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 18:49:29.594159 containerd[1626]: time="2026-01-23T18:49:29.593822356Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 23 18:49:29.594159 containerd[1626]: time="2026-01-23T18:49:29.594100417Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 23 18:49:29.594761 containerd[1626]: time="2026-01-23T18:49:29.594739707Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 23 18:49:29.595009 containerd[1626]: time="2026-01-23T18:49:29.594988268Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 18:49:29.597088 extend-filesystems[1602]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 23 18:49:29.597088 extend-filesystems[1602]: old_desc_blocks = 1, new_desc_blocks = 10
Jan 23 18:49:29.597088 extend-filesystems[1602]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Jan 23 18:49:29.617863 extend-filesystems[1586]: Resized filesystem in /dev/sda9
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.606978603Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.607016593Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.607026183Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.607034413Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.607043153Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.607053863Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.607062383Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.607074853Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.607082203Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.607089043Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.607095553Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.607103843Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.607202353Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 18:49:29.618867 containerd[1626]: time="2026-01-23T18:49:29.607214523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 23 18:49:29.600406 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 18:49:29.620543 containerd[1626]: time="2026-01-23T18:49:29.607223863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 23 18:49:29.620543 containerd[1626]: time="2026-01-23T18:49:29.607231433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 23 18:49:29.620543 containerd[1626]: time="2026-01-23T18:49:29.607238713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 23 18:49:29.620543 containerd[1626]: time="2026-01-23T18:49:29.607264043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 23 18:49:29.620543 containerd[1626]: time="2026-01-23T18:49:29.607271673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 23 18:49:29.620543 containerd[1626]: time="2026-01-23T18:49:29.607278153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 23 18:49:29.620543 containerd[1626]: time="2026-01-23T18:49:29.607285863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 23 18:49:29.620543 containerd[1626]: time="2026-01-23T18:49:29.607296093Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 23 18:49:29.620543 containerd[1626]: time="2026-01-23T18:49:29.607302623Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 23 18:49:29.620543 containerd[1626]: time="2026-01-23T18:49:29.607334703Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 23 18:49:29.620543 containerd[1626]: time="2026-01-23T18:49:29.607344023Z" level=info msg="Start snapshots syncer"
Jan 23 18:49:29.620543 containerd[1626]: time="2026-01-23T18:49:29.607924424Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 18:49:29.601307 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 18:49:29.620753 containerd[1626]: time="2026-01-23T18:49:29.608109464Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 18:49:29.620753 containerd[1626]: time="2026-01-23T18:49:29.608458814Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.608538315Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.608786985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.608806575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.608814355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.608822235Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.608831565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.608838665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.608846325Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.608861205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.608871995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.608879485Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.609114095Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.609126275Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 18:49:29.620847 containerd[1626]: time="2026-01-23T18:49:29.609132755Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 18:49:29.621035 containerd[1626]: time="2026-01-23T18:49:29.609139615Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 18:49:29.621035 containerd[1626]: time="2026-01-23T18:49:29.609145125Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 23 18:49:29.621035 containerd[1626]: time="2026-01-23T18:49:29.609152525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 23 18:49:29.621035 containerd[1626]: time="2026-01-23T18:49:29.609374526Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 23 18:49:29.621035 containerd[1626]: time="2026-01-23T18:49:29.609391066Z" level=info msg="runtime interface created"
Jan 23 18:49:29.621035 containerd[1626]: time="2026-01-23T18:49:29.609395296Z" level=info msg="created NRI interface"
Jan 23 18:49:29.621035 containerd[1626]: time="2026-01-23T18:49:29.609402676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 23 18:49:29.621035 containerd[1626]: time="2026-01-23T18:49:29.609410516Z" level=info msg="Connect containerd service"
Jan 23 18:49:29.621035 containerd[1626]: time="2026-01-23T18:49:29.609428866Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 18:49:29.621035 containerd[1626]: time="2026-01-23T18:49:29.610872768Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 18:49:29.626581 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 18:49:29.629384 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 18:49:29.631909 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 18:49:29.645270 update-ssh-keys[1689]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 18:49:29.647000 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 23 18:49:29.652116 systemd[1]: Finished sshkeys.service.
Jan 23 18:49:29.659565 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 23 18:49:29.664367 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 23 18:49:29.666443 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 18:49:29.669190 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 18:49:29.692234 containerd[1626]: time="2026-01-23T18:49:29.692200789Z" level=info msg="Start subscribing containerd event" Jan 23 18:49:29.692430 containerd[1626]: time="2026-01-23T18:49:29.692354659Z" level=info msg="Start recovering state" Jan 23 18:49:29.692507 containerd[1626]: time="2026-01-23T18:49:29.692476140Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 18:49:29.692565 containerd[1626]: time="2026-01-23T18:49:29.692549110Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 18:49:29.692624 containerd[1626]: time="2026-01-23T18:49:29.692556500Z" level=info msg="Start event monitor" Jan 23 18:49:29.692640 containerd[1626]: time="2026-01-23T18:49:29.692623930Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:49:29.692640 containerd[1626]: time="2026-01-23T18:49:29.692631390Z" level=info msg="Start streaming server" Jan 23 18:49:29.692640 containerd[1626]: time="2026-01-23T18:49:29.692638530Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:49:29.692687 containerd[1626]: time="2026-01-23T18:49:29.692644940Z" level=info msg="runtime interface starting up..." Jan 23 18:49:29.692687 containerd[1626]: time="2026-01-23T18:49:29.692649710Z" level=info msg="starting plugins..." Jan 23 18:49:29.692687 containerd[1626]: time="2026-01-23T18:49:29.692665460Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:49:29.692811 containerd[1626]: time="2026-01-23T18:49:29.692792920Z" level=info msg="containerd successfully booted in 0.137358s" Jan 23 18:49:29.692926 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 23 18:49:29.798460 tar[1622]: linux-amd64/README.md Jan 23 18:49:29.830464 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 18:49:30.490629 systemd-networkd[1482]: eth0: Gained IPv6LL Jan 23 18:49:30.499331 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 18:49:30.502504 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 18:49:30.510570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:49:30.518643 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 18:49:30.575037 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 18:49:30.682598 systemd-networkd[1482]: eth1: Gained IPv6LL Jan 23 18:49:31.866859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:49:31.871613 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:49:31.875763 systemd[1]: Startup finished in 2.979s (kernel) + 7.345s (initrd) + 5.845s (userspace) = 16.170s. Jan 23 18:49:31.883665 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:49:32.640201 kubelet[1730]: E0123 18:49:32.640106 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:49:32.646500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:49:32.646951 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:49:32.647774 systemd[1]: kubelet.service: Consumed 1.578s CPU time, 258.7M memory peak. 
Jan 23 18:49:33.693626 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:49:33.695754 systemd[1]: Started sshd@0-77.42.79.158:22-20.161.92.111:53762.service - OpenSSH per-connection server daemon (20.161.92.111:53762). Jan 23 18:49:34.494300 sshd[1742]: Accepted publickey for core from 20.161.92.111 port 53762 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:34.497566 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:34.510311 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:49:34.513475 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 18:49:34.529348 systemd-logind[1605]: New session 1 of user core. Jan 23 18:49:34.545296 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:49:34.551573 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:49:34.576470 (systemd)[1747]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 18:49:34.581464 systemd-logind[1605]: New session c1 of user core. Jan 23 18:49:34.750147 systemd[1747]: Queued start job for default target default.target. Jan 23 18:49:34.758277 systemd[1747]: Created slice app.slice - User Application Slice. Jan 23 18:49:34.758300 systemd[1747]: Reached target paths.target - Paths. Jan 23 18:49:34.758332 systemd[1747]: Reached target timers.target - Timers. Jan 23 18:49:34.759710 systemd[1747]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:49:34.778179 systemd[1747]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 18:49:34.778304 systemd[1747]: Reached target sockets.target - Sockets. Jan 23 18:49:34.778339 systemd[1747]: Reached target basic.target - Basic System. Jan 23 18:49:34.778374 systemd[1747]: Reached target default.target - Main User Target. 
Jan 23 18:49:34.778403 systemd[1747]: Startup finished in 184ms. Jan 23 18:49:34.778715 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 18:49:34.786390 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:49:35.329065 systemd[1]: Started sshd@1-77.42.79.158:22-20.161.92.111:53770.service - OpenSSH per-connection server daemon (20.161.92.111:53770). Jan 23 18:49:36.112321 sshd[1758]: Accepted publickey for core from 20.161.92.111 port 53770 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:36.114473 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:36.124334 systemd-logind[1605]: New session 2 of user core. Jan 23 18:49:36.133488 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 18:49:36.642230 sshd[1761]: Connection closed by 20.161.92.111 port 53770 Jan 23 18:49:36.643604 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:36.651415 systemd[1]: sshd@1-77.42.79.158:22-20.161.92.111:53770.service: Deactivated successfully. Jan 23 18:49:36.655030 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 18:49:36.656713 systemd-logind[1605]: Session 2 logged out. Waiting for processes to exit. Jan 23 18:49:36.659635 systemd-logind[1605]: Removed session 2. Jan 23 18:49:36.781147 systemd[1]: Started sshd@2-77.42.79.158:22-20.161.92.111:53782.service - OpenSSH per-connection server daemon (20.161.92.111:53782). Jan 23 18:49:37.576759 sshd[1767]: Accepted publickey for core from 20.161.92.111 port 53782 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:37.579693 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:37.589146 systemd-logind[1605]: New session 3 of user core. Jan 23 18:49:37.594453 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 23 18:49:38.102840 sshd[1770]: Connection closed by 20.161.92.111 port 53782 Jan 23 18:49:38.104553 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:38.111810 systemd[1]: sshd@2-77.42.79.158:22-20.161.92.111:53782.service: Deactivated successfully. Jan 23 18:49:38.115563 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 18:49:38.117148 systemd-logind[1605]: Session 3 logged out. Waiting for processes to exit. Jan 23 18:49:38.119702 systemd-logind[1605]: Removed session 3. Jan 23 18:49:38.244730 systemd[1]: Started sshd@3-77.42.79.158:22-20.161.92.111:53784.service - OpenSSH per-connection server daemon (20.161.92.111:53784). Jan 23 18:49:39.029681 sshd[1776]: Accepted publickey for core from 20.161.92.111 port 53784 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:39.032376 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:39.041321 systemd-logind[1605]: New session 4 of user core. Jan 23 18:49:39.048470 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:49:39.564035 sshd[1779]: Connection closed by 20.161.92.111 port 53784 Jan 23 18:49:39.565556 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:39.572889 systemd[1]: sshd@3-77.42.79.158:22-20.161.92.111:53784.service: Deactivated successfully. Jan 23 18:49:39.576557 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 18:49:39.578035 systemd-logind[1605]: Session 4 logged out. Waiting for processes to exit. Jan 23 18:49:39.580888 systemd-logind[1605]: Removed session 4. Jan 23 18:49:39.703596 systemd[1]: Started sshd@4-77.42.79.158:22-20.161.92.111:53794.service - OpenSSH per-connection server daemon (20.161.92.111:53794). 
Jan 23 18:49:40.499314 sshd[1785]: Accepted publickey for core from 20.161.92.111 port 53794 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:40.501190 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:40.510064 systemd-logind[1605]: New session 5 of user core. Jan 23 18:49:40.517469 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 18:49:40.923235 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 18:49:40.923860 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:49:40.941492 sudo[1789]: pam_unix(sudo:session): session closed for user root Jan 23 18:49:41.063927 sshd[1788]: Connection closed by 20.161.92.111 port 53794 Jan 23 18:49:41.065118 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:41.071606 systemd[1]: sshd@4-77.42.79.158:22-20.161.92.111:53794.service: Deactivated successfully. Jan 23 18:49:41.074964 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:49:41.078707 systemd-logind[1605]: Session 5 logged out. Waiting for processes to exit. Jan 23 18:49:41.080569 systemd-logind[1605]: Removed session 5. Jan 23 18:49:41.198963 systemd[1]: Started sshd@5-77.42.79.158:22-20.161.92.111:53804.service - OpenSSH per-connection server daemon (20.161.92.111:53804). Jan 23 18:49:41.980960 sshd[1795]: Accepted publickey for core from 20.161.92.111 port 53804 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:41.983752 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:41.994211 systemd-logind[1605]: New session 6 of user core. Jan 23 18:49:42.001541 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 18:49:42.393211 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 18:49:42.393891 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:49:42.403166 sudo[1800]: pam_unix(sudo:session): session closed for user root Jan 23 18:49:42.413896 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 18:49:42.414516 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:49:42.431303 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:49:42.495822 augenrules[1822]: No rules Jan 23 18:49:42.499124 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:49:42.499674 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:49:42.501817 sudo[1799]: pam_unix(sudo:session): session closed for user root Jan 23 18:49:42.623085 sshd[1798]: Connection closed by 20.161.92.111 port 53804 Jan 23 18:49:42.625523 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:42.633119 systemd[1]: sshd@5-77.42.79.158:22-20.161.92.111:53804.service: Deactivated successfully. Jan 23 18:49:42.636387 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:49:42.637939 systemd-logind[1605]: Session 6 logged out. Waiting for processes to exit. Jan 23 18:49:42.640817 systemd-logind[1605]: Removed session 6. Jan 23 18:49:42.756972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:49:42.760649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:49:42.763564 systemd[1]: Started sshd@6-77.42.79.158:22-20.161.92.111:36574.service - OpenSSH per-connection server daemon (20.161.92.111:36574). Jan 23 18:49:42.941018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 18:49:42.955579 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:49:42.984712 kubelet[1841]: E0123 18:49:42.984657 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:49:42.992509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:49:42.992670 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:49:42.993151 systemd[1]: kubelet.service: Consumed 197ms CPU time, 110.2M memory peak. Jan 23 18:49:43.538210 sshd[1832]: Accepted publickey for core from 20.161.92.111 port 36574 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:43.540278 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:43.548295 systemd-logind[1605]: New session 7 of user core. Jan 23 18:49:43.557461 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 18:49:43.951700 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:49:43.952345 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:49:44.369752 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 23 18:49:44.400041 (dockerd)[1868]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 18:49:44.746850 dockerd[1868]: time="2026-01-23T18:49:44.746738060Z" level=info msg="Starting up" Jan 23 18:49:44.748408 dockerd[1868]: time="2026-01-23T18:49:44.748351403Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 18:49:44.771517 dockerd[1868]: time="2026-01-23T18:49:44.771358275Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 18:49:44.796551 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3826299029-merged.mount: Deactivated successfully. Jan 23 18:49:44.849127 dockerd[1868]: time="2026-01-23T18:49:44.848791174Z" level=info msg="Loading containers: start." Jan 23 18:49:44.865318 kernel: Initializing XFRM netlink socket Jan 23 18:49:45.339063 systemd-networkd[1482]: docker0: Link UP Jan 23 18:49:45.346118 dockerd[1868]: time="2026-01-23T18:49:45.346056190Z" level=info msg="Loading containers: done." 
Jan 23 18:49:45.372669 dockerd[1868]: time="2026-01-23T18:49:45.372125504Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 18:49:45.372669 dockerd[1868]: time="2026-01-23T18:49:45.372221520Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 18:49:45.372669 dockerd[1868]: time="2026-01-23T18:49:45.372392562Z" level=info msg="Initializing buildkit" Jan 23 18:49:45.413754 dockerd[1868]: time="2026-01-23T18:49:45.413702665Z" level=info msg="Completed buildkit initialization" Jan 23 18:49:45.423142 dockerd[1868]: time="2026-01-23T18:49:45.423104470Z" level=info msg="Daemon has completed initialization" Jan 23 18:49:45.423457 dockerd[1868]: time="2026-01-23T18:49:45.423382296Z" level=info msg="API listen on /run/docker.sock" Jan 23 18:49:45.424338 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 18:49:45.789391 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4011543074-merged.mount: Deactivated successfully. Jan 23 18:49:46.833939 containerd[1626]: time="2026-01-23T18:49:46.833849704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 18:49:47.389690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1862175293.mount: Deactivated successfully. 
Jan 23 18:49:48.563324 containerd[1626]: time="2026-01-23T18:49:48.563265663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:48.567668 containerd[1626]: time="2026-01-23T18:49:48.567481124Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068173" Jan 23 18:49:48.569080 containerd[1626]: time="2026-01-23T18:49:48.569061560Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:48.571278 containerd[1626]: time="2026-01-23T18:49:48.571260179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:48.571906 containerd[1626]: time="2026-01-23T18:49:48.571886477Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.737988765s" Jan 23 18:49:48.571948 containerd[1626]: time="2026-01-23T18:49:48.571910629Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 23 18:49:48.572335 containerd[1626]: time="2026-01-23T18:49:48.572311574Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 18:49:49.738064 containerd[1626]: time="2026-01-23T18:49:49.738002492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:49.739080 containerd[1626]: time="2026-01-23T18:49:49.739048628Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162462" Jan 23 18:49:49.740073 containerd[1626]: time="2026-01-23T18:49:49.739902437Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:49.742661 containerd[1626]: time="2026-01-23T18:49:49.742643771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:49.743745 containerd[1626]: time="2026-01-23T18:49:49.743700848Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.171368941s" Jan 23 18:49:49.743781 containerd[1626]: time="2026-01-23T18:49:49.743750821Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 23 18:49:49.744318 containerd[1626]: time="2026-01-23T18:49:49.744295137Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 18:49:50.798233 containerd[1626]: time="2026-01-23T18:49:50.798173058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:50.799961 containerd[1626]: time="2026-01-23T18:49:50.799585025Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725949" Jan 23 18:49:50.800825 containerd[1626]: time="2026-01-23T18:49:50.800799553Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:50.806385 containerd[1626]: time="2026-01-23T18:49:50.806353465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:50.808110 containerd[1626]: time="2026-01-23T18:49:50.808048383Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.063729356s" Jan 23 18:49:50.808252 containerd[1626]: time="2026-01-23T18:49:50.808213932Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 23 18:49:50.808647 containerd[1626]: time="2026-01-23T18:49:50.808602423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 18:49:52.110560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579039024.mount: Deactivated successfully. 
Jan 23 18:49:52.502459 containerd[1626]: time="2026-01-23T18:49:52.502385542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:52.503677 containerd[1626]: time="2026-01-23T18:49:52.503610979Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965321" Jan 23 18:49:52.504635 containerd[1626]: time="2026-01-23T18:49:52.504558589Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:52.506121 containerd[1626]: time="2026-01-23T18:49:52.506054762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:52.506340 containerd[1626]: time="2026-01-23T18:49:52.506298039Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.697673947s" Jan 23 18:49:52.506340 containerd[1626]: time="2026-01-23T18:49:52.506335136Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 23 18:49:52.506946 containerd[1626]: time="2026-01-23T18:49:52.506902357Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 18:49:53.016752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:49:53.020601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 18:49:53.053551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2778850007.mount: Deactivated successfully. Jan 23 18:49:53.322695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:49:53.336763 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:49:53.378162 kubelet[2172]: E0123 18:49:53.378132 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:49:53.380927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:49:53.381182 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:49:53.381671 systemd[1]: kubelet.service: Consumed 252ms CPU time, 108.4M memory peak. 
Jan 23 18:49:54.106663 containerd[1626]: time="2026-01-23T18:49:54.106531613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:54.107883 containerd[1626]: time="2026-01-23T18:49:54.107863329Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388101" Jan 23 18:49:54.109260 containerd[1626]: time="2026-01-23T18:49:54.108676687Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:54.115222 containerd[1626]: time="2026-01-23T18:49:54.114634414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:54.115441 containerd[1626]: time="2026-01-23T18:49:54.114648782Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.607706648s" Jan 23 18:49:54.115441 containerd[1626]: time="2026-01-23T18:49:54.115336930Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 23 18:49:54.116341 containerd[1626]: time="2026-01-23T18:49:54.115761832Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 18:49:54.583281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506810043.mount: Deactivated successfully. 
Jan 23 18:49:54.590316 containerd[1626]: time="2026-01-23T18:49:54.590183312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:54.591514 containerd[1626]: time="2026-01-23T18:49:54.591460207Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321240" Jan 23 18:49:54.593304 containerd[1626]: time="2026-01-23T18:49:54.592520356Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:54.596217 containerd[1626]: time="2026-01-23T18:49:54.596159722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:54.597109 containerd[1626]: time="2026-01-23T18:49:54.597056835Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 480.658937ms" Jan 23 18:49:54.597186 containerd[1626]: time="2026-01-23T18:49:54.597109788Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 23 18:49:54.598858 containerd[1626]: time="2026-01-23T18:49:54.598800587Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 18:49:55.140980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539337793.mount: Deactivated successfully. 
Jan 23 18:49:57.095348 containerd[1626]: time="2026-01-23T18:49:57.095260912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:57.097157 containerd[1626]: time="2026-01-23T18:49:57.096808602Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166872" Jan 23 18:49:57.098120 containerd[1626]: time="2026-01-23T18:49:57.098091280Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:57.100732 containerd[1626]: time="2026-01-23T18:49:57.100699579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:57.102169 containerd[1626]: time="2026-01-23T18:49:57.102125271Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.503251447s" Jan 23 18:49:57.102302 containerd[1626]: time="2026-01-23T18:49:57.102283107Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 23 18:50:00.707488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:50:00.708214 systemd[1]: kubelet.service: Consumed 252ms CPU time, 108.4M memory peak. Jan 23 18:50:00.712560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:50:00.742503 systemd[1]: Reload requested from client PID 2305 ('systemctl') (unit session-7.scope)... 
Jan 23 18:50:00.742517 systemd[1]: Reloading... Jan 23 18:50:00.854510 zram_generator::config[2349]: No configuration found. Jan 23 18:50:01.035095 systemd[1]: Reloading finished in 292 ms. Jan 23 18:50:01.082890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:50:01.091108 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:50:01.092288 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:50:01.092938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:50:01.093153 systemd[1]: kubelet.service: Consumed 139ms CPU time, 98.3M memory peak. Jan 23 18:50:01.096028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:50:01.264157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:50:01.271103 (kubelet)[2405]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:50:01.298454 kubelet[2405]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:50:01.298454 kubelet[2405]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 18:50:01.298454 kubelet[2405]: I0123 18:50:01.297806 2405 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:50:01.602057 kubelet[2405]: I0123 18:50:01.601878 2405 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 18:50:01.602238 kubelet[2405]: I0123 18:50:01.602228 2405 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:50:01.602325 kubelet[2405]: I0123 18:50:01.602318 2405 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 18:50:01.602364 kubelet[2405]: I0123 18:50:01.602357 2405 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:50:01.602562 kubelet[2405]: I0123 18:50:01.602553 2405 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:50:01.607627 kubelet[2405]: E0123 18:50:01.607604 2405 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://77.42.79.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 77.42.79.158:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:50:01.609080 kubelet[2405]: I0123 18:50:01.609035 2405 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:50:01.614118 kubelet[2405]: I0123 18:50:01.614087 2405 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:50:01.621248 kubelet[2405]: I0123 18:50:01.621217 2405 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 18:50:01.621701 kubelet[2405]: I0123 18:50:01.621664 2405 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:50:01.621924 kubelet[2405]: I0123 18:50:01.621697 2405 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-7-efa5270b02","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:50:01.622004 kubelet[2405]: I0123 18:50:01.621928 2405 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
18:50:01.622004 kubelet[2405]: I0123 18:50:01.621943 2405 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 18:50:01.622108 kubelet[2405]: I0123 18:50:01.622081 2405 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 18:50:01.625370 kubelet[2405]: I0123 18:50:01.625342 2405 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:50:01.625687 kubelet[2405]: I0123 18:50:01.625669 2405 kubelet.go:475] "Attempting to sync node with API server" Jan 23 18:50:01.625711 kubelet[2405]: I0123 18:50:01.625693 2405 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:50:01.625739 kubelet[2405]: I0123 18:50:01.625734 2405 kubelet.go:387] "Adding apiserver pod source" Jan 23 18:50:01.625797 kubelet[2405]: I0123 18:50:01.625774 2405 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:50:01.626140 kubelet[2405]: E0123 18:50:01.626121 2405 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://77.42.79.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-3-7-efa5270b02&limit=500&resourceVersion=0\": dial tcp 77.42.79.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:50:01.632902 kubelet[2405]: I0123 18:50:01.632869 2405 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:50:01.633681 kubelet[2405]: I0123 18:50:01.633651 2405 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:50:01.633711 kubelet[2405]: I0123 18:50:01.633701 2405 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 18:50:01.633790 kubelet[2405]: W0123 
18:50:01.633768 2405 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 18:50:01.634135 kubelet[2405]: E0123 18:50:01.634118 2405 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://77.42.79.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 77.42.79.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:50:01.641155 kubelet[2405]: I0123 18:50:01.641121 2405 server.go:1262] "Started kubelet" Jan 23 18:50:01.643058 kubelet[2405]: I0123 18:50:01.642722 2405 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:50:01.645588 kubelet[2405]: E0123 18:50:01.644666 2405 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://77.42.79.158:6443/api/v1/namespaces/default/events\": dial tcp 77.42.79.158:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-3-7-efa5270b02.188d70c0b77f3337 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-3-7-efa5270b02,UID:ci-4459-2-3-7-efa5270b02,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-3-7-efa5270b02,},FirstTimestamp:2026-01-23 18:50:01.641071415 +0000 UTC m=+0.366832183,LastTimestamp:2026-01-23 18:50:01.641071415 +0000 UTC m=+0.366832183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-7-efa5270b02,}" Jan 23 18:50:01.646205 kubelet[2405]: I0123 18:50:01.646189 2405 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:50:01.647072 kubelet[2405]: I0123 18:50:01.647060 2405 server.go:310] "Adding debug handlers to kubelet server" Jan 23 
18:50:01.652039 kubelet[2405]: I0123 18:50:01.652018 2405 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:50:01.652116 kubelet[2405]: I0123 18:50:01.652107 2405 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 18:50:01.652309 kubelet[2405]: I0123 18:50:01.652300 2405 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:50:01.652509 kubelet[2405]: I0123 18:50:01.652499 2405 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:50:01.654912 kubelet[2405]: I0123 18:50:01.654901 2405 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 18:50:01.655022 kubelet[2405]: I0123 18:50:01.655015 2405 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 18:50:01.655084 kubelet[2405]: I0123 18:50:01.655078 2405 reconciler.go:29] "Reconciler: start to sync state" Jan 23 18:50:01.655345 kubelet[2405]: E0123 18:50:01.655331 2405 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://77.42.79.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 77.42.79.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:50:01.655505 kubelet[2405]: E0123 18:50:01.655493 2405 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-3-7-efa5270b02\" not found" Jan 23 18:50:01.655587 kubelet[2405]: E0123 18:50:01.655575 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.79.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-7-efa5270b02?timeout=10s\": dial tcp 77.42.79.158:6443: connect: connection refused" interval="200ms" Jan 23 
18:50:01.656442 kubelet[2405]: E0123 18:50:01.656039 2405 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:50:01.656442 kubelet[2405]: I0123 18:50:01.656142 2405 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:50:01.656442 kubelet[2405]: I0123 18:50:01.656192 2405 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:50:01.656829 kubelet[2405]: I0123 18:50:01.656820 2405 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:50:01.676737 kubelet[2405]: I0123 18:50:01.676712 2405 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 18:50:01.679124 kubelet[2405]: I0123 18:50:01.679113 2405 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 18:50:01.679176 kubelet[2405]: I0123 18:50:01.679170 2405 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 18:50:01.679218 kubelet[2405]: I0123 18:50:01.679212 2405 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 18:50:01.679301 kubelet[2405]: E0123 18:50:01.679289 2405 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:50:01.682057 kubelet[2405]: E0123 18:50:01.682043 2405 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://77.42.79.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 77.42.79.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:50:01.691497 kubelet[2405]: I0123 18:50:01.691487 2405 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:50:01.691556 kubelet[2405]: I0123 18:50:01.691549 2405 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:50:01.691593 kubelet[2405]: I0123 18:50:01.691587 2405 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:50:01.693264 kubelet[2405]: I0123 18:50:01.693253 2405 policy_none.go:49] "None policy: Start" Jan 23 18:50:01.693517 kubelet[2405]: I0123 18:50:01.693346 2405 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 18:50:01.693517 kubelet[2405]: I0123 18:50:01.693358 2405 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 18:50:01.694689 kubelet[2405]: I0123 18:50:01.694680 2405 policy_none.go:47] "Start" Jan 23 18:50:01.699752 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 18:50:01.713515 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 23 18:50:01.718028 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 18:50:01.727529 kubelet[2405]: E0123 18:50:01.727515 2405 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:50:01.727718 kubelet[2405]: I0123 18:50:01.727709 2405 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:50:01.727787 kubelet[2405]: I0123 18:50:01.727754 2405 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:50:01.728224 kubelet[2405]: I0123 18:50:01.728213 2405 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:50:01.729759 kubelet[2405]: E0123 18:50:01.729748 2405 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 18:50:01.730144 kubelet[2405]: E0123 18:50:01.730135 2405 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-3-7-efa5270b02\" not found" Jan 23 18:50:01.802581 systemd[1]: Created slice kubepods-burstable-pod826fad8ef188df654393bdaedb58829c.slice - libcontainer container kubepods-burstable-pod826fad8ef188df654393bdaedb58829c.slice. Jan 23 18:50:01.821554 kubelet[2405]: E0123 18:50:01.819441 2405 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-7-efa5270b02\" not found" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:01.825486 systemd[1]: Created slice kubepods-burstable-pod8bcf172927dc2212e6065674975a6c96.slice - libcontainer container kubepods-burstable-pod8bcf172927dc2212e6065674975a6c96.slice. 
Jan 23 18:50:01.830782 kubelet[2405]: I0123 18:50:01.830708 2405 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:01.831936 kubelet[2405]: E0123 18:50:01.831512 2405 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-7-efa5270b02\" not found" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:01.832215 kubelet[2405]: E0123 18:50:01.832122 2405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.79.158:6443/api/v1/nodes\": dial tcp 77.42.79.158:6443: connect: connection refused" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:01.834151 systemd[1]: Created slice kubepods-burstable-podf1a1634393cc4c978623aff1140a44a0.slice - libcontainer container kubepods-burstable-podf1a1634393cc4c978623aff1140a44a0.slice. Jan 23 18:50:01.837425 kubelet[2405]: E0123 18:50:01.837375 2405 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-7-efa5270b02\" not found" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:01.856576 kubelet[2405]: E0123 18:50:01.856425 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.79.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-7-efa5270b02?timeout=10s\": dial tcp 77.42.79.158:6443: connect: connection refused" interval="400ms" Jan 23 18:50:01.856695 kubelet[2405]: I0123 18:50:01.856655 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8bcf172927dc2212e6065674975a6c96-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-7-efa5270b02\" (UID: \"8bcf172927dc2212e6065674975a6c96\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:01.856763 kubelet[2405]: I0123 18:50:01.856706 2405 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bcf172927dc2212e6065674975a6c96-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-7-efa5270b02\" (UID: \"8bcf172927dc2212e6065674975a6c96\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:01.856763 kubelet[2405]: I0123 18:50:01.856749 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f1a1634393cc4c978623aff1140a44a0-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-7-efa5270b02\" (UID: \"f1a1634393cc4c978623aff1140a44a0\") " pod="kube-system/kube-scheduler-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:01.856838 kubelet[2405]: I0123 18:50:01.856774 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/826fad8ef188df654393bdaedb58829c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-7-efa5270b02\" (UID: \"826fad8ef188df654393bdaedb58829c\") " pod="kube-system/kube-apiserver-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:01.856838 kubelet[2405]: I0123 18:50:01.856798 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8bcf172927dc2212e6065674975a6c96-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-7-efa5270b02\" (UID: \"8bcf172927dc2212e6065674975a6c96\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:01.856838 kubelet[2405]: I0123 18:50:01.856819 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8bcf172927dc2212e6065674975a6c96-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-7-efa5270b02\" (UID: 
\"8bcf172927dc2212e6065674975a6c96\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:01.857040 kubelet[2405]: I0123 18:50:01.856840 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8bcf172927dc2212e6065674975a6c96-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-7-efa5270b02\" (UID: \"8bcf172927dc2212e6065674975a6c96\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:01.857040 kubelet[2405]: I0123 18:50:01.856861 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/826fad8ef188df654393bdaedb58829c-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-7-efa5270b02\" (UID: \"826fad8ef188df654393bdaedb58829c\") " pod="kube-system/kube-apiserver-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:01.857040 kubelet[2405]: I0123 18:50:01.856882 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/826fad8ef188df654393bdaedb58829c-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-7-efa5270b02\" (UID: \"826fad8ef188df654393bdaedb58829c\") " pod="kube-system/kube-apiserver-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:02.035915 kubelet[2405]: I0123 18:50:02.035480 2405 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:02.036168 kubelet[2405]: E0123 18:50:02.036006 2405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.79.158:6443/api/v1/nodes\": dial tcp 77.42.79.158:6443: connect: connection refused" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:02.125104 containerd[1626]: time="2026-01-23T18:50:02.124771636Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-7-efa5270b02,Uid:826fad8ef188df654393bdaedb58829c,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:02.138009 containerd[1626]: time="2026-01-23T18:50:02.137778631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-7-efa5270b02,Uid:8bcf172927dc2212e6065674975a6c96,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:02.141413 containerd[1626]: time="2026-01-23T18:50:02.141355070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-7-efa5270b02,Uid:f1a1634393cc4c978623aff1140a44a0,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:02.258048 kubelet[2405]: E0123 18:50:02.257903 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.79.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-7-efa5270b02?timeout=10s\": dial tcp 77.42.79.158:6443: connect: connection refused" interval="800ms" Jan 23 18:50:02.439499 kubelet[2405]: I0123 18:50:02.439438 2405 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:02.440905 kubelet[2405]: E0123 18:50:02.440856 2405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.79.158:6443/api/v1/nodes\": dial tcp 77.42.79.158:6443: connect: connection refused" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:02.617062 kubelet[2405]: E0123 18:50:02.616989 2405 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://77.42.79.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 77.42.79.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:50:02.718752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692269331.mount: Deactivated successfully. 
Jan 23 18:50:02.727837 containerd[1626]: time="2026-01-23T18:50:02.726876548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:50:02.729402 containerd[1626]: time="2026-01-23T18:50:02.729356820Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:50:02.731577 containerd[1626]: time="2026-01-23T18:50:02.731506045Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Jan 23 18:50:02.732560 containerd[1626]: time="2026-01-23T18:50:02.732488547Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:50:02.735386 containerd[1626]: time="2026-01-23T18:50:02.735343657Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:50:02.737665 containerd[1626]: time="2026-01-23T18:50:02.737444383Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:50:02.737665 containerd[1626]: time="2026-01-23T18:50:02.737584612Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:50:02.743197 containerd[1626]: time="2026-01-23T18:50:02.743151701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 
18:50:02.744297 containerd[1626]: time="2026-01-23T18:50:02.744067959Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 600.227948ms" Jan 23 18:50:02.748636 containerd[1626]: time="2026-01-23T18:50:02.748553528Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 620.976422ms" Jan 23 18:50:02.760618 containerd[1626]: time="2026-01-23T18:50:02.760571961Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 620.642895ms" Jan 23 18:50:02.794302 containerd[1626]: time="2026-01-23T18:50:02.793868691Z" level=info msg="connecting to shim ba2c86f603225892481b5c244c1e9fd4365dd0d394fe7c65addddbe2dd062fe7" address="unix:///run/containerd/s/5535c248588a96f7a54b4bdc01ed573964fc7a943c5c7ca3fd6228112bd0bacc" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:02.817734 containerd[1626]: time="2026-01-23T18:50:02.817695335Z" level=info msg="connecting to shim 36b78459c6cbaa87c624fa74bd16023817f971081cce1b849f2ba4a435b4c9ca" address="unix:///run/containerd/s/e0f82a9d18204a0b2aadce23c018449ec33bba614fdbbdedd81cbcf90c305a68" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:02.839436 containerd[1626]: time="2026-01-23T18:50:02.838870565Z" level=info msg="connecting to shim 
4431292e11560f3ab14a8fcd9a7282fcc8caeae46d9fc9ca698cb54f3b892413" address="unix:///run/containerd/s/594d35ef6fe5d31d4559ef20f207b3e78d56af895c6370e900540c25f11ca20d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:02.842595 systemd[1]: Started cri-containerd-ba2c86f603225892481b5c244c1e9fd4365dd0d394fe7c65addddbe2dd062fe7.scope - libcontainer container ba2c86f603225892481b5c244c1e9fd4365dd0d394fe7c65addddbe2dd062fe7. Jan 23 18:50:02.848376 kubelet[2405]: E0123 18:50:02.848345 2405 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://77.42.79.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-3-7-efa5270b02&limit=500&resourceVersion=0\": dial tcp 77.42.79.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:50:02.877497 systemd[1]: Started cri-containerd-36b78459c6cbaa87c624fa74bd16023817f971081cce1b849f2ba4a435b4c9ca.scope - libcontainer container 36b78459c6cbaa87c624fa74bd16023817f971081cce1b849f2ba4a435b4c9ca. Jan 23 18:50:02.907698 systemd[1]: Started cri-containerd-4431292e11560f3ab14a8fcd9a7282fcc8caeae46d9fc9ca698cb54f3b892413.scope - libcontainer container 4431292e11560f3ab14a8fcd9a7282fcc8caeae46d9fc9ca698cb54f3b892413. 
Jan 23 18:50:02.947569 containerd[1626]: time="2026-01-23T18:50:02.947489648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-7-efa5270b02,Uid:f1a1634393cc4c978623aff1140a44a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba2c86f603225892481b5c244c1e9fd4365dd0d394fe7c65addddbe2dd062fe7\"" Jan 23 18:50:02.953551 containerd[1626]: time="2026-01-23T18:50:02.952955290Z" level=info msg="CreateContainer within sandbox \"ba2c86f603225892481b5c244c1e9fd4365dd0d394fe7c65addddbe2dd062fe7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 18:50:02.954148 containerd[1626]: time="2026-01-23T18:50:02.954118694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-7-efa5270b02,Uid:826fad8ef188df654393bdaedb58829c,Namespace:kube-system,Attempt:0,} returns sandbox id \"36b78459c6cbaa87c624fa74bd16023817f971081cce1b849f2ba4a435b4c9ca\"" Jan 23 18:50:02.957657 containerd[1626]: time="2026-01-23T18:50:02.957639705Z" level=info msg="CreateContainer within sandbox \"36b78459c6cbaa87c624fa74bd16023817f971081cce1b849f2ba4a435b4c9ca\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 18:50:02.965662 containerd[1626]: time="2026-01-23T18:50:02.965644186Z" level=info msg="Container d9c428d9d0f2b0954b728759857feb49ba527c0b2a6997481a497d8465230313: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:02.967734 containerd[1626]: time="2026-01-23T18:50:02.967631026Z" level=info msg="Container e65e45fcbfbaf24ab3b3909393dbdcb872b793e37e0d483a46f3aef91e36e03f: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:02.971219 containerd[1626]: time="2026-01-23T18:50:02.971088233Z" level=info msg="CreateContainer within sandbox \"ba2c86f603225892481b5c244c1e9fd4365dd0d394fe7c65addddbe2dd062fe7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d9c428d9d0f2b0954b728759857feb49ba527c0b2a6997481a497d8465230313\"" Jan 23 18:50:02.971735 
containerd[1626]: time="2026-01-23T18:50:02.971670583Z" level=info msg="StartContainer for \"d9c428d9d0f2b0954b728759857feb49ba527c0b2a6997481a497d8465230313\"" Jan 23 18:50:02.972482 containerd[1626]: time="2026-01-23T18:50:02.972454620Z" level=info msg="connecting to shim d9c428d9d0f2b0954b728759857feb49ba527c0b2a6997481a497d8465230313" address="unix:///run/containerd/s/5535c248588a96f7a54b4bdc01ed573964fc7a943c5c7ca3fd6228112bd0bacc" protocol=ttrpc version=3 Jan 23 18:50:02.975732 containerd[1626]: time="2026-01-23T18:50:02.975656352Z" level=info msg="CreateContainer within sandbox \"36b78459c6cbaa87c624fa74bd16023817f971081cce1b849f2ba4a435b4c9ca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e65e45fcbfbaf24ab3b3909393dbdcb872b793e37e0d483a46f3aef91e36e03f\"" Jan 23 18:50:02.975930 containerd[1626]: time="2026-01-23T18:50:02.975911557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-7-efa5270b02,Uid:8bcf172927dc2212e6065674975a6c96,Namespace:kube-system,Attempt:0,} returns sandbox id \"4431292e11560f3ab14a8fcd9a7282fcc8caeae46d9fc9ca698cb54f3b892413\"" Jan 23 18:50:02.976478 containerd[1626]: time="2026-01-23T18:50:02.976274796Z" level=info msg="StartContainer for \"e65e45fcbfbaf24ab3b3909393dbdcb872b793e37e0d483a46f3aef91e36e03f\"" Jan 23 18:50:02.977623 containerd[1626]: time="2026-01-23T18:50:02.977453046Z" level=info msg="connecting to shim e65e45fcbfbaf24ab3b3909393dbdcb872b793e37e0d483a46f3aef91e36e03f" address="unix:///run/containerd/s/e0f82a9d18204a0b2aadce23c018449ec33bba614fdbbdedd81cbcf90c305a68" protocol=ttrpc version=3 Jan 23 18:50:02.979065 containerd[1626]: time="2026-01-23T18:50:02.979050493Z" level=info msg="CreateContainer within sandbox \"4431292e11560f3ab14a8fcd9a7282fcc8caeae46d9fc9ca698cb54f3b892413\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 18:50:02.989215 containerd[1626]: time="2026-01-23T18:50:02.989188042Z" 
level=info msg="Container 3ebd6505d28fb51f56c6f97686202ad46882aa5a21b07cca5e1fa0b85b4c2ecb: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:02.996840 containerd[1626]: time="2026-01-23T18:50:02.996821285Z" level=info msg="CreateContainer within sandbox \"4431292e11560f3ab14a8fcd9a7282fcc8caeae46d9fc9ca698cb54f3b892413\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ebd6505d28fb51f56c6f97686202ad46882aa5a21b07cca5e1fa0b85b4c2ecb\"" Jan 23 18:50:02.997232 containerd[1626]: time="2026-01-23T18:50:02.997220426Z" level=info msg="StartContainer for \"3ebd6505d28fb51f56c6f97686202ad46882aa5a21b07cca5e1fa0b85b4c2ecb\"" Jan 23 18:50:02.997969 containerd[1626]: time="2026-01-23T18:50:02.997951875Z" level=info msg="connecting to shim 3ebd6505d28fb51f56c6f97686202ad46882aa5a21b07cca5e1fa0b85b4c2ecb" address="unix:///run/containerd/s/594d35ef6fe5d31d4559ef20f207b3e78d56af895c6370e900540c25f11ca20d" protocol=ttrpc version=3 Jan 23 18:50:02.998580 systemd[1]: Started cri-containerd-d9c428d9d0f2b0954b728759857feb49ba527c0b2a6997481a497d8465230313.scope - libcontainer container d9c428d9d0f2b0954b728759857feb49ba527c0b2a6997481a497d8465230313. Jan 23 18:50:03.001919 systemd[1]: Started cri-containerd-e65e45fcbfbaf24ab3b3909393dbdcb872b793e37e0d483a46f3aef91e36e03f.scope - libcontainer container e65e45fcbfbaf24ab3b3909393dbdcb872b793e37e0d483a46f3aef91e36e03f. Jan 23 18:50:03.025361 systemd[1]: Started cri-containerd-3ebd6505d28fb51f56c6f97686202ad46882aa5a21b07cca5e1fa0b85b4c2ecb.scope - libcontainer container 3ebd6505d28fb51f56c6f97686202ad46882aa5a21b07cca5e1fa0b85b4c2ecb. 
Jan 23 18:50:03.048711 kubelet[2405]: E0123 18:50:03.048679 2405 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://77.42.79.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 77.42.79.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:50:03.058362 kubelet[2405]: E0123 18:50:03.058334 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.79.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-7-efa5270b02?timeout=10s\": dial tcp 77.42.79.158:6443: connect: connection refused" interval="1.6s" Jan 23 18:50:03.068597 containerd[1626]: time="2026-01-23T18:50:03.068518620Z" level=info msg="StartContainer for \"d9c428d9d0f2b0954b728759857feb49ba527c0b2a6997481a497d8465230313\" returns successfully" Jan 23 18:50:03.074794 containerd[1626]: time="2026-01-23T18:50:03.074776213Z" level=info msg="StartContainer for \"e65e45fcbfbaf24ab3b3909393dbdcb872b793e37e0d483a46f3aef91e36e03f\" returns successfully" Jan 23 18:50:03.082338 kubelet[2405]: E0123 18:50:03.081885 2405 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://77.42.79.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 77.42.79.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:50:03.109707 containerd[1626]: time="2026-01-23T18:50:03.109656585Z" level=info msg="StartContainer for \"3ebd6505d28fb51f56c6f97686202ad46882aa5a21b07cca5e1fa0b85b4c2ecb\" returns successfully" Jan 23 18:50:03.242604 kubelet[2405]: I0123 18:50:03.242512 2405 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:03.690588 kubelet[2405]: E0123 18:50:03.690560 2405 kubelet.go:3215] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-7-efa5270b02\" not found" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:03.694526 kubelet[2405]: E0123 18:50:03.694438 2405 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-7-efa5270b02\" not found" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:03.696943 kubelet[2405]: E0123 18:50:03.696924 2405 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-7-efa5270b02\" not found" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:04.661282 kubelet[2405]: E0123 18:50:04.661225 2405 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-3-7-efa5270b02\" not found" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:04.702530 kubelet[2405]: E0123 18:50:04.702419 2405 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-7-efa5270b02\" not found" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:04.703773 kubelet[2405]: E0123 18:50:04.703166 2405 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-7-efa5270b02\" not found" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:04.703966 kubelet[2405]: E0123 18:50:04.703919 2405 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-7-efa5270b02\" not found" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:04.806999 kubelet[2405]: I0123 18:50:04.806826 2405 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:04.806999 kubelet[2405]: E0123 18:50:04.806879 2405 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459-2-3-7-efa5270b02\": node \"ci-4459-2-3-7-efa5270b02\" not found" Jan 23 
18:50:04.840686 kubelet[2405]: E0123 18:50:04.840634 2405 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-3-7-efa5270b02\" not found" Jan 23 18:50:04.940980 kubelet[2405]: E0123 18:50:04.940807 2405 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-3-7-efa5270b02\" not found" Jan 23 18:50:05.042000 kubelet[2405]: E0123 18:50:05.041926 2405 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-3-7-efa5270b02\" not found" Jan 23 18:50:05.142297 kubelet[2405]: E0123 18:50:05.142217 2405 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-3-7-efa5270b02\" not found" Jan 23 18:50:05.243321 kubelet[2405]: E0123 18:50:05.243147 2405 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-3-7-efa5270b02\" not found" Jan 23 18:50:05.343933 kubelet[2405]: E0123 18:50:05.343861 2405 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-3-7-efa5270b02\" not found" Jan 23 18:50:05.445019 kubelet[2405]: E0123 18:50:05.444953 2405 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-3-7-efa5270b02\" not found" Jan 23 18:50:05.546025 kubelet[2405]: E0123 18:50:05.545857 2405 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-3-7-efa5270b02\" not found" Jan 23 18:50:05.646640 kubelet[2405]: E0123 18:50:05.646505 2405 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-3-7-efa5270b02\" not found" Jan 23 18:50:05.704768 kubelet[2405]: E0123 18:50:05.704709 2405 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-7-efa5270b02\" not found" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:05.746879 kubelet[2405]: E0123 18:50:05.746814 2405 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-3-7-efa5270b02\" not found" Jan 23 18:50:05.856185 kubelet[2405]: I0123 18:50:05.855925 2405 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:05.868205 kubelet[2405]: I0123 18:50:05.868155 2405 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:05.876169 kubelet[2405]: I0123 18:50:05.876100 2405 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:06.165555 kubelet[2405]: I0123 18:50:06.165506 2405 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:06.174565 kubelet[2405]: E0123 18:50:06.174532 2405 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-7-efa5270b02\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:06.636548 kubelet[2405]: I0123 18:50:06.636231 2405 apiserver.go:52] "Watching apiserver" Jan 23 18:50:06.655296 kubelet[2405]: I0123 18:50:06.655222 2405 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 18:50:06.758711 systemd[1]: Reload requested from client PID 2689 ('systemctl') (unit session-7.scope)... Jan 23 18:50:06.758738 systemd[1]: Reloading... Jan 23 18:50:06.909341 zram_generator::config[2730]: No configuration found. Jan 23 18:50:07.122869 systemd[1]: Reloading finished in 363 ms. Jan 23 18:50:07.152059 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:50:07.169628 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:50:07.169832 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 18:50:07.169865 systemd[1]: kubelet.service: Consumed 819ms CPU time, 124.4M memory peak. Jan 23 18:50:07.173405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:50:07.372126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:50:07.385434 (kubelet)[2784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:50:07.443623 kubelet[2784]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:50:07.443623 kubelet[2784]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:50:07.444010 kubelet[2784]: I0123 18:50:07.443708 2784 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:50:07.451074 kubelet[2784]: I0123 18:50:07.451040 2784 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 18:50:07.451074 kubelet[2784]: I0123 18:50:07.451070 2784 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:50:07.451175 kubelet[2784]: I0123 18:50:07.451110 2784 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 18:50:07.451175 kubelet[2784]: I0123 18:50:07.451120 2784 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 18:50:07.451470 kubelet[2784]: I0123 18:50:07.451449 2784 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:50:07.453388 kubelet[2784]: I0123 18:50:07.453359 2784 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 18:50:07.456762 kubelet[2784]: I0123 18:50:07.456484 2784 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:50:07.460551 kubelet[2784]: I0123 18:50:07.460530 2784 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:50:07.467093 kubelet[2784]: I0123 18:50:07.467058 2784 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 23 18:50:07.468266 kubelet[2784]: I0123 18:50:07.467792 2784 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:50:07.468266 kubelet[2784]: I0123 18:50:07.467820 2784 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4459-2-3-7-efa5270b02","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:50:07.468266 kubelet[2784]: I0123 18:50:07.467919 2784 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:50:07.468266 kubelet[2784]: I0123 18:50:07.467925 2784 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 18:50:07.468395 kubelet[2784]: I0123 18:50:07.467943 2784 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 18:50:07.468615 kubelet[2784]: I0123 18:50:07.468591 2784 
state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:50:07.468876 kubelet[2784]: I0123 18:50:07.468857 2784 kubelet.go:475] "Attempting to sync node with API server" Jan 23 18:50:07.468895 kubelet[2784]: I0123 18:50:07.468889 2784 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:50:07.468935 kubelet[2784]: I0123 18:50:07.468919 2784 kubelet.go:387] "Adding apiserver pod source" Jan 23 18:50:07.468973 kubelet[2784]: I0123 18:50:07.468957 2784 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:50:07.476085 kubelet[2784]: I0123 18:50:07.474099 2784 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:50:07.476085 kubelet[2784]: I0123 18:50:07.474435 2784 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:50:07.476085 kubelet[2784]: I0123 18:50:07.474450 2784 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 18:50:07.478064 kubelet[2784]: I0123 18:50:07.478042 2784 server.go:1262] "Started kubelet" Jan 23 18:50:07.479084 kubelet[2784]: I0123 18:50:07.479049 2784 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:50:07.479137 kubelet[2784]: I0123 18:50:07.479110 2784 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 18:50:07.479206 kubelet[2784]: I0123 18:50:07.479198 2784 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:50:07.483142 kubelet[2784]: I0123 18:50:07.483069 2784 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:50:07.483296 kubelet[2784]: I0123 18:50:07.483228 2784 server.go:180] "Starting to listen" 
address="0.0.0.0" port=10250 Jan 23 18:50:07.484716 kubelet[2784]: I0123 18:50:07.484690 2784 server.go:310] "Adding debug handlers to kubelet server" Jan 23 18:50:07.490286 kubelet[2784]: I0123 18:50:07.489502 2784 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:50:07.491469 kubelet[2784]: I0123 18:50:07.491460 2784 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 18:50:07.494285 kubelet[2784]: I0123 18:50:07.494270 2784 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 18:50:07.494621 kubelet[2784]: I0123 18:50:07.494612 2784 reconciler.go:29] "Reconciler: start to sync state" Jan 23 18:50:07.496913 kubelet[2784]: I0123 18:50:07.496881 2784 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 18:50:07.498374 kubelet[2784]: I0123 18:50:07.498363 2784 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 18:50:07.498450 kubelet[2784]: I0123 18:50:07.498444 2784 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 18:50:07.498507 kubelet[2784]: I0123 18:50:07.498501 2784 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 18:50:07.498583 kubelet[2784]: E0123 18:50:07.498573 2784 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:50:07.504176 kubelet[2784]: I0123 18:50:07.504141 2784 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:50:07.504176 kubelet[2784]: I0123 18:50:07.504177 2784 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:50:07.504385 kubelet[2784]: I0123 18:50:07.504357 2784 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:50:07.547855 kubelet[2784]: I0123 18:50:07.547819 2784 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:50:07.547855 kubelet[2784]: I0123 18:50:07.547831 2784 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:50:07.547855 kubelet[2784]: I0123 18:50:07.547845 2784 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:50:07.547983 kubelet[2784]: I0123 18:50:07.547931 2784 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 18:50:07.547983 kubelet[2784]: I0123 18:50:07.547938 2784 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 18:50:07.547983 kubelet[2784]: I0123 18:50:07.547948 2784 policy_none.go:49] "None policy: Start" Jan 23 18:50:07.547983 kubelet[2784]: I0123 18:50:07.547955 2784 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 18:50:07.547983 kubelet[2784]: I0123 18:50:07.547962 2784 state_mem.go:36] "Initializing new in-memory state store" 
logger="Memory Manager state checkpoint" Jan 23 18:50:07.548059 kubelet[2784]: I0123 18:50:07.548017 2784 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 18:50:07.548059 kubelet[2784]: I0123 18:50:07.548022 2784 policy_none.go:47] "Start" Jan 23 18:50:07.551805 kubelet[2784]: E0123 18:50:07.551783 2784 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:50:07.552931 kubelet[2784]: I0123 18:50:07.552544 2784 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:50:07.552931 kubelet[2784]: I0123 18:50:07.552580 2784 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:50:07.552931 kubelet[2784]: I0123 18:50:07.552836 2784 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:50:07.554782 kubelet[2784]: E0123 18:50:07.554749 2784 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:50:07.600289 kubelet[2784]: I0123 18:50:07.600213 2784 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.600642 kubelet[2784]: I0123 18:50:07.600584 2784 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.600897 kubelet[2784]: I0123 18:50:07.600866 2784 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.609538 kubelet[2784]: E0123 18:50:07.609494 2784 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-7-efa5270b02\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.610005 kubelet[2784]: E0123 18:50:07.609936 2784 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-7-efa5270b02\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.610453 kubelet[2784]: E0123 18:50:07.610329 2784 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-3-7-efa5270b02\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.659409 kubelet[2784]: I0123 18:50:07.659357 2784 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.670566 kubelet[2784]: I0123 18:50:07.670499 2784 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.671410 kubelet[2784]: I0123 18:50:07.671366 2784 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.695857 kubelet[2784]: I0123 18:50:07.695782 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f1a1634393cc4c978623aff1140a44a0-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-7-efa5270b02\" (UID: \"f1a1634393cc4c978623aff1140a44a0\") " pod="kube-system/kube-scheduler-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.695857 kubelet[2784]: I0123 18:50:07.695830 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/826fad8ef188df654393bdaedb58829c-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-7-efa5270b02\" (UID: \"826fad8ef188df654393bdaedb58829c\") " pod="kube-system/kube-apiserver-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.696054 kubelet[2784]: I0123 18:50:07.695867 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/826fad8ef188df654393bdaedb58829c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-7-efa5270b02\" (UID: \"826fad8ef188df654393bdaedb58829c\") " pod="kube-system/kube-apiserver-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.696054 kubelet[2784]: I0123 18:50:07.695901 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8bcf172927dc2212e6065674975a6c96-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-7-efa5270b02\" (UID: \"8bcf172927dc2212e6065674975a6c96\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.696054 kubelet[2784]: I0123 18:50:07.695951 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/826fad8ef188df654393bdaedb58829c-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-7-efa5270b02\" (UID: \"826fad8ef188df654393bdaedb58829c\") " pod="kube-system/kube-apiserver-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.696054 kubelet[2784]: I0123 18:50:07.695984 2784 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8bcf172927dc2212e6065674975a6c96-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-7-efa5270b02\" (UID: \"8bcf172927dc2212e6065674975a6c96\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.696054 kubelet[2784]: I0123 18:50:07.696013 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8bcf172927dc2212e6065674975a6c96-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-7-efa5270b02\" (UID: \"8bcf172927dc2212e6065674975a6c96\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.696329 kubelet[2784]: I0123 18:50:07.696049 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8bcf172927dc2212e6065674975a6c96-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-7-efa5270b02\" (UID: \"8bcf172927dc2212e6065674975a6c96\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.696329 kubelet[2784]: I0123 18:50:07.696110 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bcf172927dc2212e6065674975a6c96-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-7-efa5270b02\" (UID: \"8bcf172927dc2212e6065674975a6c96\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:07.756300 sudo[2822]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 18:50:07.756968 sudo[2822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 18:50:08.107062 sudo[2822]: pam_unix(sudo:session): session closed 
for user root Jan 23 18:50:08.473948 kubelet[2784]: I0123 18:50:08.473903 2784 apiserver.go:52] "Watching apiserver" Jan 23 18:50:08.495869 kubelet[2784]: I0123 18:50:08.495229 2784 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 18:50:08.535430 kubelet[2784]: I0123 18:50:08.534606 2784 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:08.536633 kubelet[2784]: I0123 18:50:08.536199 2784 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:08.545141 kubelet[2784]: E0123 18:50:08.545114 2784 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-7-efa5270b02\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:08.550443 kubelet[2784]: E0123 18:50:08.550185 2784 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-7-efa5270b02\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-3-7-efa5270b02" Jan 23 18:50:08.590805 kubelet[2784]: I0123 18:50:08.590745 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-3-7-efa5270b02" podStartSLOduration=3.590726063 podStartE2EDuration="3.590726063s" podCreationTimestamp="2026-01-23 18:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:08.576934966 +0000 UTC m=+1.184889491" watchObservedRunningTime="2026-01-23 18:50:08.590726063 +0000 UTC m=+1.198680588" Jan 23 18:50:08.604748 kubelet[2784]: I0123 18:50:08.604314 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-3-7-efa5270b02" podStartSLOduration=3.6042871659999998 podStartE2EDuration="3.604287166s" podCreationTimestamp="2026-01-23 18:50:05 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:08.591807893 +0000 UTC m=+1.199762418" watchObservedRunningTime="2026-01-23 18:50:08.604287166 +0000 UTC m=+1.212241701" Jan 23 18:50:09.998676 sudo[1850]: pam_unix(sudo:session): session closed for user root Jan 23 18:50:10.120652 sshd[1849]: Connection closed by 20.161.92.111 port 36574 Jan 23 18:50:10.121519 sshd-session[1832]: pam_unix(sshd:session): session closed for user core Jan 23 18:50:10.128110 systemd[1]: sshd@6-77.42.79.158:22-20.161.92.111:36574.service: Deactivated successfully. Jan 23 18:50:10.132526 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 18:50:10.133002 systemd[1]: session-7.scope: Consumed 5.822s CPU time, 274.9M memory peak. Jan 23 18:50:10.136633 systemd-logind[1605]: Session 7 logged out. Waiting for processes to exit. Jan 23 18:50:10.140584 systemd-logind[1605]: Removed session 7. Jan 23 18:50:13.250986 kubelet[2784]: I0123 18:50:13.250924 2784 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 18:50:13.253595 kubelet[2784]: I0123 18:50:13.252043 2784 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 18:50:13.253698 containerd[1626]: time="2026-01-23T18:50:13.251579807Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 23 18:50:13.929407 kubelet[2784]: I0123 18:50:13.927709 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-3-7-efa5270b02" podStartSLOduration=8.927685523 podStartE2EDuration="8.927685523s" podCreationTimestamp="2026-01-23 18:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:08.606109988 +0000 UTC m=+1.214064523" watchObservedRunningTime="2026-01-23 18:50:13.927685523 +0000 UTC m=+6.535640048" Jan 23 18:50:13.943641 kubelet[2784]: I0123 18:50:13.943605 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a46ada7c-9491-440b-a693-b2ea2835be7b-xtables-lock\") pod \"kube-proxy-gglzc\" (UID: \"a46ada7c-9491-440b-a693-b2ea2835be7b\") " pod="kube-system/kube-proxy-gglzc" Jan 23 18:50:13.943923 kubelet[2784]: I0123 18:50:13.943897 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84sjv\" (UniqueName: \"kubernetes.io/projected/a46ada7c-9491-440b-a693-b2ea2835be7b-kube-api-access-84sjv\") pod \"kube-proxy-gglzc\" (UID: \"a46ada7c-9491-440b-a693-b2ea2835be7b\") " pod="kube-system/kube-proxy-gglzc" Jan 23 18:50:13.944174 kubelet[2784]: I0123 18:50:13.944140 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a46ada7c-9491-440b-a693-b2ea2835be7b-kube-proxy\") pod \"kube-proxy-gglzc\" (UID: \"a46ada7c-9491-440b-a693-b2ea2835be7b\") " pod="kube-system/kube-proxy-gglzc" Jan 23 18:50:13.944373 kubelet[2784]: I0123 18:50:13.944229 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/a46ada7c-9491-440b-a693-b2ea2835be7b-lib-modules\") pod \"kube-proxy-gglzc\" (UID: \"a46ada7c-9491-440b-a693-b2ea2835be7b\") " pod="kube-system/kube-proxy-gglzc" Jan 23 18:50:13.948029 systemd[1]: Created slice kubepods-besteffort-poda46ada7c_9491_440b_a693_b2ea2835be7b.slice - libcontainer container kubepods-besteffort-poda46ada7c_9491_440b_a693_b2ea2835be7b.slice. Jan 23 18:50:13.957927 kubelet[2784]: E0123 18:50:13.957825 2784 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-gglzc\" is forbidden: User \"system:node:ci-4459-2-3-7-efa5270b02\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-3-7-efa5270b02' and this object" podUID="a46ada7c-9491-440b-a693-b2ea2835be7b" pod="kube-system/kube-proxy-gglzc" Jan 23 18:50:13.958077 kubelet[2784]: E0123 18:50:13.957994 2784 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4459-2-3-7-efa5270b02\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-3-7-efa5270b02' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Jan 23 18:50:13.958077 kubelet[2784]: E0123 18:50:13.958066 2784 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4459-2-3-7-efa5270b02\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-3-7-efa5270b02' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Jan 23 18:50:14.005926 systemd[1]: Created slice kubepods-burstable-pod6397d374_bcd0_4226_8fe0_e75aa843876b.slice - libcontainer container 
kubepods-burstable-pod6397d374_bcd0_4226_8fe0_e75aa843876b.slice. Jan 23 18:50:14.047415 kubelet[2784]: I0123 18:50:14.047375 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-bpf-maps\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.047415 kubelet[2784]: I0123 18:50:14.047404 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-lib-modules\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.047415 kubelet[2784]: I0123 18:50:14.047415 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6397d374-bcd0-4226-8fe0-e75aa843876b-cilium-config-path\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.047415 kubelet[2784]: I0123 18:50:14.047426 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-host-proc-sys-net\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.047595 kubelet[2784]: I0123 18:50:14.047437 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-xtables-lock\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.047595 kubelet[2784]: I0123 18:50:14.047447 2784 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6397d374-bcd0-4226-8fe0-e75aa843876b-clustermesh-secrets\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.047595 kubelet[2784]: I0123 18:50:14.047463 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqdm9\" (UniqueName: \"kubernetes.io/projected/6397d374-bcd0-4226-8fe0-e75aa843876b-kube-api-access-kqdm9\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.047595 kubelet[2784]: I0123 18:50:14.047474 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-hostproc\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.047595 kubelet[2784]: I0123 18:50:14.047483 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-cni-path\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.047595 kubelet[2784]: I0123 18:50:14.047494 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6397d374-bcd0-4226-8fe0-e75aa843876b-hubble-tls\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.047690 kubelet[2784]: I0123 18:50:14.047517 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-cilium-run\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.047690 kubelet[2784]: I0123 18:50:14.047527 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-cilium-cgroup\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.047690 kubelet[2784]: I0123 18:50:14.047537 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-etc-cni-netd\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.047690 kubelet[2784]: I0123 18:50:14.047548 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-host-proc-sys-kernel\") pod \"cilium-dwcf5\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") " pod="kube-system/cilium-dwcf5" Jan 23 18:50:14.466366 systemd[1]: Created slice kubepods-besteffort-pod5e1596e9_7629_4d69_808a_0b1c8219bc4f.slice - libcontainer container kubepods-besteffort-pod5e1596e9_7629_4d69_808a_0b1c8219bc4f.slice. 
Jan 23 18:50:14.554238 kubelet[2784]: I0123 18:50:14.554156 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e1596e9-7629-4d69-808a-0b1c8219bc4f-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-fnnx8\" (UID: \"5e1596e9-7629-4d69-808a-0b1c8219bc4f\") " pod="kube-system/cilium-operator-6f9c7c5859-fnnx8" Jan 23 18:50:14.554238 kubelet[2784]: I0123 18:50:14.554215 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4r9v\" (UniqueName: \"kubernetes.io/projected/5e1596e9-7629-4d69-808a-0b1c8219bc4f-kube-api-access-b4r9v\") pod \"cilium-operator-6f9c7c5859-fnnx8\" (UID: \"5e1596e9-7629-4d69-808a-0b1c8219bc4f\") " pod="kube-system/cilium-operator-6f9c7c5859-fnnx8" Jan 23 18:50:14.708772 update_engine[1609]: I20260123 18:50:14.708585 1609 update_attempter.cc:509] Updating boot flags... Jan 23 18:50:15.056189 kubelet[2784]: E0123 18:50:15.056126 2784 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 18:50:15.056189 kubelet[2784]: E0123 18:50:15.056183 2784 projected.go:196] Error preparing data for projected volume kube-api-access-84sjv for pod kube-system/kube-proxy-gglzc: failed to sync configmap cache: timed out waiting for the condition Jan 23 18:50:15.058021 kubelet[2784]: E0123 18:50:15.056692 2784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a46ada7c-9491-440b-a693-b2ea2835be7b-kube-api-access-84sjv podName:a46ada7c-9491-440b-a693-b2ea2835be7b nodeName:}" failed. No retries permitted until 2026-01-23 18:50:15.556321684 +0000 UTC m=+8.164276209 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-84sjv" (UniqueName: "kubernetes.io/projected/a46ada7c-9491-440b-a693-b2ea2835be7b-kube-api-access-84sjv") pod "kube-proxy-gglzc" (UID: "a46ada7c-9491-440b-a693-b2ea2835be7b") : failed to sync configmap cache: timed out waiting for the condition Jan 23 18:50:15.215907 containerd[1626]: time="2026-01-23T18:50:15.215826538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dwcf5,Uid:6397d374-bcd0-4226-8fe0-e75aa843876b,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:15.254053 containerd[1626]: time="2026-01-23T18:50:15.253945750Z" level=info msg="connecting to shim d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20" address="unix:///run/containerd/s/50b0e54cb872a7080474e47bfae6b86a4db411dbc831fc7ea3ab0dba28740ba7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:15.297541 systemd[1]: Started cri-containerd-d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20.scope - libcontainer container d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20. 
Jan 23 18:50:15.352023 containerd[1626]: time="2026-01-23T18:50:15.351806231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dwcf5,Uid:6397d374-bcd0-4226-8fe0-e75aa843876b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\"" Jan 23 18:50:15.358238 containerd[1626]: time="2026-01-23T18:50:15.357564154Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 18:50:15.375283 containerd[1626]: time="2026-01-23T18:50:15.375135112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fnnx8,Uid:5e1596e9-7629-4d69-808a-0b1c8219bc4f,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:15.407443 containerd[1626]: time="2026-01-23T18:50:15.407392543Z" level=info msg="connecting to shim d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217" address="unix:///run/containerd/s/ee6ca3f18e3ded64bd882b5abd041abdefb0621efe12274c9ce1679810060231" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:15.460483 systemd[1]: Started cri-containerd-d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217.scope - libcontainer container d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217. 
Jan 23 18:50:15.564945 containerd[1626]: time="2026-01-23T18:50:15.564879336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fnnx8,Uid:5e1596e9-7629-4d69-808a-0b1c8219bc4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\"" Jan 23 18:50:15.768439 containerd[1626]: time="2026-01-23T18:50:15.768375623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gglzc,Uid:a46ada7c-9491-440b-a693-b2ea2835be7b,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:15.798336 containerd[1626]: time="2026-01-23T18:50:15.797762235Z" level=info msg="connecting to shim 41844f172973996b4e531a6ca6e6cf9b1c7803974129f3c5bef21be8e9b620f3" address="unix:///run/containerd/s/269fff311cab4d7537aa5e6e4d82e1f4ec031af926dcff03b38177348e0f2dfd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:15.848515 systemd[1]: Started cri-containerd-41844f172973996b4e531a6ca6e6cf9b1c7803974129f3c5bef21be8e9b620f3.scope - libcontainer container 41844f172973996b4e531a6ca6e6cf9b1c7803974129f3c5bef21be8e9b620f3. 
Jan 23 18:50:15.908432 containerd[1626]: time="2026-01-23T18:50:15.908357591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gglzc,Uid:a46ada7c-9491-440b-a693-b2ea2835be7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"41844f172973996b4e531a6ca6e6cf9b1c7803974129f3c5bef21be8e9b620f3\"" Jan 23 18:50:15.917406 containerd[1626]: time="2026-01-23T18:50:15.915940735Z" level=info msg="CreateContainer within sandbox \"41844f172973996b4e531a6ca6e6cf9b1c7803974129f3c5bef21be8e9b620f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:50:15.930125 containerd[1626]: time="2026-01-23T18:50:15.930053360Z" level=info msg="Container 6ca3251d817525f3e4deeb82f3020d1b2484645179f70789de630a714f3e27a1: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:15.939840 containerd[1626]: time="2026-01-23T18:50:15.939762273Z" level=info msg="CreateContainer within sandbox \"41844f172973996b4e531a6ca6e6cf9b1c7803974129f3c5bef21be8e9b620f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6ca3251d817525f3e4deeb82f3020d1b2484645179f70789de630a714f3e27a1\"" Jan 23 18:50:15.940957 containerd[1626]: time="2026-01-23T18:50:15.940908349Z" level=info msg="StartContainer for \"6ca3251d817525f3e4deeb82f3020d1b2484645179f70789de630a714f3e27a1\"" Jan 23 18:50:15.944030 containerd[1626]: time="2026-01-23T18:50:15.943742410Z" level=info msg="connecting to shim 6ca3251d817525f3e4deeb82f3020d1b2484645179f70789de630a714f3e27a1" address="unix:///run/containerd/s/269fff311cab4d7537aa5e6e4d82e1f4ec031af926dcff03b38177348e0f2dfd" protocol=ttrpc version=3 Jan 23 18:50:15.983493 systemd[1]: Started cri-containerd-6ca3251d817525f3e4deeb82f3020d1b2484645179f70789de630a714f3e27a1.scope - libcontainer container 6ca3251d817525f3e4deeb82f3020d1b2484645179f70789de630a714f3e27a1. 
Jan 23 18:50:16.116732 containerd[1626]: time="2026-01-23T18:50:16.116582863Z" level=info msg="StartContainer for \"6ca3251d817525f3e4deeb82f3020d1b2484645179f70789de630a714f3e27a1\" returns successfully" Jan 23 18:50:16.923130 kubelet[2784]: I0123 18:50:16.923038 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gglzc" podStartSLOduration=3.923015455 podStartE2EDuration="3.923015455s" podCreationTimestamp="2026-01-23 18:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:16.572444372 +0000 UTC m=+9.180398897" watchObservedRunningTime="2026-01-23 18:50:16.923015455 +0000 UTC m=+9.530969980" Jan 23 18:50:20.082164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount27893894.mount: Deactivated successfully. Jan 23 18:50:21.509433 containerd[1626]: time="2026-01-23T18:50:21.509342193Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:21.510417 containerd[1626]: time="2026-01-23T18:50:21.510228531Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 18:50:21.511333 containerd[1626]: time="2026-01-23T18:50:21.511305174Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:21.512361 containerd[1626]: time="2026-01-23T18:50:21.512333600Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.154688265s" Jan 23 18:50:21.512407 containerd[1626]: time="2026-01-23T18:50:21.512360578Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 18:50:21.513707 containerd[1626]: time="2026-01-23T18:50:21.513677561Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 18:50:21.516706 containerd[1626]: time="2026-01-23T18:50:21.516660957Z" level=info msg="CreateContainer within sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 18:50:21.525321 containerd[1626]: time="2026-01-23T18:50:21.524897158Z" level=info msg="Container df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:21.528580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3202990903.mount: Deactivated successfully. 
Jan 23 18:50:21.532695 containerd[1626]: time="2026-01-23T18:50:21.532659535Z" level=info msg="CreateContainer within sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282\"" Jan 23 18:50:21.533358 containerd[1626]: time="2026-01-23T18:50:21.533127597Z" level=info msg="StartContainer for \"df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282\"" Jan 23 18:50:21.533851 containerd[1626]: time="2026-01-23T18:50:21.533835080Z" level=info msg="connecting to shim df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282" address="unix:///run/containerd/s/50b0e54cb872a7080474e47bfae6b86a4db411dbc831fc7ea3ab0dba28740ba7" protocol=ttrpc version=3 Jan 23 18:50:21.551399 systemd[1]: Started cri-containerd-df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282.scope - libcontainer container df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282. Jan 23 18:50:21.577626 containerd[1626]: time="2026-01-23T18:50:21.577575210Z" level=info msg="StartContainer for \"df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282\" returns successfully" Jan 23 18:50:21.588758 systemd[1]: cri-containerd-df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282.scope: Deactivated successfully. Jan 23 18:50:21.590505 containerd[1626]: time="2026-01-23T18:50:21.590472030Z" level=info msg="received container exit event container_id:\"df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282\" id:\"df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282\" pid:3228 exited_at:{seconds:1769194221 nanos:590185352}" Jan 23 18:50:21.607515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282-rootfs.mount: Deactivated successfully. 
Jan 23 18:50:22.592208 containerd[1626]: time="2026-01-23T18:50:22.591511441Z" level=info msg="CreateContainer within sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 18:50:22.626664 containerd[1626]: time="2026-01-23T18:50:22.625475620Z" level=info msg="Container 268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:22.638332 containerd[1626]: time="2026-01-23T18:50:22.638278084Z" level=info msg="CreateContainer within sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a\"" Jan 23 18:50:22.639099 containerd[1626]: time="2026-01-23T18:50:22.639068282Z" level=info msg="StartContainer for \"268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a\"" Jan 23 18:50:22.641276 containerd[1626]: time="2026-01-23T18:50:22.640955755Z" level=info msg="connecting to shim 268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a" address="unix:///run/containerd/s/50b0e54cb872a7080474e47bfae6b86a4db411dbc831fc7ea3ab0dba28740ba7" protocol=ttrpc version=3 Jan 23 18:50:22.676454 systemd[1]: Started cri-containerd-268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a.scope - libcontainer container 268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a. Jan 23 18:50:22.733772 containerd[1626]: time="2026-01-23T18:50:22.733650325Z" level=info msg="StartContainer for \"268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a\" returns successfully" Jan 23 18:50:22.761822 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:50:22.763301 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 23 18:50:22.764471 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:50:22.770682 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:50:22.772511 containerd[1626]: time="2026-01-23T18:50:22.771459804Z" level=info msg="received container exit event container_id:\"268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a\" id:\"268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a\" pid:3278 exited_at:{seconds:1769194222 nanos:768957029}" Jan 23 18:50:22.776159 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 18:50:22.777381 systemd[1]: cri-containerd-268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a.scope: Deactivated successfully. Jan 23 18:50:22.820360 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:50:23.598912 containerd[1626]: time="2026-01-23T18:50:23.598760316Z" level=info msg="CreateContainer within sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 18:50:23.622794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a-rootfs.mount: Deactivated successfully. Jan 23 18:50:23.636991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434237891.mount: Deactivated successfully. 
Jan 23 18:50:23.639087 containerd[1626]: time="2026-01-23T18:50:23.638982524Z" level=info msg="Container d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:23.647324 containerd[1626]: time="2026-01-23T18:50:23.647295936Z" level=info msg="CreateContainer within sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82\"" Jan 23 18:50:23.648111 containerd[1626]: time="2026-01-23T18:50:23.648085478Z" level=info msg="StartContainer for \"d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82\"" Jan 23 18:50:23.648984 containerd[1626]: time="2026-01-23T18:50:23.648939314Z" level=info msg="connecting to shim d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82" address="unix:///run/containerd/s/50b0e54cb872a7080474e47bfae6b86a4db411dbc831fc7ea3ab0dba28740ba7" protocol=ttrpc version=3 Jan 23 18:50:23.681455 systemd[1]: Started cri-containerd-d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82.scope - libcontainer container d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82. Jan 23 18:50:23.775677 containerd[1626]: time="2026-01-23T18:50:23.775570514Z" level=info msg="StartContainer for \"d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82\" returns successfully" Jan 23 18:50:23.782360 systemd[1]: cri-containerd-d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82.scope: Deactivated successfully. 
Jan 23 18:50:23.787645 containerd[1626]: time="2026-01-23T18:50:23.787611659Z" level=info msg="received container exit event container_id:\"d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82\" id:\"d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82\" pid:3336 exited_at:{seconds:1769194223 nanos:787289983}" Jan 23 18:50:24.005588 containerd[1626]: time="2026-01-23T18:50:24.005533223Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:24.006568 containerd[1626]: time="2026-01-23T18:50:24.006435529Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 18:50:24.007866 containerd[1626]: time="2026-01-23T18:50:24.007830540Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:24.008960 containerd[1626]: time="2026-01-23T18:50:24.008879145Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.495172506s" Jan 23 18:50:24.008960 containerd[1626]: time="2026-01-23T18:50:24.008900213Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 18:50:24.013051 containerd[1626]: 
time="2026-01-23T18:50:24.013015070Z" level=info msg="CreateContainer within sandbox \"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 18:50:24.020992 containerd[1626]: time="2026-01-23T18:50:24.020956145Z" level=info msg="Container b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:24.034515 containerd[1626]: time="2026-01-23T18:50:24.034463753Z" level=info msg="CreateContainer within sandbox \"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34\"" Jan 23 18:50:24.035425 containerd[1626]: time="2026-01-23T18:50:24.035371449Z" level=info msg="StartContainer for \"b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34\"" Jan 23 18:50:24.036758 containerd[1626]: time="2026-01-23T18:50:24.036686095Z" level=info msg="connecting to shim b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34" address="unix:///run/containerd/s/ee6ca3f18e3ded64bd882b5abd041abdefb0621efe12274c9ce1679810060231" protocol=ttrpc version=3 Jan 23 18:50:24.063448 systemd[1]: Started cri-containerd-b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34.scope - libcontainer container b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34. Jan 23 18:50:24.120910 containerd[1626]: time="2026-01-23T18:50:24.120804226Z" level=info msg="StartContainer for \"b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34\" returns successfully" Jan 23 18:50:24.624241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82-rootfs.mount: Deactivated successfully. 
Jan 23 18:50:24.640160 containerd[1626]: time="2026-01-23T18:50:24.640123761Z" level=info msg="CreateContainer within sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 18:50:24.655854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236647243.mount: Deactivated successfully. Jan 23 18:50:24.658542 containerd[1626]: time="2026-01-23T18:50:24.658500122Z" level=info msg="Container dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:24.664792 containerd[1626]: time="2026-01-23T18:50:24.664767916Z" level=info msg="CreateContainer within sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4\"" Jan 23 18:50:24.665209 containerd[1626]: time="2026-01-23T18:50:24.665186866Z" level=info msg="StartContainer for \"dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4\"" Jan 23 18:50:24.665754 containerd[1626]: time="2026-01-23T18:50:24.665736487Z" level=info msg="connecting to shim dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4" address="unix:///run/containerd/s/50b0e54cb872a7080474e47bfae6b86a4db411dbc831fc7ea3ab0dba28740ba7" protocol=ttrpc version=3 Jan 23 18:50:24.691499 systemd[1]: Started cri-containerd-dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4.scope - libcontainer container dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4. 
Jan 23 18:50:24.760809 containerd[1626]: time="2026-01-23T18:50:24.760775581Z" level=info msg="StartContainer for \"dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4\" returns successfully" Jan 23 18:50:24.770826 systemd[1]: cri-containerd-dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4.scope: Deactivated successfully. Jan 23 18:50:24.772608 containerd[1626]: time="2026-01-23T18:50:24.771290152Z" level=info msg="received container exit event container_id:\"dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4\" id:\"dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4\" pid:3410 exited_at:{seconds:1769194224 nanos:771035240}" Jan 23 18:50:25.620732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4-rootfs.mount: Deactivated successfully. Jan 23 18:50:25.668350 containerd[1626]: time="2026-01-23T18:50:25.668214442Z" level=info msg="CreateContainer within sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 18:50:25.699847 containerd[1626]: time="2026-01-23T18:50:25.699089146Z" level=info msg="Container c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:25.710785 containerd[1626]: time="2026-01-23T18:50:25.710722562Z" level=info msg="CreateContainer within sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2\"" Jan 23 18:50:25.712712 containerd[1626]: time="2026-01-23T18:50:25.712492081Z" level=info msg="StartContainer for \"c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2\"" Jan 23 18:50:25.713965 containerd[1626]: time="2026-01-23T18:50:25.713920304Z" level=info msg="connecting to shim 
c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2" address="unix:///run/containerd/s/50b0e54cb872a7080474e47bfae6b86a4db411dbc831fc7ea3ab0dba28740ba7" protocol=ttrpc version=3 Jan 23 18:50:25.724194 kubelet[2784]: I0123 18:50:25.723836 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-fnnx8" podStartSLOduration=3.285470763 podStartE2EDuration="11.723820329s" podCreationTimestamp="2026-01-23 18:50:14 +0000 UTC" firstStartedPulling="2026-01-23 18:50:15.571389548 +0000 UTC m=+8.179344073" lastFinishedPulling="2026-01-23 18:50:24.009739154 +0000 UTC m=+16.617693639" observedRunningTime="2026-01-23 18:50:24.78844668 +0000 UTC m=+17.396401165" watchObservedRunningTime="2026-01-23 18:50:25.723820329 +0000 UTC m=+18.331774844" Jan 23 18:50:25.743342 systemd[1]: Started cri-containerd-c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2.scope - libcontainer container c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2. Jan 23 18:50:25.785416 containerd[1626]: time="2026-01-23T18:50:25.785383519Z" level=info msg="StartContainer for \"c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2\" returns successfully" Jan 23 18:50:25.952856 kubelet[2784]: I0123 18:50:25.951973 2784 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 18:50:25.995529 systemd[1]: Created slice kubepods-burstable-pod273fc85d_add8_4334_ba70_cc8c9467621d.slice - libcontainer container kubepods-burstable-pod273fc85d_add8_4334_ba70_cc8c9467621d.slice. Jan 23 18:50:26.008713 systemd[1]: Created slice kubepods-burstable-pod8fbbe659_29f3_4a3c_bf02_b377e03009df.slice - libcontainer container kubepods-burstable-pod8fbbe659_29f3_4a3c_bf02_b377e03009df.slice. 
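The pod_startup_latency_tracker entry above reports, for cilium-operator, `podStartE2EDuration=11.723820329s` but `podStartSLOduration=3.285470763`. Subtracting the image-pull window (the difference between the monotonic `m=+` offsets of lastFinishedPulling and firstStartedPulling) from the E2E duration reproduces the SLO figure exactly, suggesting the SLO metric excludes pull time; that reading is inferred from the logged numbers, not from the tracker's source:

```go
package main

import "fmt"

// sloSeconds models podStartSLOduration as E2E startup time minus the
// image-pull window. Inputs are the logged E2E duration and the
// monotonic m=+ offsets (in seconds) of firstStartedPulling and
// lastFinishedPulling for cilium-operator-6f9c7c5859-fnnx8.
func sloSeconds(e2e, firstPull, lastPull float64) float64 {
	return e2e - (lastPull - firstPull)
}

func main() {
	fmt.Printf("%.9f s\n", sloSeconds(11.723820329, 8.179344073, 16.617693639))
}
```

For kube-proxy-gglzc earlier in the log, both pulling timestamps are the zero value (`0001-01-01 00:00:00`), which is why its SLO and E2E durations coincide.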
Jan 23 18:50:26.036199 kubelet[2784]: I0123 18:50:26.036156 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/273fc85d-add8-4334-ba70-cc8c9467621d-config-volume\") pod \"coredns-66bc5c9577-ljcfc\" (UID: \"273fc85d-add8-4334-ba70-cc8c9467621d\") " pod="kube-system/coredns-66bc5c9577-ljcfc"
Jan 23 18:50:26.036609 kubelet[2784]: I0123 18:50:26.036525 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fbbe659-29f3-4a3c-bf02-b377e03009df-config-volume\") pod \"coredns-66bc5c9577-plf9g\" (UID: \"8fbbe659-29f3-4a3c-bf02-b377e03009df\") " pod="kube-system/coredns-66bc5c9577-plf9g"
Jan 23 18:50:26.036969 kubelet[2784]: I0123 18:50:26.036945 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjvqx\" (UniqueName: \"kubernetes.io/projected/8fbbe659-29f3-4a3c-bf02-b377e03009df-kube-api-access-hjvqx\") pod \"coredns-66bc5c9577-plf9g\" (UID: \"8fbbe659-29f3-4a3c-bf02-b377e03009df\") " pod="kube-system/coredns-66bc5c9577-plf9g"
Jan 23 18:50:26.037177 kubelet[2784]: I0123 18:50:26.037149 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l79hh\" (UniqueName: \"kubernetes.io/projected/273fc85d-add8-4334-ba70-cc8c9467621d-kube-api-access-l79hh\") pod \"coredns-66bc5c9577-ljcfc\" (UID: \"273fc85d-add8-4334-ba70-cc8c9467621d\") " pod="kube-system/coredns-66bc5c9577-ljcfc"
Jan 23 18:50:26.306808 containerd[1626]: time="2026-01-23T18:50:26.305570107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ljcfc,Uid:273fc85d-add8-4334-ba70-cc8c9467621d,Namespace:kube-system,Attempt:0,}"
Jan 23 18:50:26.317617 containerd[1626]: time="2026-01-23T18:50:26.317475488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-plf9g,Uid:8fbbe659-29f3-4a3c-bf02-b377e03009df,Namespace:kube-system,Attempt:0,}"
Jan 23 18:50:26.687590 kubelet[2784]: I0123 18:50:26.687346 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dwcf5" podStartSLOduration=7.531031388 podStartE2EDuration="13.687320504s" podCreationTimestamp="2026-01-23 18:50:13 +0000 UTC" firstStartedPulling="2026-01-23 18:50:15.357017335 +0000 UTC m=+7.964971860" lastFinishedPulling="2026-01-23 18:50:21.513306491 +0000 UTC m=+14.121260976" observedRunningTime="2026-01-23 18:50:26.683802755 +0000 UTC m=+19.291757280" watchObservedRunningTime="2026-01-23 18:50:26.687320504 +0000 UTC m=+19.295275039"
Jan 23 18:50:27.951383 systemd-networkd[1482]: cilium_host: Link UP
Jan 23 18:50:27.951742 systemd-networkd[1482]: cilium_net: Link UP
Jan 23 18:50:27.952065 systemd-networkd[1482]: cilium_net: Gained carrier
Jan 23 18:50:27.956600 systemd-networkd[1482]: cilium_host: Gained carrier
Jan 23 18:50:27.959809 systemd-networkd[1482]: cilium_net: Gained IPv6LL
Jan 23 18:50:28.082420 systemd-networkd[1482]: cilium_host: Gained IPv6LL
Jan 23 18:50:28.154543 systemd-networkd[1482]: cilium_vxlan: Link UP
Jan 23 18:50:28.154774 systemd-networkd[1482]: cilium_vxlan: Gained carrier
Jan 23 18:50:28.435665 kernel: NET: Registered PF_ALG protocol family
Jan 23 18:50:29.525047 systemd-networkd[1482]: lxc_health: Link UP
Jan 23 18:50:29.530551 systemd-networkd[1482]: lxc_health: Gained carrier
Jan 23 18:50:29.855963 systemd-networkd[1482]: lxc0a4a74975517: Link UP
Jan 23 18:50:29.863281 kernel: eth0: renamed from tmpd2b14
Jan 23 18:50:29.868661 systemd-networkd[1482]: lxc0a4a74975517: Gained carrier
Jan 23 18:50:29.893296 kernel: eth0: renamed from tmp7d905
Jan 23 18:50:29.900080 systemd-networkd[1482]: lxc65bf333aa5ce: Link UP
Jan 23 18:50:29.902400 systemd-networkd[1482]: lxc65bf333aa5ce: Gained carrier
Jan 23 18:50:30.010392 systemd-networkd[1482]: cilium_vxlan: Gained IPv6LL
Jan 23 18:50:30.844329 systemd-networkd[1482]: lxc_health: Gained IPv6LL
Jan 23 18:50:31.483610 systemd-networkd[1482]: lxc65bf333aa5ce: Gained IPv6LL
Jan 23 18:50:31.738595 systemd-networkd[1482]: lxc0a4a74975517: Gained IPv6LL
Jan 23 18:50:32.450717 containerd[1626]: time="2026-01-23T18:50:32.450658469Z" level=info msg="connecting to shim d2b14fb62579a4cce1f9869ec9e393a7c35434df6dd441e6af9f1fe93520630a" address="unix:///run/containerd/s/bbf025fe7b6bdba7db67f10d4dc60c4f15f098357a7ff0e5eb5bc47fe710e6c9" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:50:32.477424 systemd[1]: Started cri-containerd-d2b14fb62579a4cce1f9869ec9e393a7c35434df6dd441e6af9f1fe93520630a.scope - libcontainer container d2b14fb62579a4cce1f9869ec9e393a7c35434df6dd441e6af9f1fe93520630a.
Jan 23 18:50:32.539276 containerd[1626]: time="2026-01-23T18:50:32.538679607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ljcfc,Uid:273fc85d-add8-4334-ba70-cc8c9467621d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2b14fb62579a4cce1f9869ec9e393a7c35434df6dd441e6af9f1fe93520630a\""
Jan 23 18:50:32.545263 containerd[1626]: time="2026-01-23T18:50:32.545173379Z" level=info msg="CreateContainer within sandbox \"d2b14fb62579a4cce1f9869ec9e393a7c35434df6dd441e6af9f1fe93520630a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 18:50:32.555522 containerd[1626]: time="2026-01-23T18:50:32.555235597Z" level=info msg="Container eafc68a550d096a3597210272cb8a4a8e77afac3943103e30c4a65e8253ded1f: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:50:32.563135 containerd[1626]: time="2026-01-23T18:50:32.563102209Z" level=info msg="CreateContainer within sandbox \"d2b14fb62579a4cce1f9869ec9e393a7c35434df6dd441e6af9f1fe93520630a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eafc68a550d096a3597210272cb8a4a8e77afac3943103e30c4a65e8253ded1f\""
Jan 23 18:50:32.564211 containerd[1626]: time="2026-01-23T18:50:32.564162913Z" level=info msg="StartContainer for \"eafc68a550d096a3597210272cb8a4a8e77afac3943103e30c4a65e8253ded1f\""
Jan 23 18:50:32.565410 containerd[1626]: time="2026-01-23T18:50:32.565384970Z" level=info msg="connecting to shim eafc68a550d096a3597210272cb8a4a8e77afac3943103e30c4a65e8253ded1f" address="unix:///run/containerd/s/bbf025fe7b6bdba7db67f10d4dc60c4f15f098357a7ff0e5eb5bc47fe710e6c9" protocol=ttrpc version=3
Jan 23 18:50:32.581840 containerd[1626]: time="2026-01-23T18:50:32.581769589Z" level=info msg="connecting to shim 7d905dec94febd152a103cb48d193e4a8a7b77c52aee8110755835504e340372" address="unix:///run/containerd/s/6b35c82d2f07030d1a00cda69a8f78d1b6240894725859bab37072d877bf8fe3" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:50:32.588368 systemd[1]: Started cri-containerd-eafc68a550d096a3597210272cb8a4a8e77afac3943103e30c4a65e8253ded1f.scope - libcontainer container eafc68a550d096a3597210272cb8a4a8e77afac3943103e30c4a65e8253ded1f.
Jan 23 18:50:32.605366 systemd[1]: Started cri-containerd-7d905dec94febd152a103cb48d193e4a8a7b77c52aee8110755835504e340372.scope - libcontainer container 7d905dec94febd152a103cb48d193e4a8a7b77c52aee8110755835504e340372.
Jan 23 18:50:32.634428 containerd[1626]: time="2026-01-23T18:50:32.634386656Z" level=info msg="StartContainer for \"eafc68a550d096a3597210272cb8a4a8e77afac3943103e30c4a65e8253ded1f\" returns successfully"
Jan 23 18:50:32.659725 containerd[1626]: time="2026-01-23T18:50:32.659665973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-plf9g,Uid:8fbbe659-29f3-4a3c-bf02-b377e03009df,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d905dec94febd152a103cb48d193e4a8a7b77c52aee8110755835504e340372\""
Jan 23 18:50:32.664752 containerd[1626]: time="2026-01-23T18:50:32.664404127Z" level=info msg="CreateContainer within sandbox \"7d905dec94febd152a103cb48d193e4a8a7b77c52aee8110755835504e340372\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 18:50:32.672321 containerd[1626]: time="2026-01-23T18:50:32.672303306Z" level=info msg="Container 6453c3180046d9d3107ec900029cc78bc63ccbb87b39dff53e2ddd99449f2a7a: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:50:32.680102 containerd[1626]: time="2026-01-23T18:50:32.680085442Z" level=info msg="CreateContainer within sandbox \"7d905dec94febd152a103cb48d193e4a8a7b77c52aee8110755835504e340372\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6453c3180046d9d3107ec900029cc78bc63ccbb87b39dff53e2ddd99449f2a7a\""
Jan 23 18:50:32.680483 containerd[1626]: time="2026-01-23T18:50:32.680469912Z" level=info msg="StartContainer for \"6453c3180046d9d3107ec900029cc78bc63ccbb87b39dff53e2ddd99449f2a7a\""
Jan 23 18:50:32.681451 containerd[1626]: time="2026-01-23T18:50:32.681237182Z" level=info msg="connecting to shim 6453c3180046d9d3107ec900029cc78bc63ccbb87b39dff53e2ddd99449f2a7a" address="unix:///run/containerd/s/6b35c82d2f07030d1a00cda69a8f78d1b6240894725859bab37072d877bf8fe3" protocol=ttrpc version=3
Jan 23 18:50:32.702788 systemd[1]: Started cri-containerd-6453c3180046d9d3107ec900029cc78bc63ccbb87b39dff53e2ddd99449f2a7a.scope - libcontainer container 6453c3180046d9d3107ec900029cc78bc63ccbb87b39dff53e2ddd99449f2a7a.
Jan 23 18:50:32.734053 containerd[1626]: time="2026-01-23T18:50:32.734002812Z" level=info msg="StartContainer for \"6453c3180046d9d3107ec900029cc78bc63ccbb87b39dff53e2ddd99449f2a7a\" returns successfully"
Jan 23 18:50:33.699704 kubelet[2784]: I0123 18:50:33.699610 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ljcfc" podStartSLOduration=19.699557662 podStartE2EDuration="19.699557662s" podCreationTimestamp="2026-01-23 18:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:32.68975512 +0000 UTC m=+25.297709605" watchObservedRunningTime="2026-01-23 18:50:33.699557662 +0000 UTC m=+26.307512187"
Jan 23 18:50:33.701854 kubelet[2784]: I0123 18:50:33.700351 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-plf9g" podStartSLOduration=19.700341803 podStartE2EDuration="19.700341803s" podCreationTimestamp="2026-01-23 18:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:33.69782444 +0000 UTC m=+26.305778964" watchObservedRunningTime="2026-01-23 18:50:33.700341803 +0000 UTC m=+26.308296328"
Jan 23 18:51:44.815655 systemd[1]: Started sshd@7-77.42.79.158:22-20.161.92.111:39684.service - OpenSSH per-connection server daemon (20.161.92.111:39684).
Jan 23 18:51:45.607337 sshd[4132]: Accepted publickey for core from 20.161.92.111 port 39684 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:51:45.609373 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:51:45.617578 systemd-logind[1605]: New session 8 of user core.
Jan 23 18:51:45.634537 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 18:51:46.255503 sshd[4135]: Connection closed by 20.161.92.111 port 39684
Jan 23 18:51:46.257526 sshd-session[4132]: pam_unix(sshd:session): session closed for user core
Jan 23 18:51:46.265172 systemd[1]: sshd@7-77.42.79.158:22-20.161.92.111:39684.service: Deactivated successfully.
Jan 23 18:51:46.269857 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 18:51:46.272173 systemd-logind[1605]: Session 8 logged out. Waiting for processes to exit.
Jan 23 18:51:46.274737 systemd-logind[1605]: Removed session 8.
Jan 23 18:51:51.390858 systemd[1]: Started sshd@8-77.42.79.158:22-20.161.92.111:39700.service - OpenSSH per-connection server daemon (20.161.92.111:39700).
Jan 23 18:51:52.175105 sshd[4150]: Accepted publickey for core from 20.161.92.111 port 39700 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:51:52.177694 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:51:52.187011 systemd-logind[1605]: New session 9 of user core.
Jan 23 18:51:52.192443 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 18:51:52.804147 sshd[4153]: Connection closed by 20.161.92.111 port 39700
Jan 23 18:51:52.805110 sshd-session[4150]: pam_unix(sshd:session): session closed for user core
Jan 23 18:51:52.813386 systemd[1]: sshd@8-77.42.79.158:22-20.161.92.111:39700.service: Deactivated successfully.
Jan 23 18:51:52.817079 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 18:51:52.818824 systemd-logind[1605]: Session 9 logged out. Waiting for processes to exit.
Jan 23 18:51:52.822439 systemd-logind[1605]: Removed session 9.
Jan 23 18:51:57.951628 systemd[1]: Started sshd@9-77.42.79.158:22-20.161.92.111:42156.service - OpenSSH per-connection server daemon (20.161.92.111:42156).
Jan 23 18:51:58.737338 sshd[4166]: Accepted publickey for core from 20.161.92.111 port 42156 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:51:58.739819 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:51:58.750538 systemd-logind[1605]: New session 10 of user core.
Jan 23 18:51:58.755517 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 23 18:51:59.373652 sshd[4169]: Connection closed by 20.161.92.111 port 42156
Jan 23 18:51:59.375572 sshd-session[4166]: pam_unix(sshd:session): session closed for user core
Jan 23 18:51:59.382945 systemd-logind[1605]: Session 10 logged out. Waiting for processes to exit.
Jan 23 18:51:59.386819 systemd[1]: sshd@9-77.42.79.158:22-20.161.92.111:42156.service: Deactivated successfully.
Jan 23 18:51:59.391635 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 18:51:59.394073 systemd-logind[1605]: Removed session 10.
Jan 23 18:51:59.512048 systemd[1]: Started sshd@10-77.42.79.158:22-20.161.92.111:42166.service - OpenSSH per-connection server daemon (20.161.92.111:42166).
Jan 23 18:52:00.298331 sshd[4182]: Accepted publickey for core from 20.161.92.111 port 42166 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:52:00.300193 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:52:00.309242 systemd-logind[1605]: New session 11 of user core.
Jan 23 18:52:00.316465 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 23 18:52:00.993917 sshd[4185]: Connection closed by 20.161.92.111 port 42166
Jan 23 18:52:00.994904 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:01.000933 systemd[1]: sshd@10-77.42.79.158:22-20.161.92.111:42166.service: Deactivated successfully.
Jan 23 18:52:01.005030 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 18:52:01.007877 systemd-logind[1605]: Session 11 logged out. Waiting for processes to exit.
Jan 23 18:52:01.011317 systemd-logind[1605]: Removed session 11.
Jan 23 18:52:01.131968 systemd[1]: Started sshd@11-77.42.79.158:22-20.161.92.111:42174.service - OpenSSH per-connection server daemon (20.161.92.111:42174).
Jan 23 18:52:01.920664 sshd[4195]: Accepted publickey for core from 20.161.92.111 port 42174 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:52:01.923510 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:52:01.932968 systemd-logind[1605]: New session 12 of user core.
Jan 23 18:52:01.941476 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 23 18:52:02.550661 sshd[4198]: Connection closed by 20.161.92.111 port 42174
Jan 23 18:52:02.552556 sshd-session[4195]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:02.559677 systemd-logind[1605]: Session 12 logged out. Waiting for processes to exit.
Jan 23 18:52:02.561093 systemd[1]: sshd@11-77.42.79.158:22-20.161.92.111:42174.service: Deactivated successfully.
Jan 23 18:52:02.566505 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 18:52:02.571407 systemd-logind[1605]: Removed session 12.
Jan 23 18:52:07.688565 systemd[1]: Started sshd@12-77.42.79.158:22-20.161.92.111:55482.service - OpenSSH per-connection server daemon (20.161.92.111:55482).
Jan 23 18:52:08.486467 sshd[4213]: Accepted publickey for core from 20.161.92.111 port 55482 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:52:08.489320 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:52:08.501209 systemd-logind[1605]: New session 13 of user core.
Jan 23 18:52:08.508492 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 23 18:52:09.131456 sshd[4216]: Connection closed by 20.161.92.111 port 55482
Jan 23 18:52:09.133561 sshd-session[4213]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:09.140626 systemd-logind[1605]: Session 13 logged out. Waiting for processes to exit.
Jan 23 18:52:09.142089 systemd[1]: sshd@12-77.42.79.158:22-20.161.92.111:55482.service: Deactivated successfully.
Jan 23 18:52:09.146691 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 18:52:09.149373 systemd-logind[1605]: Removed session 13.
Jan 23 18:52:09.268689 systemd[1]: Started sshd@13-77.42.79.158:22-20.161.92.111:55486.service - OpenSSH per-connection server daemon (20.161.92.111:55486).
Jan 23 18:52:10.058808 sshd[4228]: Accepted publickey for core from 20.161.92.111 port 55486 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:52:10.061348 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:52:10.070343 systemd-logind[1605]: New session 14 of user core.
Jan 23 18:52:10.077458 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 18:52:10.670024 sshd[4231]: Connection closed by 20.161.92.111 port 55486
Jan 23 18:52:10.670976 sshd-session[4228]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:10.679353 systemd[1]: sshd@13-77.42.79.158:22-20.161.92.111:55486.service: Deactivated successfully.
Jan 23 18:52:10.684184 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 18:52:10.686127 systemd-logind[1605]: Session 14 logged out. Waiting for processes to exit.
Jan 23 18:52:10.689568 systemd-logind[1605]: Removed session 14.
Jan 23 18:52:10.807981 systemd[1]: Started sshd@14-77.42.79.158:22-20.161.92.111:55488.service - OpenSSH per-connection server daemon (20.161.92.111:55488).
Jan 23 18:52:11.602729 sshd[4241]: Accepted publickey for core from 20.161.92.111 port 55488 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:52:11.605213 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:52:11.613757 systemd-logind[1605]: New session 15 of user core.
Jan 23 18:52:11.621449 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 18:52:12.756507 sshd[4244]: Connection closed by 20.161.92.111 port 55488
Jan 23 18:52:12.757824 sshd-session[4241]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:12.763557 systemd[1]: sshd@14-77.42.79.158:22-20.161.92.111:55488.service: Deactivated successfully.
Jan 23 18:52:12.767238 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 18:52:12.770421 systemd-logind[1605]: Session 15 logged out. Waiting for processes to exit.
Jan 23 18:52:12.773789 systemd-logind[1605]: Removed session 15.
Jan 23 18:52:12.892600 systemd[1]: Started sshd@15-77.42.79.158:22-20.161.92.111:44930.service - OpenSSH per-connection server daemon (20.161.92.111:44930).
Jan 23 18:52:13.680873 sshd[4259]: Accepted publickey for core from 20.161.92.111 port 44930 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:52:13.684229 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:52:13.695005 systemd-logind[1605]: New session 16 of user core.
Jan 23 18:52:13.703703 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 18:52:14.467317 sshd[4264]: Connection closed by 20.161.92.111 port 44930
Jan 23 18:52:14.469608 sshd-session[4259]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:14.476285 systemd-logind[1605]: Session 16 logged out. Waiting for processes to exit.
Jan 23 18:52:14.477645 systemd[1]: sshd@15-77.42.79.158:22-20.161.92.111:44930.service: Deactivated successfully.
Jan 23 18:52:14.481012 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 18:52:14.484704 systemd-logind[1605]: Removed session 16.
Jan 23 18:52:14.603780 systemd[1]: Started sshd@16-77.42.79.158:22-20.161.92.111:44932.service - OpenSSH per-connection server daemon (20.161.92.111:44932).
Jan 23 18:52:15.392686 sshd[4274]: Accepted publickey for core from 20.161.92.111 port 44932 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:52:15.394774 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:52:15.402043 systemd-logind[1605]: New session 17 of user core.
Jan 23 18:52:15.413475 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 18:52:16.024830 sshd[4277]: Connection closed by 20.161.92.111 port 44932
Jan 23 18:52:16.026600 sshd-session[4274]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:16.034334 systemd[1]: sshd@16-77.42.79.158:22-20.161.92.111:44932.service: Deactivated successfully.
Jan 23 18:52:16.038086 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 18:52:16.040184 systemd-logind[1605]: Session 17 logged out. Waiting for processes to exit.
Jan 23 18:52:16.042835 systemd-logind[1605]: Removed session 17.
Jan 23 18:52:21.162670 systemd[1]: Started sshd@17-77.42.79.158:22-20.161.92.111:44942.service - OpenSSH per-connection server daemon (20.161.92.111:44942).
Jan 23 18:52:21.948130 sshd[4294]: Accepted publickey for core from 20.161.92.111 port 44942 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:52:21.950695 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:52:21.959373 systemd-logind[1605]: New session 18 of user core.
Jan 23 18:52:21.967464 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 18:52:22.595588 sshd[4297]: Connection closed by 20.161.92.111 port 44942
Jan 23 18:52:22.596710 sshd-session[4294]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:22.607455 systemd[1]: sshd@17-77.42.79.158:22-20.161.92.111:44942.service: Deactivated successfully.
Jan 23 18:52:22.610993 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 18:52:22.612637 systemd-logind[1605]: Session 18 logged out. Waiting for processes to exit.
Jan 23 18:52:22.615690 systemd-logind[1605]: Removed session 18.
Jan 23 18:52:27.731894 systemd[1]: Started sshd@18-77.42.79.158:22-20.161.92.111:45780.service - OpenSSH per-connection server daemon (20.161.92.111:45780).
Jan 23 18:52:28.528885 sshd[4309]: Accepted publickey for core from 20.161.92.111 port 45780 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:52:28.531444 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:52:28.540313 systemd-logind[1605]: New session 19 of user core.
Jan 23 18:52:28.545484 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 18:52:29.134512 sshd[4312]: Connection closed by 20.161.92.111 port 45780
Jan 23 18:52:29.136557 sshd-session[4309]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:29.142189 systemd[1]: sshd@18-77.42.79.158:22-20.161.92.111:45780.service: Deactivated successfully.
Jan 23 18:52:29.146420 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 18:52:29.149529 systemd-logind[1605]: Session 19 logged out. Waiting for processes to exit.
Jan 23 18:52:29.151891 systemd-logind[1605]: Removed session 19.
Jan 23 18:52:29.283444 systemd[1]: Started sshd@19-77.42.79.158:22-20.161.92.111:45796.service - OpenSSH per-connection server daemon (20.161.92.111:45796).
Jan 23 18:52:30.069168 sshd[4323]: Accepted publickey for core from 20.161.92.111 port 45796 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:52:30.071822 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:52:30.081331 systemd-logind[1605]: New session 20 of user core.
Jan 23 18:52:30.086466 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 18:52:31.820959 containerd[1626]: time="2026-01-23T18:52:31.820595694Z" level=info msg="StopContainer for \"b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34\" with timeout 30 (s)"
Jan 23 18:52:31.822834 containerd[1626]: time="2026-01-23T18:52:31.822309671Z" level=info msg="Stop container \"b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34\" with signal terminated"
Jan 23 18:52:31.871591 systemd[1]: cri-containerd-b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34.scope: Deactivated successfully.
Jan 23 18:52:31.875098 containerd[1626]: time="2026-01-23T18:52:31.874097742Z" level=info msg="received container exit event container_id:\"b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34\" id:\"b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34\" pid:3380 exited_at:{seconds:1769194351 nanos:872440115}"
Jan 23 18:52:31.876882 containerd[1626]: time="2026-01-23T18:52:31.876808495Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 18:52:31.890141 containerd[1626]: time="2026-01-23T18:52:31.890099897Z" level=info msg="StopContainer for \"c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2\" with timeout 2 (s)"
Jan 23 18:52:31.890485 containerd[1626]: time="2026-01-23T18:52:31.890390574Z" level=info msg="Stop container \"c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2\" with signal terminated"
Jan 23 18:52:31.898760 systemd-networkd[1482]: lxc_health: Link DOWN
Jan 23 18:52:31.899060 systemd-networkd[1482]: lxc_health: Lost carrier
Jan 23 18:52:31.924576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34-rootfs.mount: Deactivated successfully.
Jan 23 18:52:31.926856 systemd[1]: cri-containerd-c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2.scope: Deactivated successfully.
Jan 23 18:52:31.927276 systemd[1]: cri-containerd-c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2.scope: Consumed 6.083s CPU time, 123.8M memory peak, 112K read from disk, 13.3M written to disk.
Jan 23 18:52:31.929134 containerd[1626]: time="2026-01-23T18:52:31.929019791Z" level=info msg="received container exit event container_id:\"c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2\" id:\"c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2\" pid:3446 exited_at:{seconds:1769194351 nanos:928863823}"
Jan 23 18:52:31.947585 containerd[1626]: time="2026-01-23T18:52:31.947465911Z" level=info msg="StopContainer for \"b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34\" returns successfully"
Jan 23 18:52:31.948749 containerd[1626]: time="2026-01-23T18:52:31.948687995Z" level=info msg="StopPodSandbox for \"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\""
Jan 23 18:52:31.948793 containerd[1626]: time="2026-01-23T18:52:31.948762203Z" level=info msg="Container to stop \"b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 18:52:31.954522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2-rootfs.mount: Deactivated successfully.
Jan 23 18:52:31.963424 containerd[1626]: time="2026-01-23T18:52:31.963361968Z" level=info msg="StopContainer for \"c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2\" returns successfully"
Jan 23 18:52:31.963915 containerd[1626]: time="2026-01-23T18:52:31.963754602Z" level=info msg="StopPodSandbox for \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\""
Jan 23 18:52:31.963915 containerd[1626]: time="2026-01-23T18:52:31.963791631Z" level=info msg="Container to stop \"df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 18:52:31.963915 containerd[1626]: time="2026-01-23T18:52:31.963798831Z" level=info msg="Container to stop \"268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 18:52:31.963915 containerd[1626]: time="2026-01-23T18:52:31.963804941Z" level=info msg="Container to stop \"c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 18:52:31.963915 containerd[1626]: time="2026-01-23T18:52:31.963811531Z" level=info msg="Container to stop \"d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 18:52:31.963915 containerd[1626]: time="2026-01-23T18:52:31.963817421Z" level=info msg="Container to stop \"dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 18:52:31.964782 systemd[1]: cri-containerd-d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217.scope: Deactivated successfully.
Jan 23 18:52:31.972891 containerd[1626]: time="2026-01-23T18:52:31.972825105Z" level=info msg="received sandbox exit event container_id:\"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\" id:\"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\" exit_status:137 exited_at:{seconds:1769194351 nanos:972555658}" monitor_name=podsandbox
Jan 23 18:52:31.977258 systemd[1]: cri-containerd-d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20.scope: Deactivated successfully.
Jan 23 18:52:31.985799 containerd[1626]: time="2026-01-23T18:52:31.985710744Z" level=info msg="received sandbox exit event container_id:\"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" id:\"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" exit_status:137 exited_at:{seconds:1769194351 nanos:985485617}" monitor_name=podsandbox
Jan 23 18:52:31.998612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217-rootfs.mount: Deactivated successfully.
Jan 23 18:52:32.005083 containerd[1626]: time="2026-01-23T18:52:32.005053891Z" level=info msg="shim disconnected" id=d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217 namespace=k8s.io
Jan 23 18:52:32.005362 containerd[1626]: time="2026-01-23T18:52:32.005348297Z" level=warning msg="cleaning up after shim disconnected" id=d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217 namespace=k8s.io
Jan 23 18:52:32.005504 containerd[1626]: time="2026-01-23T18:52:32.005445075Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 18:52:32.011324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20-rootfs.mount: Deactivated successfully.
Jan 23 18:52:32.017067 containerd[1626]: time="2026-01-23T18:52:32.016782026Z" level=info msg="shim disconnected" id=d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20 namespace=k8s.io
Jan 23 18:52:32.017067 containerd[1626]: time="2026-01-23T18:52:32.016804855Z" level=warning msg="cleaning up after shim disconnected" id=d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20 namespace=k8s.io
Jan 23 18:52:32.017067 containerd[1626]: time="2026-01-23T18:52:32.016810875Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 18:52:32.023077 containerd[1626]: time="2026-01-23T18:52:32.021587438Z" level=info msg="TearDown network for sandbox \"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\" successfully"
Jan 23 18:52:32.022842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217-shm.mount: Deactivated successfully.
Jan 23 18:52:32.023415 containerd[1626]: time="2026-01-23T18:52:32.023400682Z" level=info msg="StopPodSandbox for \"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\" returns successfully"
Jan 23 18:52:32.023806 containerd[1626]: time="2026-01-23T18:52:32.023360342Z" level=info msg="received sandbox container exit event sandbox_id:\"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\" exit_status:137 exited_at:{seconds:1769194351 nanos:972555658}" monitor_name=criService
Jan 23 18:52:32.031969 containerd[1626]: time="2026-01-23T18:52:32.031892852Z" level=info msg="received sandbox container exit event sandbox_id:\"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" exit_status:137 exited_at:{seconds:1769194351 nanos:985485617}" monitor_name=criService
Jan 23 18:52:32.032133 containerd[1626]: time="2026-01-23T18:52:32.032116179Z" level=info msg="TearDown network for sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" successfully"
Jan 23 18:52:32.032185 containerd[1626]: time="2026-01-23T18:52:32.032176398Z" level=info msg="StopPodSandbox for \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" returns successfully"
Jan 23 18:52:32.217218 kubelet[2784]: I0123 18:52:32.217145 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-etc-cni-netd\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.217218 kubelet[2784]: I0123 18:52:32.217204 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-lib-modules\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.218031 kubelet[2784]: I0123 18:52:32.217515 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 18:52:32.218031 kubelet[2784]: I0123 18:52:32.217577 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 18:52:32.218031 kubelet[2784]: I0123 18:52:32.217238 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6397d374-bcd0-4226-8fe0-e75aa843876b-cilium-config-path\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.218031 kubelet[2784]: I0123 18:52:32.217634 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-xtables-lock\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.219295 kubelet[2784]: I0123 18:52:32.218443 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-cilium-cgroup\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.219295 kubelet[2784]: I0123 18:52:32.218499 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-host-proc-sys-kernel\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.219295 kubelet[2784]: I0123 18:52:32.218525 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-bpf-maps\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.219295 kubelet[2784]: I0123 18:52:32.218557 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6397d374-bcd0-4226-8fe0-e75aa843876b-clustermesh-secrets\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.219295 kubelet[2784]: I0123 18:52:32.218581 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-cni-path\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.219295 kubelet[2784]: I0123 18:52:32.218604 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-host-proc-sys-net\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.219580 kubelet[2784]: I0123 18:52:32.218628 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqdm9\" (UniqueName: \"kubernetes.io/projected/6397d374-bcd0-4226-8fe0-e75aa843876b-kube-api-access-kqdm9\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.219580 kubelet[2784]: I0123 18:52:32.218651 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6397d374-bcd0-4226-8fe0-e75aa843876b-hubble-tls\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.219580 kubelet[2784]: I0123 18:52:32.218679 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4r9v\" (UniqueName: \"kubernetes.io/projected/5e1596e9-7629-4d69-808a-0b1c8219bc4f-kube-api-access-b4r9v\") pod \"5e1596e9-7629-4d69-808a-0b1c8219bc4f\" (UID: \"5e1596e9-7629-4d69-808a-0b1c8219bc4f\") "
Jan 23 18:52:32.219580 kubelet[2784]: I0123 18:52:32.218705 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-hostproc\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.219580 kubelet[2784]: I0123 18:52:32.218734 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e1596e9-7629-4d69-808a-0b1c8219bc4f-cilium-config-path\") pod \"5e1596e9-7629-4d69-808a-0b1c8219bc4f\" (UID: \"5e1596e9-7629-4d69-808a-0b1c8219bc4f\") "
Jan 23 18:52:32.219580 kubelet[2784]: I0123 18:52:32.218758 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-cilium-run\") pod \"6397d374-bcd0-4226-8fe0-e75aa843876b\" (UID: \"6397d374-bcd0-4226-8fe0-e75aa843876b\") "
Jan 23 18:52:32.219824 kubelet[2784]: I0123 18:52:32.218821 2784 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-etc-cni-netd\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.219824 kubelet[2784]: I0123 18:52:32.218836 2784 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-lib-modules\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.219824 kubelet[2784]: I0123 18:52:32.218888 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 18:52:32.219824 kubelet[2784]: I0123 18:52:32.218928 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 18:52:32.219824 kubelet[2784]: I0123 18:52:32.218953 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 18:52:32.220019 kubelet[2784]: I0123 18:52:32.218976 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 18:52:32.220019 kubelet[2784]: I0123 18:52:32.219000 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 18:52:32.222385 kubelet[2784]: I0123 18:52:32.222341 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-cni-path" (OuterVolumeSpecName: "cni-path") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 18:52:32.222611 kubelet[2784]: I0123 18:52:32.222402 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 18:52:32.225351 kubelet[2784]: I0123 18:52:32.225295 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6397d374-bcd0-4226-8fe0-e75aa843876b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 18:52:32.225562 kubelet[2784]: I0123 18:52:32.225370 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-hostproc" (OuterVolumeSpecName: "hostproc") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 18:52:32.228743 kubelet[2784]: I0123 18:52:32.228705 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6397d374-bcd0-4226-8fe0-e75aa843876b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 23 18:52:32.234555 kubelet[2784]: I0123 18:52:32.234504 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6397d374-bcd0-4226-8fe0-e75aa843876b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 18:52:32.237105 kubelet[2784]: I0123 18:52:32.236332 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6397d374-bcd0-4226-8fe0-e75aa843876b-kube-api-access-kqdm9" (OuterVolumeSpecName: "kube-api-access-kqdm9") pod "6397d374-bcd0-4226-8fe0-e75aa843876b" (UID: "6397d374-bcd0-4226-8fe0-e75aa843876b"). InnerVolumeSpecName "kube-api-access-kqdm9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 18:52:32.237198 kubelet[2784]: I0123 18:52:32.237169 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e1596e9-7629-4d69-808a-0b1c8219bc4f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5e1596e9-7629-4d69-808a-0b1c8219bc4f" (UID: "5e1596e9-7629-4d69-808a-0b1c8219bc4f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 18:52:32.238290 kubelet[2784]: I0123 18:52:32.238183 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e1596e9-7629-4d69-808a-0b1c8219bc4f-kube-api-access-b4r9v" (OuterVolumeSpecName: "kube-api-access-b4r9v") pod "5e1596e9-7629-4d69-808a-0b1c8219bc4f" (UID: "5e1596e9-7629-4d69-808a-0b1c8219bc4f"). InnerVolumeSpecName "kube-api-access-b4r9v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 18:52:32.319709 kubelet[2784]: I0123 18:52:32.319448 2784 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6397d374-bcd0-4226-8fe0-e75aa843876b-cilium-config-path\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.319709 kubelet[2784]: I0123 18:52:32.319498 2784 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-xtables-lock\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.319709 kubelet[2784]: I0123 18:52:32.319517 2784 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-cilium-cgroup\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.319709 kubelet[2784]: I0123 18:52:32.319531 2784 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-host-proc-sys-kernel\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.319709 kubelet[2784]: I0123 18:52:32.319547 2784 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-bpf-maps\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.319709 kubelet[2784]: I0123 18:52:32.319562 2784 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6397d374-bcd0-4226-8fe0-e75aa843876b-clustermesh-secrets\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.319709 kubelet[2784]: I0123 18:52:32.319575 2784 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-cni-path\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.319709 kubelet[2784]: I0123 18:52:32.319588 2784 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-host-proc-sys-net\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.320182 kubelet[2784]: I0123 18:52:32.319602 2784 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kqdm9\" (UniqueName: \"kubernetes.io/projected/6397d374-bcd0-4226-8fe0-e75aa843876b-kube-api-access-kqdm9\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.320182 kubelet[2784]: I0123 18:52:32.319617 2784 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6397d374-bcd0-4226-8fe0-e75aa843876b-hubble-tls\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.320182 kubelet[2784]: I0123 18:52:32.319631 2784 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b4r9v\" (UniqueName: \"kubernetes.io/projected/5e1596e9-7629-4d69-808a-0b1c8219bc4f-kube-api-access-b4r9v\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.320182 kubelet[2784]: I0123 18:52:32.319644 2784 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-hostproc\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.320182 kubelet[2784]: I0123 18:52:32.319658 2784 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e1596e9-7629-4d69-808a-0b1c8219bc4f-cilium-config-path\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.320182 kubelet[2784]: I0123 18:52:32.319671 2784 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6397d374-bcd0-4226-8fe0-e75aa843876b-cilium-run\") on node \"ci-4459-2-3-7-efa5270b02\" DevicePath \"\""
Jan 23 18:52:32.602673 kubelet[2784]: E0123 18:52:32.602463 2784 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 18:52:32.926778 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20-shm.mount: Deactivated successfully.
Jan 23 18:52:32.928753 systemd[1]: var-lib-kubelet-pods-5e1596e9\x2d7629\x2d4d69\x2d808a\x2d0b1c8219bc4f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db4r9v.mount: Deactivated successfully.
Jan 23 18:52:32.928906 systemd[1]: var-lib-kubelet-pods-6397d374\x2dbcd0\x2d4226\x2d8fe0\x2de75aa843876b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkqdm9.mount: Deactivated successfully.
Jan 23 18:52:32.929077 systemd[1]: var-lib-kubelet-pods-6397d374\x2dbcd0\x2d4226\x2d8fe0\x2de75aa843876b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 23 18:52:32.929211 systemd[1]: var-lib-kubelet-pods-6397d374\x2dbcd0\x2d4226\x2d8fe0\x2de75aa843876b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 23 18:52:33.002796 kubelet[2784]: I0123 18:52:33.002721 2784 scope.go:117] "RemoveContainer" containerID="c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2"
Jan 23 18:52:33.019847 containerd[1626]: time="2026-01-23T18:52:33.018397574Z" level=info msg="RemoveContainer for \"c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2\""
Jan 23 18:52:33.020690 systemd[1]: Removed slice kubepods-besteffort-pod5e1596e9_7629_4d69_808a_0b1c8219bc4f.slice - libcontainer container kubepods-besteffort-pod5e1596e9_7629_4d69_808a_0b1c8219bc4f.slice.
Jan 23 18:52:33.027376 systemd[1]: Removed slice kubepods-burstable-pod6397d374_bcd0_4226_8fe0_e75aa843876b.slice - libcontainer container kubepods-burstable-pod6397d374_bcd0_4226_8fe0_e75aa843876b.slice.
Jan 23 18:52:33.027641 systemd[1]: kubepods-burstable-pod6397d374_bcd0_4226_8fe0_e75aa843876b.slice: Consumed 6.227s CPU time, 124.2M memory peak, 112K read from disk, 13.3M written to disk.
Jan 23 18:52:33.033979 containerd[1626]: time="2026-01-23T18:52:33.033897213Z" level=info msg="RemoveContainer for \"c55ec77c34821adab91c33fb0234b68aceddab1105b414bd683c324bd336beb2\" returns successfully"
Jan 23 18:52:33.034301 kubelet[2784]: I0123 18:52:33.034211 2784 scope.go:117] "RemoveContainer" containerID="dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4"
Jan 23 18:52:33.037809 containerd[1626]: time="2026-01-23T18:52:33.037761538Z" level=info msg="RemoveContainer for \"dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4\""
Jan 23 18:52:33.059596 containerd[1626]: time="2026-01-23T18:52:33.059542207Z" level=info msg="RemoveContainer for \"dcfc7a5f83f4c2c407b49ddd372bbe063a85e75d81d2af470ce9a2a5eafa10c4\" returns successfully"
Jan 23 18:52:33.060094 kubelet[2784]: I0123 18:52:33.059966 2784 scope.go:117] "RemoveContainer" containerID="d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82"
Jan 23 18:52:33.069220 containerd[1626]: time="2026-01-23T18:52:33.069068362Z" level=info msg="RemoveContainer for \"d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82\""
Jan 23 18:52:33.078170 containerd[1626]: time="2026-01-23T18:52:33.078096543Z" level=info msg="RemoveContainer for \"d62b81f9e2ab659e9989cecf263133b35ea46871ad1c201ef44fcc4a6f3ebe82\" returns successfully"
Jan 23 18:52:33.078458 kubelet[2784]: I0123 18:52:33.078427 2784 scope.go:117] "RemoveContainer" containerID="268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a"
Jan 23 18:52:33.080698 containerd[1626]: time="2026-01-23T18:52:33.080667176Z" level=info msg="RemoveContainer for \"268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a\""
Jan 23 18:52:33.086393 containerd[1626]: time="2026-01-23T18:52:33.086351906Z" level=info msg="RemoveContainer for \"268910e222932ed1b0a5f15b99eccb12d9a5f8aae324a48e0fadbf9f3a464c4a\" returns successfully"
Jan 23 18:52:33.086619 kubelet[2784]: I0123 18:52:33.086578 2784 scope.go:117] "RemoveContainer" containerID="df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282"
Jan 23 18:52:33.090101 containerd[1626]: time="2026-01-23T18:52:33.089358962Z" level=info msg="RemoveContainer for \"df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282\""
Jan 23 18:52:33.094999 containerd[1626]: time="2026-01-23T18:52:33.094966982Z" level=info msg="RemoveContainer for \"df36ce700809a9cc843a6e68b9dcb6af7491195952239bdbc9b6f61807807282\" returns successfully"
Jan 23 18:52:33.095453 kubelet[2784]: I0123 18:52:33.095429 2784 scope.go:117] "RemoveContainer" containerID="b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34"
Jan 23 18:52:33.097538 containerd[1626]: time="2026-01-23T18:52:33.097491906Z" level=info msg="RemoveContainer for \"b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34\""
Jan 23 18:52:33.102665 containerd[1626]: time="2026-01-23T18:52:33.102617604Z" level=info msg="RemoveContainer for \"b1f57eea97635962a9d61a6b11e1a841229059e5d239a2667dea795048517c34\" returns successfully"
Jan 23 18:52:33.502990 kubelet[2784]: I0123 18:52:33.502927 2784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e1596e9-7629-4d69-808a-0b1c8219bc4f" path="/var/lib/kubelet/pods/5e1596e9-7629-4d69-808a-0b1c8219bc4f/volumes"
Jan 23 18:52:33.504007 kubelet[2784]: I0123 18:52:33.503963 2784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6397d374-bcd0-4226-8fe0-e75aa843876b" path="/var/lib/kubelet/pods/6397d374-bcd0-4226-8fe0-e75aa843876b/volumes"
Jan 23 18:52:33.863993 sshd[4326]: Connection closed by 20.161.92.111 port 45796
Jan 23 18:52:33.865574 sshd-session[4323]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:33.873417 systemd[1]: sshd@19-77.42.79.158:22-20.161.92.111:45796.service: Deactivated successfully.
Jan 23 18:52:33.877750 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 18:52:33.880235 systemd-logind[1605]: Session 20 logged out. Waiting for processes to exit.
Jan 23 18:52:33.882944 systemd-logind[1605]: Removed session 20.
Jan 23 18:52:34.005725 systemd[1]: Started sshd@20-77.42.79.158:22-20.161.92.111:52766.service - OpenSSH per-connection server daemon (20.161.92.111:52766).
Jan 23 18:52:34.789229 sshd[4469]: Accepted publickey for core from 20.161.92.111 port 52766 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:52:34.792049 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:52:34.802069 systemd-logind[1605]: New session 21 of user core.
Jan 23 18:52:34.807530 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 18:52:36.227224 systemd[1]: Created slice kubepods-burstable-pod6b6fed7c_a451_4096_96cd_e5d7a463843a.slice - libcontainer container kubepods-burstable-pod6b6fed7c_a451_4096_96cd_e5d7a463843a.slice.
Jan 23 18:52:36.328885 sshd[4472]: Connection closed by 20.161.92.111 port 52766
Jan 23 18:52:36.330585 sshd-session[4469]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:36.337036 systemd[1]: sshd@20-77.42.79.158:22-20.161.92.111:52766.service: Deactivated successfully.
Jan 23 18:52:36.340813 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 18:52:36.343607 systemd-logind[1605]: Session 21 logged out. Waiting for processes to exit.
Jan 23 18:52:36.347125 kubelet[2784]: I0123 18:52:36.346908 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b6fed7c-a451-4096-96cd-e5d7a463843a-hubble-tls\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.348110 kubelet[2784]: I0123 18:52:36.347346 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b6fed7c-a451-4096-96cd-e5d7a463843a-etc-cni-netd\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.348110 kubelet[2784]: I0123 18:52:36.347385 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b6fed7c-a451-4096-96cd-e5d7a463843a-host-proc-sys-kernel\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.348110 kubelet[2784]: I0123 18:52:36.347410 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b6fed7c-a451-4096-96cd-e5d7a463843a-cilium-cgroup\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.348110 kubelet[2784]: I0123 18:52:36.347431 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b6fed7c-a451-4096-96cd-e5d7a463843a-xtables-lock\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.348110 kubelet[2784]: I0123 18:52:36.347452 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b6fed7c-a451-4096-96cd-e5d7a463843a-lib-modules\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.348110 kubelet[2784]: I0123 18:52:36.347474 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b6fed7c-a451-4096-96cd-e5d7a463843a-cilium-run\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.349346 kubelet[2784]: I0123 18:52:36.347497 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b6fed7c-a451-4096-96cd-e5d7a463843a-bpf-maps\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.349346 kubelet[2784]: I0123 18:52:36.347518 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b6fed7c-a451-4096-96cd-e5d7a463843a-cni-path\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.349346 kubelet[2784]: I0123 18:52:36.347540 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b6fed7c-a451-4096-96cd-e5d7a463843a-clustermesh-secrets\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.349346 kubelet[2784]: I0123 18:52:36.347561 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b6fed7c-a451-4096-96cd-e5d7a463843a-cilium-config-path\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.349346 kubelet[2784]: I0123 18:52:36.347584 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-629km\" (UniqueName: \"kubernetes.io/projected/6b6fed7c-a451-4096-96cd-e5d7a463843a-kube-api-access-629km\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.349346 kubelet[2784]: I0123 18:52:36.347613 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6b6fed7c-a451-4096-96cd-e5d7a463843a-cilium-ipsec-secrets\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.348405 systemd-logind[1605]: Removed session 21.
Jan 23 18:52:36.349695 kubelet[2784]: I0123 18:52:36.347637 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b6fed7c-a451-4096-96cd-e5d7a463843a-hostproc\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.349695 kubelet[2784]: I0123 18:52:36.347657 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b6fed7c-a451-4096-96cd-e5d7a463843a-host-proc-sys-net\") pod \"cilium-x6wfx\" (UID: \"6b6fed7c-a451-4096-96cd-e5d7a463843a\") " pod="kube-system/cilium-x6wfx"
Jan 23 18:52:36.471376 systemd[1]: Started sshd@21-77.42.79.158:22-20.161.92.111:52772.service - OpenSSH per-connection server daemon (20.161.92.111:52772).
Jan 23 18:52:36.537643 containerd[1626]: time="2026-01-23T18:52:36.537599287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6wfx,Uid:6b6fed7c-a451-4096-96cd-e5d7a463843a,Namespace:kube-system,Attempt:0,}"
Jan 23 18:52:36.557276 containerd[1626]: time="2026-01-23T18:52:36.557160603Z" level=info msg="connecting to shim e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0" address="unix:///run/containerd/s/9fd2750a41d99612dd79c7ea552aea343743f8cae1ce2fd964dcabdb66a7d86b" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:52:36.597568 systemd[1]: Started cri-containerd-e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0.scope - libcontainer container e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0.
Jan 23 18:52:36.647057 containerd[1626]: time="2026-01-23T18:52:36.646755510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6wfx,Uid:6b6fed7c-a451-4096-96cd-e5d7a463843a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0\""
Jan 23 18:52:36.655377 containerd[1626]: time="2026-01-23T18:52:36.655317366Z" level=info msg="CreateContainer within sandbox \"e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 18:52:36.665309 containerd[1626]: time="2026-01-23T18:52:36.664584841Z" level=info msg="Container 3362652cdc41acf9333dce0768cfc301c5438e73a2b3713e502f22bc5a224d21: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:52:36.671102 containerd[1626]: time="2026-01-23T18:52:36.671048207Z" level=info msg="CreateContainer within sandbox \"e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3362652cdc41acf9333dce0768cfc301c5438e73a2b3713e502f22bc5a224d21\""
Jan 23 18:52:36.673310 containerd[1626]: time="2026-01-23T18:52:36.671960803Z" level=info msg="StartContainer for \"3362652cdc41acf9333dce0768cfc301c5438e73a2b3713e502f22bc5a224d21\""
Jan 23 18:52:36.674498 containerd[1626]: time="2026-01-23T18:52:36.674467498Z" level=info msg="connecting to shim 3362652cdc41acf9333dce0768cfc301c5438e73a2b3713e502f22bc5a224d21" address="unix:///run/containerd/s/9fd2750a41d99612dd79c7ea552aea343743f8cae1ce2fd964dcabdb66a7d86b" protocol=ttrpc version=3
Jan 23 18:52:36.713515 systemd[1]: Started cri-containerd-3362652cdc41acf9333dce0768cfc301c5438e73a2b3713e502f22bc5a224d21.scope - libcontainer container 3362652cdc41acf9333dce0768cfc301c5438e73a2b3713e502f22bc5a224d21.
Jan 23 18:52:36.775380 containerd[1626]: time="2026-01-23T18:52:36.775334192Z" level=info msg="StartContainer for \"3362652cdc41acf9333dce0768cfc301c5438e73a2b3713e502f22bc5a224d21\" returns successfully"
Jan 23 18:52:36.797336 systemd[1]: cri-containerd-3362652cdc41acf9333dce0768cfc301c5438e73a2b3713e502f22bc5a224d21.scope: Deactivated successfully.
Jan 23 18:52:36.800581 containerd[1626]: time="2026-01-23T18:52:36.800516195Z" level=info msg="received container exit event container_id:\"3362652cdc41acf9333dce0768cfc301c5438e73a2b3713e502f22bc5a224d21\" id:\"3362652cdc41acf9333dce0768cfc301c5438e73a2b3713e502f22bc5a224d21\" pid:4554 exited_at:{seconds:1769194356 nanos:799998352}"
Jan 23 18:52:37.041226 containerd[1626]: time="2026-01-23T18:52:37.041150253Z" level=info msg="CreateContainer within sandbox \"e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 18:52:37.053525 containerd[1626]: time="2026-01-23T18:52:37.052326900Z" level=info msg="Container 49a2c25a4ed7858e8c8b28946088df772d3d93affd051efaad4422f7b00c6426: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:52:37.063065 containerd[1626]: time="2026-01-23T18:52:37.062991134Z" level=info msg="CreateContainer within sandbox \"e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"49a2c25a4ed7858e8c8b28946088df772d3d93affd051efaad4422f7b00c6426\""
Jan 23 18:52:37.065016 containerd[1626]: time="2026-01-23T18:52:37.064960545Z" level=info msg="StartContainer for \"49a2c25a4ed7858e8c8b28946088df772d3d93affd051efaad4422f7b00c6426\""
Jan 23 18:52:37.068375 containerd[1626]: time="2026-01-23T18:52:37.068326466Z" level=info msg="connecting to shim 49a2c25a4ed7858e8c8b28946088df772d3d93affd051efaad4422f7b00c6426" address="unix:///run/containerd/s/9fd2750a41d99612dd79c7ea552aea343743f8cae1ce2fd964dcabdb66a7d86b" protocol=ttrpc version=3
Jan 23 18:52:37.118552 systemd[1]: Started cri-containerd-49a2c25a4ed7858e8c8b28946088df772d3d93affd051efaad4422f7b00c6426.scope - libcontainer container 49a2c25a4ed7858e8c8b28946088df772d3d93affd051efaad4422f7b00c6426.
Jan 23 18:52:37.170540 containerd[1626]: time="2026-01-23T18:52:37.170508022Z" level=info msg="StartContainer for \"49a2c25a4ed7858e8c8b28946088df772d3d93affd051efaad4422f7b00c6426\" returns successfully"
Jan 23 18:52:37.178479 systemd[1]: cri-containerd-49a2c25a4ed7858e8c8b28946088df772d3d93affd051efaad4422f7b00c6426.scope: Deactivated successfully.
Jan 23 18:52:37.180442 containerd[1626]: time="2026-01-23T18:52:37.180337438Z" level=info msg="received container exit event container_id:\"49a2c25a4ed7858e8c8b28946088df772d3d93affd051efaad4422f7b00c6426\" id:\"49a2c25a4ed7858e8c8b28946088df772d3d93affd051efaad4422f7b00c6426\" pid:4598 exited_at:{seconds:1769194357 nanos:180117081}"
Jan 23 18:52:37.284739 sshd[4489]: Accepted publickey for core from 20.161.92.111 port 52772 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:52:37.288410 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:52:37.297342 systemd-logind[1605]: New session 22 of user core.
Jan 23 18:52:37.304491 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 18:52:37.604824 kubelet[2784]: E0123 18:52:37.604608 2784 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 18:52:37.816475 sshd[4629]: Connection closed by 20.161.92.111 port 52772
Jan 23 18:52:37.818573 sshd-session[4489]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:37.827051 systemd[1]: sshd@21-77.42.79.158:22-20.161.92.111:52772.service: Deactivated successfully.
Jan 23 18:52:37.831107 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 18:52:37.832751 systemd-logind[1605]: Session 22 logged out. Waiting for processes to exit.
Jan 23 18:52:37.836099 systemd-logind[1605]: Removed session 22.
Jan 23 18:52:37.958227 systemd[1]: Started sshd@22-77.42.79.158:22-20.161.92.111:52774.service - OpenSSH per-connection server daemon (20.161.92.111:52774).
Jan 23 18:52:38.045073 containerd[1626]: time="2026-01-23T18:52:38.044428068Z" level=info msg="CreateContainer within sandbox \"e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 18:52:38.069456 containerd[1626]: time="2026-01-23T18:52:38.069409130Z" level=info msg="Container 3a2aca5d7f7426dabb00afe791537a3d79f757ea06d76a0302d2a759abcae836: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:52:38.080965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount699500667.mount: Deactivated successfully.
Jan 23 18:52:38.091916 containerd[1626]: time="2026-01-23T18:52:38.091845030Z" level=info msg="CreateContainer within sandbox \"e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3a2aca5d7f7426dabb00afe791537a3d79f757ea06d76a0302d2a759abcae836\""
Jan 23 18:52:38.094778 containerd[1626]: time="2026-01-23T18:52:38.094717338Z" level=info msg="StartContainer for \"3a2aca5d7f7426dabb00afe791537a3d79f757ea06d76a0302d2a759abcae836\""
Jan 23 18:52:38.100887 containerd[1626]: time="2026-01-23T18:52:38.100739549Z" level=info msg="connecting to shim 3a2aca5d7f7426dabb00afe791537a3d79f757ea06d76a0302d2a759abcae836" address="unix:///run/containerd/s/9fd2750a41d99612dd79c7ea552aea343743f8cae1ce2fd964dcabdb66a7d86b" protocol=ttrpc version=3
Jan 23 18:52:38.141595 systemd[1]: Started cri-containerd-3a2aca5d7f7426dabb00afe791537a3d79f757ea06d76a0302d2a759abcae836.scope - libcontainer container 3a2aca5d7f7426dabb00afe791537a3d79f757ea06d76a0302d2a759abcae836.
Jan 23 18:52:38.262734 containerd[1626]: time="2026-01-23T18:52:38.262605769Z" level=info msg="StartContainer for \"3a2aca5d7f7426dabb00afe791537a3d79f757ea06d76a0302d2a759abcae836\" returns successfully"
Jan 23 18:52:38.269884 systemd[1]: cri-containerd-3a2aca5d7f7426dabb00afe791537a3d79f757ea06d76a0302d2a759abcae836.scope: Deactivated successfully.
Jan 23 18:52:38.276281 containerd[1626]: time="2026-01-23T18:52:38.276196699Z" level=info msg="received container exit event container_id:\"3a2aca5d7f7426dabb00afe791537a3d79f757ea06d76a0302d2a759abcae836\" id:\"3a2aca5d7f7426dabb00afe791537a3d79f757ea06d76a0302d2a759abcae836\" pid:4651 exited_at:{seconds:1769194358 nanos:275891173}"
Jan 23 18:52:38.319370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a2aca5d7f7426dabb00afe791537a3d79f757ea06d76a0302d2a759abcae836-rootfs.mount: Deactivated successfully.
Jan 23 18:52:38.756313 sshd[4636]: Accepted publickey for core from 20.161.92.111 port 52774 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY
Jan 23 18:52:38.759796 sshd-session[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:52:38.769294 systemd-logind[1605]: New session 23 of user core.
Jan 23 18:52:38.778503 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 18:52:39.055704 containerd[1626]: time="2026-01-23T18:52:39.055436172Z" level=info msg="CreateContainer within sandbox \"e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 18:52:39.077569 containerd[1626]: time="2026-01-23T18:52:39.077331148Z" level=info msg="Container 562d9154478cbc91c1fcca780e3d645a0eb0af803da9d0c4424aebc9eb2b8f86: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:52:39.090846 containerd[1626]: time="2026-01-23T18:52:39.090807929Z" level=info msg="CreateContainer within sandbox \"e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"562d9154478cbc91c1fcca780e3d645a0eb0af803da9d0c4424aebc9eb2b8f86\""
Jan 23 18:52:39.092120 containerd[1626]: time="2026-01-23T18:52:39.092080140Z" level=info msg="StartContainer for \"562d9154478cbc91c1fcca780e3d645a0eb0af803da9d0c4424aebc9eb2b8f86\""
Jan 23 18:52:39.094844 containerd[1626]: time="2026-01-23T18:52:39.094814980Z" level=info msg="connecting to shim 562d9154478cbc91c1fcca780e3d645a0eb0af803da9d0c4424aebc9eb2b8f86" address="unix:///run/containerd/s/9fd2750a41d99612dd79c7ea552aea343743f8cae1ce2fd964dcabdb66a7d86b" protocol=ttrpc version=3
Jan 23 18:52:39.121413 systemd[1]: Started cri-containerd-562d9154478cbc91c1fcca780e3d645a0eb0af803da9d0c4424aebc9eb2b8f86.scope - libcontainer container 562d9154478cbc91c1fcca780e3d645a0eb0af803da9d0c4424aebc9eb2b8f86.
Jan 23 18:52:39.184408 systemd[1]: cri-containerd-562d9154478cbc91c1fcca780e3d645a0eb0af803da9d0c4424aebc9eb2b8f86.scope: Deactivated successfully.
Jan 23 18:52:39.188698 containerd[1626]: time="2026-01-23T18:52:39.188638592Z" level=info msg="received container exit event container_id:\"562d9154478cbc91c1fcca780e3d645a0eb0af803da9d0c4424aebc9eb2b8f86\" id:\"562d9154478cbc91c1fcca780e3d645a0eb0af803da9d0c4424aebc9eb2b8f86\" pid:4692 exited_at:{seconds:1769194359 nanos:187299781}"
Jan 23 18:52:39.195261 containerd[1626]: time="2026-01-23T18:52:39.194550894Z" level=info msg="StartContainer for \"562d9154478cbc91c1fcca780e3d645a0eb0af803da9d0c4424aebc9eb2b8f86\" returns successfully"
Jan 23 18:52:39.217661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-562d9154478cbc91c1fcca780e3d645a0eb0af803da9d0c4424aebc9eb2b8f86-rootfs.mount: Deactivated successfully.
Jan 23 18:52:40.061904 containerd[1626]: time="2026-01-23T18:52:40.061815831Z" level=info msg="CreateContainer within sandbox \"e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 18:52:40.083735 containerd[1626]: time="2026-01-23T18:52:40.083667547Z" level=info msg="Container cd2f159413a17e1e602f39165a2745672585e49dc9530e58de8c7f1869b4c18a: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:52:40.095355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1058568700.mount: Deactivated successfully.
Jan 23 18:52:40.102129 containerd[1626]: time="2026-01-23T18:52:40.102059703Z" level=info msg="CreateContainer within sandbox \"e2b8a6fffc53e4895b365b93d2c77d73163b5bb898abf2378f338ed2f74d28f0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cd2f159413a17e1e602f39165a2745672585e49dc9530e58de8c7f1869b4c18a\""
Jan 23 18:52:40.104577 containerd[1626]: time="2026-01-23T18:52:40.103317134Z" level=info msg="StartContainer for \"cd2f159413a17e1e602f39165a2745672585e49dc9530e58de8c7f1869b4c18a\""
Jan 23 18:52:40.105014 containerd[1626]: time="2026-01-23T18:52:40.104914220Z" level=info msg="connecting to shim cd2f159413a17e1e602f39165a2745672585e49dc9530e58de8c7f1869b4c18a" address="unix:///run/containerd/s/9fd2750a41d99612dd79c7ea552aea343743f8cae1ce2fd964dcabdb66a7d86b" protocol=ttrpc version=3
Jan 23 18:52:40.148528 systemd[1]: Started cri-containerd-cd2f159413a17e1e602f39165a2745672585e49dc9530e58de8c7f1869b4c18a.scope - libcontainer container cd2f159413a17e1e602f39165a2745672585e49dc9530e58de8c7f1869b4c18a.
Jan 23 18:52:40.222083 containerd[1626]: time="2026-01-23T18:52:40.222004070Z" level=info msg="StartContainer for \"cd2f159413a17e1e602f39165a2745672585e49dc9530e58de8c7f1869b4c18a\" returns successfully"
Jan 23 18:52:40.692336 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512))
Jan 23 18:52:41.046416 kubelet[2784]: I0123 18:52:41.045421 2784 setters.go:543] "Node became not ready" node="ci-4459-2-3-7-efa5270b02" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:52:41Z","lastTransitionTime":"2026-01-23T18:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 23 18:52:44.364408 systemd-networkd[1482]: lxc_health: Link UP
Jan 23 18:52:44.373016 systemd-networkd[1482]: lxc_health: Gained carrier
Jan 23 18:52:44.553131 kubelet[2784]: I0123 18:52:44.553083 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x6wfx" podStartSLOduration=8.553071667 podStartE2EDuration="8.553071667s" podCreationTimestamp="2026-01-23 18:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:52:41.097752458 +0000 UTC m=+153.705706983" watchObservedRunningTime="2026-01-23 18:52:44.553071667 +0000 UTC m=+157.161026162"
Jan 23 18:52:46.330850 systemd-networkd[1482]: lxc_health: Gained IPv6LL
Jan 23 18:52:50.257230 sshd[4679]: Connection closed by 20.161.92.111 port 52774
Jan 23 18:52:50.258426 sshd-session[4636]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:50.262750 systemd[1]: sshd@22-77.42.79.158:22-20.161.92.111:52774.service: Deactivated successfully.
Jan 23 18:52:50.264915 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 18:52:50.266144 systemd-logind[1605]: Session 23 logged out. Waiting for processes to exit.
Jan 23 18:52:50.268044 systemd-logind[1605]: Removed session 23.
Jan 23 18:53:07.514581 containerd[1626]: time="2026-01-23T18:53:07.514488948Z" level=info msg="StopPodSandbox for \"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\""
Jan 23 18:53:07.515224 containerd[1626]: time="2026-01-23T18:53:07.514701294Z" level=info msg="TearDown network for sandbox \"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\" successfully"
Jan 23 18:53:07.515224 containerd[1626]: time="2026-01-23T18:53:07.514721273Z" level=info msg="StopPodSandbox for \"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\" returns successfully"
Jan 23 18:53:07.516664 containerd[1626]: time="2026-01-23T18:53:07.516062432Z" level=info msg="RemovePodSandbox for \"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\""
Jan 23 18:53:07.516664 containerd[1626]: time="2026-01-23T18:53:07.516113941Z" level=info msg="Forcibly stopping sandbox \"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\""
Jan 23 18:53:07.516664 containerd[1626]: time="2026-01-23T18:53:07.516431995Z" level=info msg="TearDown network for sandbox \"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\" successfully"
Jan 23 18:53:07.522367 containerd[1626]: time="2026-01-23T18:53:07.521377656Z" level=info msg="Ensure that sandbox d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217 in task-service has been cleanup successfully"
Jan 23 18:53:07.527681 containerd[1626]: time="2026-01-23T18:53:07.527616053Z" level=info msg="RemovePodSandbox \"d9c4c4ed8fbffdce6628b8a97844db1e3437b98637faba54174658739832a217\" returns successfully"
Jan 23 18:53:07.528441 containerd[1626]: time="2026-01-23T18:53:07.528364111Z" level=info msg="StopPodSandbox for \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\""
Jan 23 18:53:07.528551 containerd[1626]: time="2026-01-23T18:53:07.528525298Z" level=info msg="TearDown network for sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" successfully"
Jan 23 18:53:07.528551 containerd[1626]: time="2026-01-23T18:53:07.528543268Z" level=info msg="StopPodSandbox for \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" returns successfully"
Jan 23 18:53:07.529080 containerd[1626]: time="2026-01-23T18:53:07.529044740Z" level=info msg="RemovePodSandbox for \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\""
Jan 23 18:53:07.529275 containerd[1626]: time="2026-01-23T18:53:07.529223747Z" level=info msg="Forcibly stopping sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\""
Jan 23 18:53:07.529547 containerd[1626]: time="2026-01-23T18:53:07.529487013Z" level=info msg="TearDown network for sandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" successfully"
Jan 23 18:53:07.531993 containerd[1626]: time="2026-01-23T18:53:07.531938403Z" level=info msg="Ensure that sandbox d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20 in task-service has been cleanup successfully"
Jan 23 18:53:07.536136 containerd[1626]: time="2026-01-23T18:53:07.536076276Z" level=info msg="RemovePodSandbox \"d06f54dd9a98ad9ef479bef7cbaebdf2d3a247a9f975a9385a78b38a58c56f20\" returns successfully"
Jan 23 18:53:24.030005 kubelet[2784]: E0123 18:53:24.029947 2784 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58736->10.0.0.2:2379: read: connection timed out"
Jan 23 18:53:24.704369 systemd[1]: cri-containerd-3ebd6505d28fb51f56c6f97686202ad46882aa5a21b07cca5e1fa0b85b4c2ecb.scope: Deactivated successfully.
Jan 23 18:53:24.706518 systemd[1]: cri-containerd-3ebd6505d28fb51f56c6f97686202ad46882aa5a21b07cca5e1fa0b85b4c2ecb.scope: Consumed 3.653s CPU time, 56.9M memory peak.
Jan 23 18:53:24.709137 containerd[1626]: time="2026-01-23T18:53:24.708479641Z" level=info msg="received container exit event container_id:\"3ebd6505d28fb51f56c6f97686202ad46882aa5a21b07cca5e1fa0b85b4c2ecb\" id:\"3ebd6505d28fb51f56c6f97686202ad46882aa5a21b07cca5e1fa0b85b4c2ecb\" pid:2641 exit_status:1 exited_at:{seconds:1769194404 nanos:707387688}"
Jan 23 18:53:24.753654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ebd6505d28fb51f56c6f97686202ad46882aa5a21b07cca5e1fa0b85b4c2ecb-rootfs.mount: Deactivated successfully.
Jan 23 18:53:25.182001 kubelet[2784]: I0123 18:53:25.181617 2784 scope.go:117] "RemoveContainer" containerID="3ebd6505d28fb51f56c6f97686202ad46882aa5a21b07cca5e1fa0b85b4c2ecb"
Jan 23 18:53:25.185184 containerd[1626]: time="2026-01-23T18:53:25.185109781Z" level=info msg="CreateContainer within sandbox \"4431292e11560f3ab14a8fcd9a7282fcc8caeae46d9fc9ca698cb54f3b892413\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 18:53:25.196288 containerd[1626]: time="2026-01-23T18:53:25.196137498Z" level=info msg="Container e49f619a668b90a96d19d8ac04ab69eff885c8a8adb66dd15896ee190cea27cd: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:53:25.208334 containerd[1626]: time="2026-01-23T18:53:25.207131926Z" level=info msg="CreateContainer within sandbox \"4431292e11560f3ab14a8fcd9a7282fcc8caeae46d9fc9ca698cb54f3b892413\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e49f619a668b90a96d19d8ac04ab69eff885c8a8adb66dd15896ee190cea27cd\""
Jan 23 18:53:25.208007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904949824.mount: Deactivated successfully.
Jan 23 18:53:25.209805 containerd[1626]: time="2026-01-23T18:53:25.209750492Z" level=info msg="StartContainer for \"e49f619a668b90a96d19d8ac04ab69eff885c8a8adb66dd15896ee190cea27cd\""
Jan 23 18:53:25.212525 containerd[1626]: time="2026-01-23T18:53:25.212469828Z" level=info msg="connecting to shim e49f619a668b90a96d19d8ac04ab69eff885c8a8adb66dd15896ee190cea27cd" address="unix:///run/containerd/s/594d35ef6fe5d31d4559ef20f207b3e78d56af895c6370e900540c25f11ca20d" protocol=ttrpc version=3
Jan 23 18:53:25.251486 systemd[1]: Started cri-containerd-e49f619a668b90a96d19d8ac04ab69eff885c8a8adb66dd15896ee190cea27cd.scope - libcontainer container e49f619a668b90a96d19d8ac04ab69eff885c8a8adb66dd15896ee190cea27cd.
Jan 23 18:53:25.346704 containerd[1626]: time="2026-01-23T18:53:25.346624166Z" level=info msg="StartContainer for \"e49f619a668b90a96d19d8ac04ab69eff885c8a8adb66dd15896ee190cea27cd\" returns successfully"
Jan 23 18:53:28.613853 systemd[1]: cri-containerd-d9c428d9d0f2b0954b728759857feb49ba527c0b2a6997481a497d8465230313.scope: Deactivated successfully.
Jan 23 18:53:28.614468 systemd[1]: cri-containerd-d9c428d9d0f2b0954b728759857feb49ba527c0b2a6997481a497d8465230313.scope: Consumed 2.285s CPU time, 23.2M memory peak.
Jan 23 18:53:28.620019 containerd[1626]: time="2026-01-23T18:53:28.619910393Z" level=info msg="received container exit event container_id:\"d9c428d9d0f2b0954b728759857feb49ba527c0b2a6997481a497d8465230313\" id:\"d9c428d9d0f2b0954b728759857feb49ba527c0b2a6997481a497d8465230313\" pid:2613 exit_status:1 exited_at:{seconds:1769194408 nanos:618007571}"
Jan 23 18:53:28.659854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9c428d9d0f2b0954b728759857feb49ba527c0b2a6997481a497d8465230313-rootfs.mount: Deactivated successfully.
Jan 23 18:53:28.892008 kubelet[2784]: E0123 18:53:28.891627 2784 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58352->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-3-7-efa5270b02.188d70ee89a38816 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-3-7-efa5270b02,UID:826fad8ef188df654393bdaedb58829c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-3-7-efa5270b02,},FirstTimestamp:2026-01-23 18:53:18.440196118 +0000 UTC m=+191.048150643,LastTimestamp:2026-01-23 18:53:18.440196118 +0000 UTC m=+191.048150643,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-7-efa5270b02,}"
Jan 23 18:53:29.196891 kubelet[2784]: I0123 18:53:29.196706 2784 scope.go:117] "RemoveContainer" containerID="d9c428d9d0f2b0954b728759857feb49ba527c0b2a6997481a497d8465230313"
Jan 23 18:53:29.199772 containerd[1626]: time="2026-01-23T18:53:29.199700433Z" level=info msg="CreateContainer within sandbox \"ba2c86f603225892481b5c244c1e9fd4365dd0d394fe7c65addddbe2dd062fe7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 18:53:29.213425 containerd[1626]: time="2026-01-23T18:53:29.213375262Z" level=info msg="Container 1a7de582050576562f47833292242e33330d170aa0aa7a2c9a3c8e0c4d26e4ab: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:53:29.226615 containerd[1626]: time="2026-01-23T18:53:29.226541580Z" level=info msg="CreateContainer within sandbox \"ba2c86f603225892481b5c244c1e9fd4365dd0d394fe7c65addddbe2dd062fe7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1a7de582050576562f47833292242e33330d170aa0aa7a2c9a3c8e0c4d26e4ab\""
Jan 23 18:53:29.227214 containerd[1626]: time="2026-01-23T18:53:29.227170991Z" level=info msg="StartContainer for \"1a7de582050576562f47833292242e33330d170aa0aa7a2c9a3c8e0c4d26e4ab\""
Jan 23 18:53:29.228965 containerd[1626]: time="2026-01-23T18:53:29.228904043Z" level=info msg="connecting to shim 1a7de582050576562f47833292242e33330d170aa0aa7a2c9a3c8e0c4d26e4ab" address="unix:///run/containerd/s/5535c248588a96f7a54b4bdc01ed573964fc7a943c5c7ca3fd6228112bd0bacc" protocol=ttrpc version=3
Jan 23 18:53:29.273491 systemd[1]: Started cri-containerd-1a7de582050576562f47833292242e33330d170aa0aa7a2c9a3c8e0c4d26e4ab.scope - libcontainer container 1a7de582050576562f47833292242e33330d170aa0aa7a2c9a3c8e0c4d26e4ab.
Jan 23 18:53:29.370586 containerd[1626]: time="2026-01-23T18:53:29.370508011Z" level=info msg="StartContainer for \"1a7de582050576562f47833292242e33330d170aa0aa7a2c9a3c8e0c4d26e4ab\" returns successfully"
Jan 23 18:53:34.031576 kubelet[2784]: E0123 18:53:34.031477 2784 controller.go:195] "Failed to update lease" err="Put \"https://77.42.79.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-7-efa5270b02?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 18:53:44.032395 kubelet[2784]: E0123 18:53:44.032204 2784 controller.go:195] "Failed to update lease" err="Put \"https://77.42.79.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-7-efa5270b02?timeout=10s\": context deadline exceeded"