Jan 24 00:38:30.958015 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:38:30.958047 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:38:30.958065 kernel: BIOS-provided physical RAM map:
Jan 24 00:38:30.958075 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 24 00:38:30.958083 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Jan 24 00:38:30.958092 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Jan 24 00:38:30.958101 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Jan 24 00:38:30.958110 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Jan 24 00:38:30.958119 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Jan 24 00:38:30.958128 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Jan 24 00:38:30.958137 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 24 00:38:30.958150 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 24 00:38:30.958159 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Jan 24 00:38:30.958168 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Jan 24 00:38:30.958178 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 24 00:38:30.958188 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 00:38:30.958202 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 24 00:38:30.958211 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Jan 24 00:38:30.958220 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 00:38:30.958229 kernel: NX (Execute Disable) protection: active
Jan 24 00:38:30.958239 kernel: APIC: Static calls initialized
Jan 24 00:38:30.958248 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 24 00:38:30.958257 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e84f198
Jan 24 00:38:30.958267 kernel: efi: Remove mem137: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 24 00:38:30.958276 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 24 00:38:30.958286 kernel: SMBIOS 3.0.0 present.
Jan 24 00:38:30.958296 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 24 00:38:30.958305 kernel: Hypervisor detected: KVM
Jan 24 00:38:30.958319 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:38:30.958328 kernel: kvm-clock: using sched offset of 12908092178 cycles
Jan 24 00:38:30.958338 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:38:30.958348 kernel: tsc: Detected 2399.998 MHz processor
Jan 24 00:38:30.958358 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:38:30.958368 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:38:30.958378 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Jan 24 00:38:30.958388 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 24 00:38:30.958397 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:38:30.958411 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Jan 24 00:38:30.958421 kernel: Using GB pages for direct mapping
Jan 24 00:38:30.958430 kernel: Secure boot disabled
Jan 24 00:38:30.958446 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:38:30.958456 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Jan 24 00:38:30.958466 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 24 00:38:30.958476 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:38:30.958490 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:38:30.958500 kernel: ACPI: FACS 0x000000007FBDD000 000040
Jan 24 00:38:30.958510 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:38:30.958534 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:38:30.958544 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:38:30.958554 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:38:30.958564 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 24 00:38:30.958578 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Jan 24 00:38:30.958588 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Jan 24 00:38:30.958598 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Jan 24 00:38:30.958608 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Jan 24 00:38:30.958618 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Jan 24 00:38:30.958628 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Jan 24 00:38:30.958637 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Jan 24 00:38:30.958647 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Jan 24 00:38:30.958657 kernel: No NUMA configuration found
Jan 24 00:38:30.958672 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Jan 24 00:38:30.958682 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Jan 24 00:38:30.958692 kernel: Zone ranges:
Jan 24 00:38:30.958702 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:38:30.958712 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 24 00:38:30.958722 kernel:   Normal   [mem 0x0000000100000000-0x0000000179ffffff]
Jan 24 00:38:30.958732 kernel: Movable zone start for each node
Jan 24 00:38:30.958741 kernel: Early memory node ranges
Jan 24 00:38:30.958751 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 24 00:38:30.958761 kernel:   node   0: [mem 0x0000000000100000-0x000000007ed3efff]
Jan 24 00:38:30.958775 kernel:   node   0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Jan 24 00:38:30.958785 kernel:   node   0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Jan 24 00:38:30.958795 kernel:   node   0: [mem 0x0000000100000000-0x0000000179ffffff]
Jan 24 00:38:30.958805 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Jan 24 00:38:30.958815 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:38:30.958869 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 24 00:38:30.958879 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 24 00:38:30.958889 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 24 00:38:30.958899 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Jan 24 00:38:30.958914 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 24 00:38:30.958923 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 00:38:30.958933 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:38:30.958943 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:38:30.958953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 00:38:30.958963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:38:30.958973 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:38:30.958982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:38:30.958992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:38:30.959007 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:38:30.959017 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:38:30.959026 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 24 00:38:30.959036 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:38:30.959046 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 24 00:38:30.959056 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:38:30.959066 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:38:30.959076 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 24 00:38:30.959086 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 24 00:38:30.959101 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 24 00:38:30.959111 kernel: pcpu-alloc: [0] 0 1
Jan 24 00:38:30.959120 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 24 00:38:30.959132 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:38:30.959142 kernel: random: crng init done
Jan 24 00:38:30.959152 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:38:30.959162 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:38:30.959172 kernel: Fallback order for Node 0: 0
Jan 24 00:38:30.959182 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Jan 24 00:38:30.959196 kernel: Policy zone: Normal
Jan 24 00:38:30.959206 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:38:30.959216 kernel: software IO TLB: area num 2.
Jan 24 00:38:30.959226 kernel: Memory: 3819396K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 271568K reserved, 0K cma-reserved)
Jan 24 00:38:30.959236 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 24 00:38:30.959246 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:38:30.959256 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:38:30.959266 kernel: Dynamic Preempt: voluntary
Jan 24 00:38:30.959276 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:38:30.959291 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:38:30.959302 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 24 00:38:30.959312 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:38:30.959335 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:38:30.959350 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:38:30.959360 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:38:30.959370 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 24 00:38:30.959381 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 24 00:38:30.959391 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:38:30.959401 kernel: Console: colour dummy device 80x25
Jan 24 00:38:30.959411 kernel: printk: console [tty0] enabled
Jan 24 00:38:30.959422 kernel: printk: console [ttyS0] enabled
Jan 24 00:38:30.959436 kernel: ACPI: Core revision 20230628
Jan 24 00:38:30.959447 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 00:38:30.959457 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:38:30.959468 kernel: x2apic enabled
Jan 24 00:38:30.959478 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:38:30.959493 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 00:38:30.959503 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 24 00:38:30.959522 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
Jan 24 00:38:30.959533 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 00:38:30.959543 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 00:38:30.959554 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 00:38:30.959564 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:38:30.959575 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 24 00:38:30.959585 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 24 00:38:30.959601 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 24 00:38:30.959611 kernel: active return thunk: srso_alias_return_thunk
Jan 24 00:38:30.959621 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Jan 24 00:38:30.959632 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 00:38:30.959642 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:38:30.959653 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:38:30.959663 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:38:30.959673 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:38:30.959688 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 24 00:38:30.959698 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 24 00:38:30.959709 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 24 00:38:30.959719 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 24 00:38:30.959729 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:38:30.959740 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 24 00:38:30.959750 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 24 00:38:30.959760 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 24 00:38:30.959771 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 24 00:38:30.959785 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 24 00:38:30.959796 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:38:30.959806 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:38:30.959816 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:38:30.964940 kernel: landlock: Up and running.
Jan 24 00:38:30.964959 kernel: SELinux:  Initializing.
Jan 24 00:38:30.964971 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:38:30.964982 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:38:30.964993 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Jan 24 00:38:30.965012 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:38:30.965022 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:38:30.965033 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:38:30.965043 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 24 00:38:30.965054 kernel: ... version:                0
Jan 24 00:38:30.965064 kernel: ... bit width:              48
Jan 24 00:38:30.965074 kernel: ... generic registers:      6
Jan 24 00:38:30.965085 kernel: ... value mask:             0000ffffffffffff
Jan 24 00:38:30.965095 kernel: ... max period:             00007fffffffffff
Jan 24 00:38:30.965110 kernel: ... fixed-purpose events:   0
Jan 24 00:38:30.965121 kernel: ... event mask:             000000000000003f
Jan 24 00:38:30.965132 kernel: signal: max sigframe size: 3376
Jan 24 00:38:30.965142 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:38:30.965153 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 24 00:38:30.965164 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:38:30.965174 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:38:30.965185 kernel: .... node  #0, CPUs:      #1
Jan 24 00:38:30.965195 kernel: smp: Brought up 1 node, 2 CPUs
Jan 24 00:38:30.965210 kernel: smpboot: Max logical packages: 1
Jan 24 00:38:30.965221 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS)
Jan 24 00:38:30.965231 kernel: devtmpfs: initialized
Jan 24 00:38:30.965241 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:38:30.965252 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Jan 24 00:38:30.965262 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:38:30.965272 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 24 00:38:30.965283 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:38:30.965293 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:38:30.965308 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:38:30.965318 kernel: audit: type=2000 audit(1769215109.454:1): state=initialized audit_enabled=0 res=1
Jan 24 00:38:30.965329 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:38:30.965339 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:38:30.965349 kernel: cpuidle: using governor menu
Jan 24 00:38:30.965360 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:38:30.965370 kernel: dca service started, version 1.12.1
Jan 24 00:38:30.965380 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 24 00:38:30.965390 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:38:30.965405 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:38:30.965416 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:38:30.965426 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:38:30.965436 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:38:30.965446 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:38:30.965456 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:38:30.965467 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:38:30.965477 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:38:30.965487 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:38:30.965502 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:38:30.965512 kernel: ACPI: Interpreter enabled
Jan 24 00:38:30.965537 kernel: ACPI: PM: (supports S0 S5)
Jan 24 00:38:30.965547 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:38:30.965557 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:38:30.965568 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:38:30.965578 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 00:38:30.965589 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:38:30.965904 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:38:30.966121 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 24 00:38:30.966315 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 24 00:38:30.966328 kernel: PCI host bridge to bus 0000:00
Jan 24 00:38:30.966530 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:38:30.966707 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:38:30.966903 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:38:30.967083 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Jan 24 00:38:30.967255 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 24 00:38:30.967426 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Jan 24 00:38:30.967611 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:38:30.967849 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 00:38:30.968056 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 24 00:38:30.968246 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Jan 24 00:38:30.968441 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Jan 24 00:38:30.968642 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Jan 24 00:38:30.968850 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 24 00:38:30.969043 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 24 00:38:30.969232 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:38:30.969432 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 24 00:38:30.969680 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Jan 24 00:38:30.969935 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 24 00:38:30.970128 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Jan 24 00:38:30.970327 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 24 00:38:30.970528 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Jan 24 00:38:30.970726 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 24 00:38:30.970958 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Jan 24 00:38:30.971186 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 24 00:38:30.971391 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Jan 24 00:38:30.971614 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 24 00:38:30.971804 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Jan 24 00:38:30.974074 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 24 00:38:30.974282 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Jan 24 00:38:30.974492 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 24 00:38:30.974695 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Jan 24 00:38:30.975055 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 24 00:38:30.975298 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Jan 24 00:38:30.975566 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 00:38:30.977635 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 00:38:30.977924 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 00:38:30.978126 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Jan 24 00:38:30.978316 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Jan 24 00:38:30.978512 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 00:38:30.978727 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Jan 24 00:38:30.979977 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 24 00:38:30.980193 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Jan 24 00:38:30.980394 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Jan 24 00:38:30.980605 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 24 00:38:30.980798 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 24 00:38:30.981008 kernel: pci 0000:00:02.0:   bridge window [mem 0x81200000-0x812fffff]
Jan 24 00:38:30.981204 kernel: pci 0000:00:02.0:   bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 24 00:38:30.981412 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 24 00:38:30.981634 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Jan 24 00:38:30.983847 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 24 00:38:30.984059 kernel: pci 0000:00:02.1:   bridge window [mem 0x81100000-0x811fffff]
Jan 24 00:38:30.984272 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 24 00:38:30.984498 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Jan 24 00:38:30.984716 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Jan 24 00:38:30.984942 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 24 00:38:30.985138 kernel: pci 0000:00:02.2:   bridge window [mem 0x81000000-0x810fffff]
Jan 24 00:38:30.985324 kernel: pci 0000:00:02.2:   bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 24 00:38:30.985544 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 24 00:38:30.985743 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Jan 24 00:38:30.986006 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 24 00:38:30.986195 kernel: pci 0000:00:02.3:   bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 24 00:38:30.986401 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 24 00:38:30.986622 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Jan 24 00:38:30.987881 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Jan 24 00:38:30.988091 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 24 00:38:30.988281 kernel: pci 0000:00:02.4:   bridge window [mem 0x80f00000-0x80ffffff]
Jan 24 00:38:30.988469 kernel: pci 0000:00:02.4:   bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 24 00:38:30.988693 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 24 00:38:30.988934 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Jan 24 00:38:30.989140 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Jan 24 00:38:30.989327 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 24 00:38:30.989527 kernel: pci 0000:00:02.5:   bridge window [mem 0x80e00000-0x80efffff]
Jan 24 00:38:30.989713 kernel: pci 0000:00:02.5:   bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 24 00:38:30.989727 kernel: acpiphp: Slot [0] registered
Jan 24 00:38:30.989956 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 24 00:38:30.990153 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Jan 24 00:38:30.990350 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 24 00:38:30.990588 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 24 00:38:30.990778 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 24 00:38:30.991237 kernel: pci 0000:00:02.6:   bridge window [mem 0x80c00000-0x80dfffff]
Jan 24 00:38:30.991430 kernel: pci 0000:00:02.6:   bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 24 00:38:30.991443 kernel: acpiphp: Slot [0-2] registered
Jan 24 00:38:30.991653 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 24 00:38:30.991868 kernel: pci 0000:00:02.7:   bridge window [mem 0x80a00000-0x80bfffff]
Jan 24 00:38:30.992058 kernel: pci 0000:00:02.7:   bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 24 00:38:30.992077 kernel: acpiphp: Slot [0-3] registered
Jan 24 00:38:30.992264 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 24 00:38:30.992451 kernel: pci 0000:00:03.0:   bridge window [mem 0x80800000-0x809fffff]
Jan 24 00:38:30.992652 kernel: pci 0000:00:03.0:   bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 24 00:38:30.992665 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:38:30.992676 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:38:30.992686 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:38:30.992696 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:38:30.992706 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 00:38:30.992722 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 00:38:30.992733 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 00:38:30.992743 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 00:38:30.992753 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:38:30.992763 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 00:38:30.992775 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 00:38:30.992786 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 00:38:30.992797 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 00:38:30.992807 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 00:38:30.992851 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 00:38:30.992862 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 00:38:30.992872 kernel: iommu: Default domain type: Translated
Jan 24 00:38:30.992883 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:38:30.992893 kernel: efivars: Registered efivars operations
Jan 24 00:38:30.992903 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:38:30.992914 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:38:30.992925 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Jan 24 00:38:30.992940 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Jan 24 00:38:30.992950 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Jan 24 00:38:30.992961 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Jan 24 00:38:30.993153 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 00:38:30.993341 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 00:38:30.993542 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:38:30.993555 kernel: vgaarb: loaded
Jan 24 00:38:30.993566 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 00:38:30.993577 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 00:38:30.993593 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:38:30.993604 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:38:30.993615 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:38:30.993625 kernel: pnp: PnP ACPI init
Jan 24 00:38:30.993851 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 24 00:38:30.993867 kernel: pnp: PnP ACPI: found 5 devices
Jan 24 00:38:30.993878 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:38:30.993888 kernel: NET: Registered PF_INET protocol family
Jan 24 00:38:30.993928 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:38:30.993944 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 00:38:30.993955 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:38:30.993966 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:38:30.993977 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 00:38:30.993987 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 00:38:30.993998 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:38:30.994009 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:38:30.994020 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:38:30.994035 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:38:30.994238 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Jan 24 00:38:30.994437 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Jan 24 00:38:30.994640 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 24 00:38:30.994962 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 24 00:38:30.995159 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 24 00:38:30.995347 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 24 00:38:30.995560 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 24 00:38:30.995756 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 24 00:38:30.995979 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Jan 24 00:38:30.996169 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 24 00:38:30.996363 kernel: pci 0000:00:02.0:   bridge window [mem 0x81200000-0x812fffff]
Jan 24 00:38:30.996564 kernel: pci 0000:00:02.0:   bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 24 00:38:30.996752 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 24 00:38:30.996962 kernel: pci 0000:00:02.1:   bridge window [mem 0x81100000-0x811fffff]
Jan 24 00:38:30.997151 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 24 00:38:30.997338 kernel: pci 0000:00:02.2:   bridge window [mem 0x81000000-0x810fffff]
Jan 24 00:38:30.997539 kernel: pci 0000:00:02.2:   bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 24 00:38:30.997728 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 24 00:38:30.997933 kernel: pci 0000:00:02.3:   bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 24 00:38:30.998128 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 24 00:38:30.998324 kernel: pci 0000:00:02.4:   bridge window [mem 0x80f00000-0x80ffffff]
Jan 24 00:38:30.998530 kernel: pci 0000:00:02.4:   bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 24 00:38:30.998721 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 24 00:38:30.998926 kernel: pci 0000:00:02.5:   bridge window [mem 0x80e00000-0x80efffff]
Jan 24 00:38:30.999114 kernel: pci 0000:00:02.5:   bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 24 00:38:30.999312 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Jan 24 00:38:30.999499 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 24 00:38:30.999703 kernel: pci 0000:00:02.6:   bridge window [io 0x1000-0x1fff]
Jan 24 00:38:31.000262 kernel: pci 0000:00:02.6:   bridge window [mem 0x80c00000-0x80dfffff]
Jan 24 00:38:31.000464 kernel: pci 0000:00:02.6:   bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 24 00:38:31.000670 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 24 00:38:31.001585 kernel: pci 0000:00:02.7:   bridge window [io 0x2000-0x2fff]
Jan 24 00:38:31.001781 kernel: pci 0000:00:02.7:   bridge window [mem 0x80a00000-0x80bfffff]
Jan 24 00:38:31.001990 kernel: pci 0000:00:02.7:   bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 24 00:38:31.002180 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 24 00:38:31.002367 kernel: pci 0000:00:03.0:   bridge window [io 0x3000-0x3fff]
Jan 24 00:38:31.002579 kernel: pci 0000:00:03.0:   bridge window [mem 0x80800000-0x809fffff]
Jan 24 00:38:31.002765 kernel: pci 0000:00:03.0:   bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 24 00:38:31.002968 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 
24 00:38:31.003150 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:38:31.003327 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:38:31.003506 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 24 00:38:31.003689 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 24 00:38:31.003905 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Jan 24 00:38:31.004107 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Jan 24 00:38:31.004291 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Jan 24 00:38:31.004483 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Jan 24 00:38:31.004694 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Jan 24 00:38:31.004912 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Jan 24 00:38:31.005104 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Jan 24 00:38:31.005297 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Jan 24 00:38:31.005481 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Jan 24 00:38:31.005689 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Jan 24 00:38:31.005967 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Jan 24 00:38:31.006160 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Jan 24 00:38:31.006345 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Jan 24 00:38:31.006547 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Jan 24 00:38:31.006736 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Jan 24 00:38:31.006953 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Jan 24 00:38:31.007137 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Jan 24 00:38:31.007333 kernel: pci_bus 0000:09: resource 0 
[io 0x3000-0x3fff] Jan 24 00:38:31.007527 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Jan 24 00:38:31.007711 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Jan 24 00:38:31.007725 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 00:38:31.007736 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:38:31.007747 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 00:38:31.007758 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Jan 24 00:38:31.007769 kernel: Initialise system trusted keyrings Jan 24 00:38:31.007786 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 00:38:31.007796 kernel: Key type asymmetric registered Jan 24 00:38:31.007807 kernel: Asymmetric key parser 'x509' registered Jan 24 00:38:31.007818 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:38:31.007845 kernel: io scheduler mq-deadline registered Jan 24 00:38:31.007856 kernel: io scheduler kyber registered Jan 24 00:38:31.007867 kernel: io scheduler bfq registered Jan 24 00:38:31.008062 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 24 00:38:31.008260 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 24 00:38:31.008455 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 24 00:38:31.008634 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 24 00:38:31.008729 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 24 00:38:31.008833 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 24 00:38:31.008929 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 24 00:38:31.009024 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 24 00:38:31.009119 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 24 00:38:31.009213 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 24 00:38:31.009312 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 24 
00:38:31.009406 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 24 00:38:31.009501 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 24 00:38:31.009604 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 24 00:38:31.009699 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 24 00:38:31.009794 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 24 00:38:31.009801 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 00:38:31.009911 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 24 00:38:31.010008 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 24 00:38:31.010015 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:38:31.010021 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 24 00:38:31.010027 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:38:31.010032 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:38:31.010038 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:38:31.010043 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:38:31.010049 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:38:31.010149 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 24 00:38:31.010159 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:38:31.010249 kernel: rtc_cmos 00:03: registered as rtc0 Jan 24 00:38:31.010339 kernel: rtc_cmos 00:03: setting system clock to 2026-01-24T00:38:30 UTC (1769215110) Jan 24 00:38:31.010428 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 24 00:38:31.010435 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 24 00:38:31.010442 kernel: efifb: probing for efifb Jan 24 00:38:31.010447 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Jan 24 00:38:31.010453 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 24 
00:38:31.010461 kernel: efifb: scrolling: redraw Jan 24 00:38:31.010469 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 24 00:38:31.010474 kernel: Console: switching to colour frame buffer device 160x50 Jan 24 00:38:31.010480 kernel: fb0: EFI VGA frame buffer device Jan 24 00:38:31.010485 kernel: pstore: Using crash dump compression: deflate Jan 24 00:38:31.010491 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:38:31.010496 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:38:31.010502 kernel: Segment Routing with IPv6 Jan 24 00:38:31.010507 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:38:31.010522 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:38:31.010528 kernel: Key type dns_resolver registered Jan 24 00:38:31.010533 kernel: IPI shorthand broadcast: enabled Jan 24 00:38:31.010539 kernel: sched_clock: Marking stable (1299005137, 189493131)->(1529147037, -40648769) Jan 24 00:38:31.010544 kernel: registered taskstats version 1 Jan 24 00:38:31.010550 kernel: Loading compiled-in X.509 certificates Jan 24 00:38:31.010555 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:38:31.010561 kernel: Key type .fscrypt registered Jan 24 00:38:31.010566 kernel: Key type fscrypt-provisioning registered Jan 24 00:38:31.010575 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 24 00:38:31.010580 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:38:31.010586 kernel: ima: No architecture policies found
Jan 24 00:38:31.010591 kernel: clk: Disabling unused clocks
Jan 24 00:38:31.010597 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:38:31.010602 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:38:31.010608 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:38:31.010613 kernel: Run /init as init process
Jan 24 00:38:31.010619 kernel: with arguments:
Jan 24 00:38:31.010627 kernel: /init
Jan 24 00:38:31.010633 kernel: with environment:
Jan 24 00:38:31.010638 kernel: HOME=/
Jan 24 00:38:31.010643 kernel: TERM=linux
Jan 24 00:38:31.010651 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:38:31.010658 systemd[1]: Detected virtualization kvm.
Jan 24 00:38:31.010664 systemd[1]: Detected architecture x86-64.
Jan 24 00:38:31.010672 systemd[1]: Running in initrd.
Jan 24 00:38:31.010678 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:38:31.010684 systemd[1]: Hostname set to .
Jan 24 00:38:31.010690 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:38:31.010695 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:38:31.010701 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:38:31.010707 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:38:31.010713 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:38:31.010721 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:38:31.010727 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:38:31.010733 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:38:31.010740 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:38:31.010746 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:38:31.010752 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:38:31.010758 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:38:31.010766 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:38:31.010772 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:38:31.010777 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:38:31.010783 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:38:31.010789 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:38:31.010795 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:38:31.010801 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:38:31.010806 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:38:31.010814 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:38:31.010832 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:38:31.010837 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:38:31.010846 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:38:31.010851 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:38:31.010857 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:38:31.010863 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:38:31.010869 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:38:31.010875 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:38:31.010883 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:38:31.010889 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:38:31.010894 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:38:31.010900 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:38:31.010906 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:38:31.010912 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:38:31.010920 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:38:31.010943 systemd-journald[188]: Collecting audit messages is disabled.
Jan 24 00:38:31.010961 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:38:31.010967 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:38:31.010973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:38:31.010978 kernel: Bridge firewalling registered
Jan 24 00:38:31.010988 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:38:31.010994 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:38:31.011000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:38:31.011006 systemd-journald[188]: Journal started
Jan 24 00:38:31.011021 systemd-journald[188]: Runtime Journal (/run/log/journal/1f857a6926db46c6b905ed44e69e64d5) is 8.0M, max 76.3M, 68.3M free.
Jan 24 00:38:31.018499 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:38:30.948073 systemd-modules-load[190]: Inserted module 'overlay'
Jan 24 00:38:31.021237 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:38:30.987669 systemd-modules-load[190]: Inserted module 'br_netfilter'
Jan 24 00:38:31.022848 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:38:31.032075 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:38:31.032693 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:38:31.034928 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:38:31.039900 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:38:31.043001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:38:31.046122 dracut-cmdline[221]: dracut-dracut-053
Jan 24 00:38:31.048722 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:38:31.074369 systemd-resolved[226]: Positive Trust Anchors:
Jan 24 00:38:31.074383 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:38:31.074406 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:38:31.078335 systemd-resolved[226]: Defaulting to hostname 'linux'.
Jan 24 00:38:31.079280 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:38:31.080285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:38:31.113857 kernel: SCSI subsystem initialized
Jan 24 00:38:31.121843 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:38:31.138871 kernel: iscsi: registered transport (tcp)
Jan 24 00:38:31.158131 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:38:31.158200 kernel: QLogic iSCSI HBA Driver
Jan 24 00:38:31.192796 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:38:31.199945 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:38:31.242679 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:38:31.242755 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:38:31.242775 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 24 00:38:31.303913 kernel: raid6: avx512x4 gen() 25083 MB/s
Jan 24 00:38:31.321915 kernel: raid6: avx512x2 gen() 34649 MB/s
Jan 24 00:38:31.339916 kernel: raid6: avx512x1 gen() 47687 MB/s
Jan 24 00:38:31.357872 kernel: raid6: avx2x4 gen() 53069 MB/s
Jan 24 00:38:31.375872 kernel: raid6: avx2x2 gen() 56767 MB/s
Jan 24 00:38:31.394645 kernel: raid6: avx2x1 gen() 46250 MB/s
Jan 24 00:38:31.394714 kernel: raid6: using algorithm avx2x2 gen() 56767 MB/s
Jan 24 00:38:31.413655 kernel: raid6: .... xor() 37239 MB/s, rmw enabled
Jan 24 00:38:31.413729 kernel: raid6: using avx512x2 recovery algorithm
Jan 24 00:38:31.429867 kernel: xor: automatically using best checksumming function avx
Jan 24 00:38:31.527861 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:38:31.538666 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:38:31.542962 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:38:31.555189 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Jan 24 00:38:31.559172 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:38:31.568974 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:38:31.578788 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Jan 24 00:38:31.603643 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:38:31.609938 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:38:31.680784 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:38:31.691098 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:38:31.705779 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:38:31.707304 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:38:31.708423 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:38:31.709530 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:38:31.716006 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:38:31.727483 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:38:31.763843 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:38:31.773388 kernel: scsi host0: Virtio SCSI HBA
Jan 24 00:38:31.780259 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:38:31.780356 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:38:31.781855 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:38:31.782578 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:38:31.785626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:38:31.786314 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:38:31.791840 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 24 00:38:31.798023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:38:31.807607 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:38:31.814120 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:38:31.828514 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:38:31.856647 kernel: libata version 3.00 loaded.
Jan 24 00:38:31.856667 kernel: ACPI: bus type USB registered
Jan 24 00:38:31.856675 kernel: usbcore: registered new interface driver usbfs
Jan 24 00:38:31.856688 kernel: usbcore: registered new interface driver hub
Jan 24 00:38:31.856695 kernel: usbcore: registered new device driver usb
Jan 24 00:38:31.856703 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 24 00:38:31.856711 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:38:31.856718 kernel: ahci 0000:00:1f.2: version 3.0
Jan 24 00:38:31.856890 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 24 00:38:31.856899 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 24 00:38:31.857013 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 24 00:38:31.828586 systemd[1]: dracut-cmdline-ask.service: Unit process 500 (dracut-cmdline-) remains running after unit stopped.
Jan 24 00:38:31.828691 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:38:31.854454 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:38:31.854562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:38:31.858217 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:38:31.865887 kernel: scsi host1: ahci
Jan 24 00:38:31.868882 kernel: scsi host2: ahci
Jan 24 00:38:31.869034 kernel: scsi host3: ahci
Jan 24 00:38:31.868479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:38:31.916630 kernel: scsi host4: ahci
Jan 24 00:38:31.916799 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 24 00:38:31.916957 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 24 00:38:31.917072 kernel: scsi host5: ahci
Jan 24 00:38:31.917191 kernel: scsi host6: ahci
Jan 24 00:38:31.917302 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 24 00:38:31.917420 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 24 00:38:31.917538 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48
Jan 24 00:38:31.917546 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48
Jan 24 00:38:31.917554 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48
Jan 24 00:38:31.917562 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48
Jan 24 00:38:31.917569 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48
Jan 24 00:38:31.917576 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48
Jan 24 00:38:31.917584 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 24 00:38:31.917696 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 24 00:38:31.918929 kernel: hub 1-0:1.0: USB hub found
Jan 24 00:38:31.919072 kernel: hub 1-0:1.0: 4 ports detected
Jan 24 00:38:31.919190 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 24 00:38:31.919317 kernel: hub 2-0:1.0: USB hub found
Jan 24 00:38:31.919441 kernel: hub 2-0:1.0: 4 ports detected
Jan 24 00:38:31.917938 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:38:31.922955 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:38:31.934718 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:38:32.136141 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 24 00:38:32.205860 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 24 00:38:32.205964 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 24 00:38:32.215347 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 24 00:38:32.215429 kernel: ata1.00: applying bridge limits
Jan 24 00:38:32.215862 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 24 00:38:32.220908 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 24 00:38:32.229897 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 24 00:38:32.229969 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 24 00:38:32.233886 kernel: ata1.00: configured for UDMA/100
Jan 24 00:38:32.238385 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 24 00:38:32.317876 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 24 00:38:32.317940 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 24 00:38:32.327633 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB)
Jan 24 00:38:32.328163 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 24 00:38:32.328507 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 24 00:38:32.328878 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 24 00:38:32.332869 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 24 00:38:32.339968 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 24 00:38:32.340005 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 24 00:38:32.340033 kernel: GPT:17805311 != 160006143
Jan 24 00:38:32.340047 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 24 00:38:32.340062 kernel: GPT:17805311 != 160006143
Jan 24 00:38:32.340075 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 24 00:38:32.340089 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:38:32.344415 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 24 00:38:32.371376 kernel: usbcore: registered new interface driver usbhid
Jan 24 00:38:32.371432 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jan 24 00:38:32.371856 kernel: usbhid: USB HID core driver
Jan 24 00:38:32.391510 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Jan 24 00:38:32.391605 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 24 00:38:32.448903 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (462)
Jan 24 00:38:32.455915 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (452)
Jan 24 00:38:32.459642 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 24 00:38:32.482424 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 24 00:38:32.492007 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 24 00:38:32.499093 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 24 00:38:32.499972 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 24 00:38:32.510937 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:38:32.518512 disk-uuid[592]: Primary Header is updated.
Jan 24 00:38:32.518512 disk-uuid[592]: Secondary Entries is updated.
Jan 24 00:38:32.518512 disk-uuid[592]: Secondary Header is updated.
Jan 24 00:38:32.523853 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:38:32.531892 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:38:32.540869 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:38:33.547012 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:38:33.547912 disk-uuid[593]: The operation has completed successfully.
Jan 24 00:38:33.641291 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:38:33.641549 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:38:33.668092 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:38:33.676137 sh[612]: Success
Jan 24 00:38:33.701909 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 24 00:38:33.791392 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:38:33.806011 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:38:33.810394 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 24 00:38:33.854132 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:38:33.854207 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:38:33.859371 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:38:33.865186 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:38:33.869572 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:38:33.884911 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 24 00:38:33.888792 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:38:33.891031 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:38:33.900147 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:38:33.904114 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:38:33.926309 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:38:33.926372 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:38:33.931289 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:38:33.943876 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 24 00:38:33.943935 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:38:33.969027 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 24 00:38:33.973713 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:38:33.982949 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:38:33.991091 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:38:34.120925 ignition[706]: Ignition 2.19.0
Jan 24 00:38:34.120948 ignition[706]: Stage: fetch-offline
Jan 24 00:38:34.121003 ignition[706]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:38:34.125031 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:38:34.121020 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 24 00:38:34.121195 ignition[706]: parsed url from cmdline: ""
Jan 24 00:38:34.121205 ignition[706]: no config URL provided
Jan 24 00:38:34.121219 ignition[706]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:38:34.121244 ignition[706]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:38:34.121254 ignition[706]: failed to fetch config: resource requires networking
Jan 24 00:38:34.121850 ignition[706]: Ignition finished successfully
Jan 24 00:38:34.161560 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:38:34.167993 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:38:34.234128 systemd-networkd[798]: lo: Link UP Jan 24 00:38:34.234144 systemd-networkd[798]: lo: Gained carrier Jan 24 00:38:34.239182 systemd-networkd[798]: Enumeration completed Jan 24 00:38:34.240114 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:38:34.240122 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:38:34.241014 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:38:34.242067 systemd[1]: Reached target network.target - Network. Jan 24 00:38:34.244017 systemd-networkd[798]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:38:34.244030 systemd-networkd[798]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:38:34.245752 systemd-networkd[798]: eth0: Link UP Jan 24 00:38:34.245765 systemd-networkd[798]: eth0: Gained carrier Jan 24 00:38:34.245788 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:38:34.252052 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 24 00:38:34.253135 systemd-networkd[798]: eth1: Link UP Jan 24 00:38:34.253148 systemd-networkd[798]: eth1: Gained carrier Jan 24 00:38:34.253170 systemd-networkd[798]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 24 00:38:34.275944 systemd-networkd[798]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 24 00:38:34.296950 ignition[800]: Ignition 2.19.0 Jan 24 00:38:34.296971 ignition[800]: Stage: fetch Jan 24 00:38:34.297232 ignition[800]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:38:34.297255 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:38:34.297416 ignition[800]: parsed url from cmdline: "" Jan 24 00:38:34.297425 ignition[800]: no config URL provided Jan 24 00:38:34.297436 ignition[800]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:38:34.297454 ignition[800]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:38:34.297484 ignition[800]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 24 00:38:34.297812 ignition[800]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 24 00:38:34.311954 systemd-networkd[798]: eth0: DHCPv4 address 46.62.237.128/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 24 00:38:34.498189 ignition[800]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 24 00:38:34.506756 ignition[800]: GET result: OK Jan 24 00:38:34.506912 ignition[800]: parsing config with SHA512: 9e9502264473cdf0c575ee473411a0f78624bf3559ad9ba1f37a66f53ab1beb36579679165b9e69878df7481eab67c46a81eb8d06c94232d916828f26e28a4df Jan 24 00:38:34.513088 unknown[800]: fetched base config from "system" Jan 24 00:38:34.513913 ignition[800]: fetch: fetch complete Jan 24 00:38:34.513117 unknown[800]: fetched base config from "system" Jan 24 00:38:34.513924 ignition[800]: fetch: fetch passed Jan 24 00:38:34.513130 unknown[800]: fetched user config from "hetzner" Jan 24 00:38:34.514000 ignition[800]: Ignition finished successfully Jan 24 00:38:34.518940 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:38:34.528081 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 24 00:38:34.558943 ignition[808]: Ignition 2.19.0 Jan 24 00:38:34.558963 ignition[808]: Stage: kargs Jan 24 00:38:34.559229 ignition[808]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:38:34.559251 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:38:34.563471 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:38:34.560725 ignition[808]: kargs: kargs passed Jan 24 00:38:34.560803 ignition[808]: Ignition finished successfully Jan 24 00:38:34.574164 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:38:34.602347 ignition[814]: Ignition 2.19.0 Jan 24 00:38:34.602368 ignition[814]: Stage: disks Jan 24 00:38:34.602667 ignition[814]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:38:34.602690 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:38:34.607344 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:38:34.603993 ignition[814]: disks: disks passed Jan 24 00:38:34.604075 ignition[814]: Ignition finished successfully Jan 24 00:38:34.609936 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:38:34.611609 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:38:34.613651 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:38:34.615392 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:38:34.617256 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:38:34.626112 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:38:34.658500 systemd-fsck[823]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 24 00:38:34.664386 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:38:34.672011 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 24 00:38:34.803080 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:38:34.803173 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:38:34.804119 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:38:34.812892 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:38:34.815892 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:38:34.818989 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 24 00:38:34.819696 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:38:34.820416 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:38:34.830444 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:38:34.839867 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (831) Jan 24 00:38:34.841661 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:38:34.857045 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:38:34.857061 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:38:34.857070 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:38:34.865387 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:38:34.865418 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:38:34.868981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 24 00:38:34.905843 initrd-setup-root[859]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:38:34.912999 initrd-setup-root[866]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:38:34.920418 initrd-setup-root[873]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:38:34.923484 coreos-metadata[833]: Jan 24 00:38:34.923 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 24 00:38:34.924358 coreos-metadata[833]: Jan 24 00:38:34.924 INFO Fetch successful Jan 24 00:38:34.925685 coreos-metadata[833]: Jan 24 00:38:34.924 INFO wrote hostname ci-4081-3-6-n-3213f37a88 to /sysroot/etc/hostname Jan 24 00:38:34.927609 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:38:34.929979 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:38:35.091761 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:38:35.099971 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:38:35.109103 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:38:35.124634 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:38:35.131443 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:38:35.160222 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:38:35.174925 ignition[950]: INFO : Ignition 2.19.0 Jan 24 00:38:35.174925 ignition[950]: INFO : Stage: mount Jan 24 00:38:35.176723 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:38:35.176723 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:38:35.178258 ignition[950]: INFO : mount: mount passed Jan 24 00:38:35.178258 ignition[950]: INFO : Ignition finished successfully Jan 24 00:38:35.180924 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 24 00:38:35.190979 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:38:35.219015 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:38:35.257893 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (961) Jan 24 00:38:35.266451 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:38:35.266526 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:38:35.272410 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:38:35.289900 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:38:35.289985 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:38:35.296762 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:38:35.337403 ignition[979]: INFO : Ignition 2.19.0 Jan 24 00:38:35.337403 ignition[979]: INFO : Stage: files Jan 24 00:38:35.339787 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:38:35.339787 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:38:35.339787 ignition[979]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:38:35.342338 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:38:35.342338 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:38:35.345640 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:38:35.346561 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:38:35.347332 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:38:35.346725 unknown[979]: wrote ssh authorized keys file for user: core Jan 24 00:38:35.350226 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): 
[started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 00:38:35.351049 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 00:38:35.351049 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:38:35.351049 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:38:35.561229 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 24 00:38:35.868310 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:38:35.868310 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 24 00:38:35.871341 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 24 00:38:36.128339 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 24 00:38:36.144996 systemd-networkd[798]: eth1: Gained IPv6LL Jan 24 00:38:36.209970 systemd-networkd[798]: eth0: Gained IPv6LL Jan 24 00:38:36.272790 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 24 00:38:36.273270 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:38:36.273270 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:38:36.273270 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file 
"/sysroot/home/core/nginx.yaml" Jan 24 00:38:36.274248 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:38:36.274248 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:38:36.274248 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:38:36.274248 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:38:36.274248 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:38:36.274248 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:38:36.274248 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:38:36.274248 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:38:36.277423 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:38:36.277423 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:38:36.277423 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:38:36.590719 ignition[979]: INFO : files: createFilesystemsFiles: 
createFiles: op(c): GET result: OK Jan 24 00:38:36.963673 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:38:36.963673 ignition[979]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 24 00:38:36.966710 ignition[979]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:38:36.966710 ignition[979]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:38:36.966710 ignition[979]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 24 00:38:36.966710 ignition[979]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 24 00:38:36.966710 ignition[979]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:38:36.966710 ignition[979]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:38:36.966710 ignition[979]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 24 00:38:36.966710 ignition[979]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 24 00:38:36.966710 ignition[979]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:38:36.966710 ignition[979]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:38:36.966710 ignition[979]: INFO : files: op(11): [finished] 
processing unit "coreos-metadata.service" Jan 24 00:38:36.966710 ignition[979]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:38:36.966710 ignition[979]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:38:36.966710 ignition[979]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:38:36.966710 ignition[979]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:38:36.966710 ignition[979]: INFO : files: files passed Jan 24 00:38:36.966710 ignition[979]: INFO : Ignition finished successfully Jan 24 00:38:36.968118 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:38:36.974983 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:38:36.983102 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:38:36.989424 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:38:36.990925 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:38:37.015720 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:38:37.017955 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:38:37.017955 initrd-setup-root-after-ignition[1006]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:38:37.019103 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:38:37.020293 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:38:37.026177 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jan 24 00:38:37.055716 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:38:37.055979 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:38:37.057727 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:38:37.059098 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:38:37.060932 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:38:37.065003 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:38:37.102909 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:38:37.107946 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:38:37.135765 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:38:37.136698 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:38:37.137583 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:38:37.138089 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:38:37.138172 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:38:37.140364 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:38:37.141731 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:38:37.143008 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:38:37.144244 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:38:37.145497 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:38:37.146784 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:38:37.148015 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 24 00:38:37.149266 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:38:37.150533 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:38:37.151811 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:38:37.153081 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:38:37.153157 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:38:37.155290 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:38:37.155695 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:38:37.157164 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:38:37.157281 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:38:37.158624 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:38:37.158728 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:38:37.160981 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:38:37.161093 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:38:37.162573 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:38:37.162669 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:38:37.164026 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 24 00:38:37.164122 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:38:37.173369 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:38:37.176030 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:38:37.176671 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:38:37.176842 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 24 00:38:37.180622 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:38:37.180764 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:38:37.187496 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:38:37.188363 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:38:37.200577 ignition[1030]: INFO : Ignition 2.19.0 Jan 24 00:38:37.200577 ignition[1030]: INFO : Stage: umount Jan 24 00:38:37.203932 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:38:37.203932 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:38:37.203932 ignition[1030]: INFO : umount: umount passed Jan 24 00:38:37.203932 ignition[1030]: INFO : Ignition finished successfully Jan 24 00:38:37.204640 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:38:37.204816 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:38:37.210465 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:38:37.210620 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:38:37.211992 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:38:37.212067 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:38:37.214106 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:38:37.214174 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:38:37.216057 systemd[1]: Stopped target network.target - Network. Jan 24 00:38:37.217050 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:38:37.217128 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:38:37.218098 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:38:37.218755 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 24 00:38:37.219894 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:38:37.223503 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:38:37.227404 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:38:37.228538 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:38:37.228627 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:38:37.229472 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:38:37.229538 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:38:37.231856 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:38:37.231935 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:38:37.232786 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:38:37.232888 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:38:37.234445 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:38:37.237304 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:38:37.238649 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:38:37.238887 systemd-networkd[798]: eth1: DHCPv6 lease lost Jan 24 00:38:37.245924 systemd-networkd[798]: eth0: DHCPv6 lease lost Jan 24 00:38:37.247596 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:38:37.247706 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:38:37.248691 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:38:37.248755 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:38:37.252988 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:38:37.254856 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 24 00:38:37.254901 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:38:37.256605 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:38:37.264485 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:38:37.264632 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:38:37.273071 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:38:37.273228 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:38:37.280325 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:38:37.280595 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:38:37.282319 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:38:37.282484 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:38:37.286338 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:38:37.286405 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:38:37.287610 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:38:37.287650 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:38:37.288737 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:38:37.288789 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:38:37.290709 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:38:37.290757 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:38:37.292595 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:38:37.292655 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:38:37.294510 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jan 24 00:38:37.294569 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:38:37.310960 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:38:37.311307 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:38:37.311354 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:38:37.311719 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:38:37.311752 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:38:37.312114 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:38:37.312147 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:38:37.312492 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 00:38:37.312523 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:38:37.312912 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:38:37.312947 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:38:37.313993 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:38:37.314029 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:38:37.316263 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:38:37.316303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:38:37.323017 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:38:37.323115 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:38:37.324085 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:38:37.330917 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jan 24 00:38:37.338022 systemd[1]: Switching root. Jan 24 00:38:37.366192 systemd-journald[188]: Journal stopped Jan 24 00:38:38.624058 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Jan 24 00:38:38.624124 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:38:38.624135 kernel: SELinux: policy capability open_perms=1 Jan 24 00:38:38.624144 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:38:38.624156 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:38:38.624170 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:38:38.624182 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:38:38.624191 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:38:38.624199 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:38:38.624207 kernel: audit: type=1403 audit(1769215117.566:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:38:38.624225 systemd[1]: Successfully loaded SELinux policy in 44.144ms. Jan 24 00:38:38.624239 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.251ms. Jan 24 00:38:38.624251 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:38:38.624260 systemd[1]: Detected virtualization kvm. Jan 24 00:38:38.624269 systemd[1]: Detected architecture x86-64. Jan 24 00:38:38.624278 systemd[1]: Detected first boot. Jan 24 00:38:38.624286 systemd[1]: Hostname set to . Jan 24 00:38:38.624295 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:38:38.624303 zram_generator::config[1094]: No configuration found. Jan 24 00:38:38.624313 systemd[1]: Populated /etc with preset unit settings. 
Jan 24 00:38:38.624325 systemd[1]: Queued start job for default target multi-user.target.
Jan 24 00:38:38.624333 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 24 00:38:38.624343 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 00:38:38.624351 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 00:38:38.624360 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 00:38:38.624369 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 00:38:38.624378 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 00:38:38.624386 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 00:38:38.624395 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 00:38:38.624406 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 00:38:38.624415 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:38:38.624425 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:38:38.624434 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 00:38:38.624443 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 00:38:38.624452 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 00:38:38.624460 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:38:38.624469 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 00:38:38.624478 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:38:38.624488 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 24 00:38:38.624497 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:38:38.624506 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:38:38.624515 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:38:38.624524 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:38:38.624533 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 24 00:38:38.624544 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 24 00:38:38.624553 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:38:38.624561 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:38:38.624579 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:38:38.624588 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:38:38.624599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:38:38.624608 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 24 00:38:38.624617 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 24 00:38:38.624626 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 24 00:38:38.624635 systemd[1]: Mounting media.mount - External Media Directory...
Jan 24 00:38:38.624646 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:38:38.624655 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 24 00:38:38.624664 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 24 00:38:38.624675 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 24 00:38:38.624683 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 24 00:38:38.624692 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:38:38.624701 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:38:38.624712 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 24 00:38:38.624721 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:38:38.624730 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:38:38.624739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:38:38.624747 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 24 00:38:38.624756 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:38:38.624765 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 24 00:38:38.624773 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 24 00:38:38.624785 kernel: fuse: init (API version 7.39)
Jan 24 00:38:38.624794 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 24 00:38:38.624803 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:38:38.624811 kernel: loop: module loaded
Jan 24 00:38:38.624837 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:38:38.624846 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 24 00:38:38.624855 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 24 00:38:38.624864 kernel: ACPI: bus type drm_connector registered
Jan 24 00:38:38.624872 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:38:38.624884 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:38:38.624893 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 24 00:38:38.624901 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 24 00:38:38.624924 systemd-journald[1181]: Collecting audit messages is disabled.
Jan 24 00:38:38.624944 systemd[1]: Mounted media.mount - External Media Directory.
Jan 24 00:38:38.624953 systemd-journald[1181]: Journal started
Jan 24 00:38:38.624971 systemd-journald[1181]: Runtime Journal (/run/log/journal/1f857a6926db46c6b905ed44e69e64d5) is 8.0M, max 76.3M, 68.3M free.
Jan 24 00:38:38.627808 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:38:38.630988 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 24 00:38:38.631519 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 24 00:38:38.632032 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 24 00:38:38.633058 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 24 00:38:38.633805 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:38:38.634535 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 24 00:38:38.634756 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 24 00:38:38.635473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:38:38.635683 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:38:38.636399 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:38:38.636557 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:38:38.637338 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:38:38.637531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:38:38.638232 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 24 00:38:38.638427 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 24 00:38:38.639390 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:38:38.639615 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:38:38.640685 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:38:38.641393 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 00:38:38.642109 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 24 00:38:38.654698 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 24 00:38:38.662929 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 24 00:38:38.666919 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 24 00:38:38.668902 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 24 00:38:38.674955 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 24 00:38:38.680981 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 24 00:38:38.681386 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:38:38.688887 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 24 00:38:38.690219 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:38:38.698952 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:38:38.701050 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:38:38.706412 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 24 00:38:38.710470 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 24 00:38:38.722092 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 24 00:38:38.722657 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 24 00:38:38.727619 systemd-journald[1181]: Time spent on flushing to /var/log/journal/1f857a6926db46c6b905ed44e69e64d5 is 26.965ms for 1181 entries.
Jan 24 00:38:38.727619 systemd-journald[1181]: System Journal (/var/log/journal/1f857a6926db46c6b905ed44e69e64d5) is 8.0M, max 584.8M, 576.8M free.
Jan 24 00:38:38.764629 systemd-journald[1181]: Received client request to flush runtime journal.
Jan 24 00:38:38.760087 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:38:38.767063 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 24 00:38:38.787354 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Jan 24 00:38:38.787377 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Jan 24 00:38:38.800080 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:38:38.807974 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 24 00:38:38.833305 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:38:38.843998 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 24 00:38:38.853746 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 24 00:38:38.861129 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:38:38.866977 udevadm[1252]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 24 00:38:38.882717 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Jan 24 00:38:38.883097 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Jan 24 00:38:38.889511 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:38:39.086069 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 24 00:38:39.097026 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:38:39.133804 systemd-udevd[1261]: Using default interface naming scheme 'v255'.
Jan 24 00:38:39.162490 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:38:39.172983 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:38:39.193983 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 24 00:38:39.225531 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 24 00:38:39.247389 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 24 00:38:39.324288 systemd-networkd[1265]: lo: Link UP
Jan 24 00:38:39.324553 systemd-networkd[1265]: lo: Gained carrier
Jan 24 00:38:39.327533 systemd-networkd[1265]: Enumeration completed
Jan 24 00:38:39.327774 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:38:39.329852 systemd-networkd[1265]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:38:39.329898 systemd-networkd[1265]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:38:39.330665 systemd-networkd[1265]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:38:39.330717 systemd-networkd[1265]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:38:39.331944 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 24 00:38:39.332773 systemd-networkd[1265]: eth0: Link UP
Jan 24 00:38:39.332777 systemd-networkd[1265]: eth0: Gained carrier
Jan 24 00:38:39.332788 systemd-networkd[1265]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:38:39.336876 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 24 00:38:39.338782 systemd-networkd[1265]: eth1: Link UP
Jan 24 00:38:39.338861 systemd-networkd[1265]: eth1: Gained carrier
Jan 24 00:38:39.338896 systemd-networkd[1265]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:38:39.343081 systemd-networkd[1265]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:38:39.359798 systemd-networkd[1265]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:38:39.366877 kernel: ACPI: button: Power Button [PWRF]
Jan 24 00:38:39.375021 systemd-networkd[1265]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Jan 24 00:38:39.383688 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 24 00:38:39.384295 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:38:39.384451 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:38:39.390084 systemd-networkd[1265]: eth0: DHCPv4 address 46.62.237.128/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 24 00:38:39.391136 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:38:39.394046 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:38:39.397236 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:38:39.398688 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 24 00:38:39.398762 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 24 00:38:39.398818 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:38:39.399207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:38:39.400071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:38:39.407846 kernel: mousedev: PS/2 mouse device common for all mice
Jan 24 00:38:39.416566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:38:39.416783 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:38:39.418667 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:38:39.425078 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:38:39.425308 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:38:39.428300 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:38:39.434854 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1281)
Jan 24 00:38:39.438859 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Jan 24 00:38:39.446052 kernel: Console: switching to colour dummy device 80x25
Jan 24 00:38:39.450590 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Jan 24 00:38:39.456457 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 24 00:38:39.456473 kernel: [drm] features: -context_init
Jan 24 00:38:39.478856 kernel: [drm] number of scanouts: 1
Jan 24 00:38:39.485868 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Jan 24 00:38:39.493991 kernel: EDAC MC: Ver: 3.0.0
Jan 24 00:38:39.500169 kernel: [drm] number of cap sets: 0
Jan 24 00:38:39.500067 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 24 00:38:39.519404 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 24 00:38:39.519452 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 24 00:38:39.519667 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 24 00:38:39.519800 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 24 00:38:39.520222 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 24 00:38:39.520335 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 24 00:38:39.520346 kernel: Console: switching to colour frame buffer device 160x50
Jan 24 00:38:39.532865 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 24 00:38:39.533158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:38:39.543030 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:38:39.543323 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:38:39.563039 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:38:39.617549 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:38:39.655860 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 24 00:38:39.663152 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 24 00:38:39.676430 lvm[1334]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:38:39.708390 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 24 00:38:39.709090 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:38:39.713012 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 24 00:38:39.720182 lvm[1337]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:38:39.752501 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 24 00:38:39.753260 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:38:39.753400 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 24 00:38:39.753435 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:38:39.753532 systemd[1]: Reached target machines.target - Containers.
Jan 24 00:38:39.755801 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 24 00:38:39.762996 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 24 00:38:39.765069 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 24 00:38:39.766689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:38:39.767968 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 24 00:38:39.773070 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 24 00:38:39.783995 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 24 00:38:39.797267 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 24 00:38:39.819023 kernel: loop0: detected capacity change from 0 to 142488
Jan 24 00:38:39.826041 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 24 00:38:39.836775 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 24 00:38:39.840015 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 24 00:38:39.862879 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 24 00:38:39.888165 kernel: loop1: detected capacity change from 0 to 224512
Jan 24 00:38:39.925499 kernel: loop2: detected capacity change from 0 to 8
Jan 24 00:38:39.941528 kernel: loop3: detected capacity change from 0 to 140768
Jan 24 00:38:39.976866 kernel: loop4: detected capacity change from 0 to 142488
Jan 24 00:38:40.001852 kernel: loop5: detected capacity change from 0 to 224512
Jan 24 00:38:40.018863 kernel: loop6: detected capacity change from 0 to 8
Jan 24 00:38:40.022852 kernel: loop7: detected capacity change from 0 to 140768
Jan 24 00:38:40.041185 (sd-merge)[1358]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 24 00:38:40.041883 (sd-merge)[1358]: Merged extensions into '/usr'.
Jan 24 00:38:40.044969 systemd[1]: Reloading requested from client PID 1345 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 24 00:38:40.045049 systemd[1]: Reloading...
Jan 24 00:38:40.112412 zram_generator::config[1387]: No configuration found.
Jan 24 00:38:40.190570 ldconfig[1341]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 24 00:38:40.230395 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:38:40.293801 systemd[1]: Reloading finished in 248 ms.
Jan 24 00:38:40.312519 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 24 00:38:40.318348 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 24 00:38:40.327934 systemd[1]: Starting ensure-sysext.service...
Jan 24 00:38:40.330958 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:38:40.340779 systemd[1]: Reloading requested from client PID 1436 ('systemctl') (unit ensure-sysext.service)...
Jan 24 00:38:40.340870 systemd[1]: Reloading...
Jan 24 00:38:40.376034 systemd-tmpfiles[1437]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 24 00:38:40.376751 systemd-tmpfiles[1437]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 24 00:38:40.378995 systemd-tmpfiles[1437]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 24 00:38:40.379546 systemd-tmpfiles[1437]: ACLs are not supported, ignoring.
Jan 24 00:38:40.379724 systemd-tmpfiles[1437]: ACLs are not supported, ignoring.
Jan 24 00:38:40.386448 systemd-tmpfiles[1437]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:38:40.386472 systemd-tmpfiles[1437]: Skipping /boot
Jan 24 00:38:40.406312 systemd-tmpfiles[1437]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:38:40.406338 systemd-tmpfiles[1437]: Skipping /boot
Jan 24 00:38:40.430335 zram_generator::config[1467]: No configuration found.
Jan 24 00:38:40.497225 systemd-networkd[1265]: eth1: Gained IPv6LL
Jan 24 00:38:40.526935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:38:40.590617 systemd[1]: Reloading finished in 249 ms.
Jan 24 00:38:40.610285 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 24 00:38:40.618345 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:38:40.623520 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 00:38:40.627950 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 24 00:38:40.632357 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 24 00:38:40.644887 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:38:40.649675 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 24 00:38:40.657597 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:38:40.658294 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:38:40.662027 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:38:40.677066 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:38:40.685330 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:38:40.686861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:38:40.686996 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:38:40.698336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:38:40.698578 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:38:40.705982 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:38:40.706174 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:38:40.711650 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:38:40.711818 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:38:40.723204 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:38:40.730044 augenrules[1543]: No rules
Jan 24 00:38:40.733219 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:38:40.733697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:38:40.733772 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:38:40.737786 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 00:38:40.740209 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 24 00:38:40.745223 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 24 00:38:40.747065 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:38:40.747233 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:38:40.748668 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:38:40.749417 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:38:40.753341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:38:40.754012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:38:40.760474 systemd-resolved[1529]: Positive Trust Anchors:
Jan 24 00:38:40.761018 systemd-resolved[1529]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:38:40.761100 systemd-resolved[1529]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:38:40.765896 systemd-resolved[1529]: Using system hostname 'ci-4081-3-6-n-3213f37a88'.
Jan 24 00:38:40.767190 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:38:40.767409 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:38:40.773061 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:38:40.776210 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:38:40.782576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:38:40.795174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:38:40.797106 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:38:40.806096 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 24 00:38:40.806701 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:38:40.809255 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:38:40.811967 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 24 00:38:40.812810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:38:40.813047 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:38:40.813964 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:38:40.814192 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:38:40.815032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:38:40.815234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:38:40.816059 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:38:40.818359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:38:40.824541 systemd[1]: Finished ensure-sysext.service.
Jan 24 00:38:40.832069 systemd[1]: Reached target network.target - Network.
Jan 24 00:38:40.833407 systemd[1]: Reached target network-online.target - Network is Online.
Jan 24 00:38:40.833707 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:38:40.835997 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:38:40.836055 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:38:40.843942 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:38:40.844937 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:38:40.845279 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:38:40.902859 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:38:40.905409 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:38:40.906159 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:38:40.906722 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:38:40.907249 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:38:40.907799 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:38:40.907857 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:38:40.908337 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:38:40.909094 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:38:40.909686 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jan 24 00:38:40.910035 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:38:40.911673 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:38:40.913875 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:38:40.916237 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:38:40.917674 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:38:40.918016 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:38:40.918312 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:38:40.918769 systemd[1]: System is tainted: cgroupsv1 Jan 24 00:38:40.918793 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:38:40.918813 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:38:40.921894 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:38:40.928964 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:38:40.930797 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:38:40.934914 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:38:40.946224 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:38:40.946675 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:38:40.949274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:38:40.951951 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 24 00:38:40.955681 coreos-metadata[1593]: Jan 24 00:38:40.955 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 24 00:38:40.957382 coreos-metadata[1593]: Jan 24 00:38:40.957 INFO Fetch successful Jan 24 00:38:40.958154 coreos-metadata[1593]: Jan 24 00:38:40.958 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 24 00:38:40.958476 coreos-metadata[1593]: Jan 24 00:38:40.958 INFO Fetch successful Jan 24 00:38:40.968732 jq[1596]: false Jan 24 00:38:40.966807 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:38:40.974572 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:38:40.979956 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 24 00:38:40.983282 dbus-daemon[1595]: [system] SELinux support is enabled Jan 24 00:38:40.987234 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:38:40.995978 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 24 00:38:41.007912 extend-filesystems[1599]: Found loop4 Jan 24 00:38:41.015970 extend-filesystems[1599]: Found loop5 Jan 24 00:38:41.015970 extend-filesystems[1599]: Found loop6 Jan 24 00:38:41.015970 extend-filesystems[1599]: Found loop7 Jan 24 00:38:41.015970 extend-filesystems[1599]: Found sda Jan 24 00:38:41.015970 extend-filesystems[1599]: Found sda1 Jan 24 00:38:41.015970 extend-filesystems[1599]: Found sda2 Jan 24 00:38:41.015970 extend-filesystems[1599]: Found sda3 Jan 24 00:38:41.015970 extend-filesystems[1599]: Found usr Jan 24 00:38:41.015970 extend-filesystems[1599]: Found sda4 Jan 24 00:38:41.015970 extend-filesystems[1599]: Found sda6 Jan 24 00:38:41.015970 extend-filesystems[1599]: Found sda7 Jan 24 00:38:41.015970 extend-filesystems[1599]: Found sda9 Jan 24 00:38:41.015970 extend-filesystems[1599]: Checking size of /dev/sda9 Jan 24 00:38:41.012169 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:38:41.019162 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:38:41.028781 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:38:41.037898 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:38:41.040734 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:38:41.050092 jq[1630]: true Jan 24 00:38:41.054484 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:38:41.054734 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:38:41.064025 update_engine[1626]: I20260124 00:38:41.063516 1626 main.cc:92] Flatcar Update Engine starting Jan 24 00:38:41.075664 update_engine[1626]: I20260124 00:38:41.065120 1626 update_check_scheduler.cc:74] Next update check in 9m41s Jan 24 00:38:41.067309 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 24 00:38:41.067636 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:38:41.069019 systemd-timesyncd[1587]: Contacted time server 178.215.228.24:123 (0.flatcar.pool.ntp.org). Jan 24 00:38:41.069068 systemd-timesyncd[1587]: Initial clock synchronization to Sat 2026-01-24 00:38:41.295575 UTC. Jan 24 00:38:41.082940 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:38:41.083240 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:38:41.094713 extend-filesystems[1599]: Resized partition /dev/sda9 Jan 24 00:38:41.098908 extend-filesystems[1642]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:38:41.119646 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Jan 24 00:38:41.101978 systemd-logind[1621]: New seat seat0. Jan 24 00:38:41.103759 systemd-logind[1621]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 00:38:41.103777 systemd-logind[1621]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:38:41.132276 jq[1643]: true Jan 24 00:38:41.107865 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:38:41.108555 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:38:41.133534 (ntainerd)[1646]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:38:41.158934 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:38:41.161520 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:38:41.161567 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 24 00:38:41.162435 dbus-daemon[1595]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 00:38:41.166237 tar[1639]: linux-amd64/LICENSE Jan 24 00:38:41.166695 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:38:41.166724 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:38:41.170336 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:38:41.177407 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:38:41.182980 tar[1639]: linux-amd64/helm Jan 24 00:38:41.261792 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1283) Jan 24 00:38:41.254171 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:38:41.256208 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:38:41.265292 systemd-networkd[1265]: eth0: Gained IPv6LL Jan 24 00:38:41.303332 bash[1682]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:38:41.297793 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:38:41.317025 systemd[1]: Starting sshkeys.service... Jan 24 00:38:41.347858 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:38:41.356054 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 24 00:38:41.412007 locksmithd[1669]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:38:41.423248 coreos-metadata[1698]: Jan 24 00:38:41.422 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 24 00:38:41.423830 coreos-metadata[1698]: Jan 24 00:38:41.423 INFO Fetch successful Jan 24 00:38:41.436437 unknown[1698]: wrote ssh authorized keys file for user: core Jan 24 00:38:41.445229 containerd[1646]: time="2026-01-24T00:38:41.445169178Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:38:41.506561 containerd[1646]: time="2026-01-24T00:38:41.506517734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:38:41.508363 containerd[1646]: time="2026-01-24T00:38:41.508340195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:38:41.508398 containerd[1646]: time="2026-01-24T00:38:41.508385295Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:38:41.508412 containerd[1646]: time="2026-01-24T00:38:41.508400955Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:38:41.509768 containerd[1646]: time="2026-01-24T00:38:41.509752795Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:38:41.509793 containerd[1646]: time="2026-01-24T00:38:41.509783655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 24 00:38:41.509867 containerd[1646]: time="2026-01-24T00:38:41.509853735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:38:41.509902 containerd[1646]: time="2026-01-24T00:38:41.509866475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:38:41.515786 containerd[1646]: time="2026-01-24T00:38:41.510058845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:38:41.515786 containerd[1646]: time="2026-01-24T00:38:41.510070525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:38:41.515786 containerd[1646]: time="2026-01-24T00:38:41.510079355Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:38:41.515786 containerd[1646]: time="2026-01-24T00:38:41.510085975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:38:41.515786 containerd[1646]: time="2026-01-24T00:38:41.510956466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:38:41.515786 containerd[1646]: time="2026-01-24T00:38:41.511143076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:38:41.512554 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Jan 24 00:38:41.515980 update-ssh-keys[1707]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:38:41.517666 systemd[1]: Finished sshkeys.service. Jan 24 00:38:41.519992 containerd[1646]: time="2026-01-24T00:38:41.511756796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:38:41.519992 containerd[1646]: time="2026-01-24T00:38:41.519969939Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:38:41.520322 containerd[1646]: time="2026-01-24T00:38:41.520063649Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:38:41.520322 containerd[1646]: time="2026-01-24T00:38:41.520107480Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:38:41.546536 containerd[1646]: time="2026-01-24T00:38:41.546398590Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:38:41.546536 containerd[1646]: time="2026-01-24T00:38:41.546436540Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:38:41.546536 containerd[1646]: time="2026-01-24T00:38:41.546448970Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:38:41.546536 containerd[1646]: time="2026-01-24T00:38:41.546460140Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:38:41.546536 containerd[1646]: time="2026-01-24T00:38:41.546471260Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 24 00:38:41.546652 containerd[1646]: time="2026-01-24T00:38:41.546596091Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.546814971Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.546913151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.546924511Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.546933821Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.546943801Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.546952691Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.546961531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.546971511Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.546982321Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.546991371Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.547000141Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.547009271Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.547038391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547271 containerd[1646]: time="2026-01-24T00:38:41.547049331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547057401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547066461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547075271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547084151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547092161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547100361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547109021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547118441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547126591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547134751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547142901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547154041Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547167731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547175691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547617 containerd[1646]: time="2026-01-24T00:38:41.547187151Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:38:41.547812 containerd[1646]: time="2026-01-24T00:38:41.547216681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:38:41.547812 containerd[1646]: time="2026-01-24T00:38:41.547227461Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:38:41.547812 containerd[1646]: time="2026-01-24T00:38:41.547235981Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:38:41.547812 containerd[1646]: time="2026-01-24T00:38:41.547244391Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:38:41.547812 containerd[1646]: time="2026-01-24T00:38:41.547250941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:38:41.547812 containerd[1646]: time="2026-01-24T00:38:41.547259321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:38:41.547812 containerd[1646]: time="2026-01-24T00:38:41.547266651Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:38:41.547812 containerd[1646]: time="2026-01-24T00:38:41.547274341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 24 00:38:41.547929 containerd[1646]: time="2026-01-24T00:38:41.547475531Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:38:41.547929 containerd[1646]: time="2026-01-24T00:38:41.547513641Z" level=info msg="Connect containerd service" Jan 24 00:38:41.547929 containerd[1646]: time="2026-01-24T00:38:41.547538201Z" level=info msg="using legacy CRI server" Jan 24 00:38:41.547929 containerd[1646]: time="2026-01-24T00:38:41.547542981Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:38:41.547929 containerd[1646]: time="2026-01-24T00:38:41.547611301Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:38:41.552273 containerd[1646]: time="2026-01-24T00:38:41.551227332Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:38:41.552273 containerd[1646]: time="2026-01-24T00:38:41.551348143Z" level=info msg="Start subscribing containerd event" Jan 24 00:38:41.552273 containerd[1646]: time="2026-01-24T00:38:41.551395363Z" level=info msg="Start recovering state" Jan 24 00:38:41.552273 containerd[1646]: time="2026-01-24T00:38:41.551630353Z" level=info msg="Start event monitor" Jan 24 00:38:41.552273 containerd[1646]: time="2026-01-24T00:38:41.551653363Z" 
level=info msg="Start snapshots syncer" Jan 24 00:38:41.552273 containerd[1646]: time="2026-01-24T00:38:41.551660553Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:38:41.552273 containerd[1646]: time="2026-01-24T00:38:41.551667653Z" level=info msg="Start streaming server" Jan 24 00:38:41.555087 containerd[1646]: time="2026-01-24T00:38:41.554756104Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:38:41.555087 containerd[1646]: time="2026-01-24T00:38:41.554831774Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:38:41.554970 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:38:41.565262 containerd[1646]: time="2026-01-24T00:38:41.565126958Z" level=info msg="containerd successfully booted in 0.124604s" Jan 24 00:38:41.580100 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Jan 24 00:38:41.605130 extend-filesystems[1642]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 24 00:38:41.605130 extend-filesystems[1642]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 24 00:38:41.605130 extend-filesystems[1642]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. Jan 24 00:38:41.617204 extend-filesystems[1599]: Resized filesystem in /dev/sda9 Jan 24 00:38:41.617204 extend-filesystems[1599]: Found sr0 Jan 24 00:38:41.609394 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:38:41.609685 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:38:41.644702 sshd_keygen[1628]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:38:41.690834 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:38:41.705782 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:38:41.721368 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:38:41.721615 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jan 24 00:38:41.733045 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:38:41.750173 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:38:41.757185 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:38:41.763236 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:38:41.765117 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:38:41.979750 tar[1639]: linux-amd64/README.md Jan 24 00:38:41.996151 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:38:42.374090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:38:42.376787 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:38:42.379828 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:38:42.382641 systemd[1]: Startup finished in 8.288s (kernel) + 4.858s (userspace) = 13.147s. Jan 24 00:38:43.021357 kubelet[1751]: E0124 00:38:43.021244 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:38:43.027774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:38:43.028110 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:38:45.755169 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:38:45.761473 systemd[1]: Started sshd@0-46.62.237.128:22-20.161.92.111:59440.service - OpenSSH per-connection server daemon (20.161.92.111:59440). 
Jan 24 00:38:46.536943 sshd[1765]: Accepted publickey for core from 20.161.92.111 port 59440 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:38:46.540632 sshd[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:46.557085 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:38:46.567140 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:38:46.569278 systemd-logind[1621]: New session 1 of user core. Jan 24 00:38:46.595448 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:38:46.602558 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:38:46.607833 (systemd)[1771]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:38:46.741145 systemd[1771]: Queued start job for default target default.target. Jan 24 00:38:46.741498 systemd[1771]: Created slice app.slice - User Application Slice. Jan 24 00:38:46.741514 systemd[1771]: Reached target paths.target - Paths. Jan 24 00:38:46.741525 systemd[1771]: Reached target timers.target - Timers. Jan 24 00:38:46.749911 systemd[1771]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:38:46.761798 systemd[1771]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:38:46.761882 systemd[1771]: Reached target sockets.target - Sockets. Jan 24 00:38:46.761894 systemd[1771]: Reached target basic.target - Basic System. Jan 24 00:38:46.761931 systemd[1771]: Reached target default.target - Main User Target. Jan 24 00:38:46.761961 systemd[1771]: Startup finished in 144ms. Jan 24 00:38:46.762100 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:38:46.771188 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 24 00:38:47.322206 systemd[1]: Started sshd@1-46.62.237.128:22-20.161.92.111:59442.service - OpenSSH per-connection server daemon (20.161.92.111:59442).
Jan 24 00:38:48.098463 sshd[1783]: Accepted publickey for core from 20.161.92.111 port 59442 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:38:48.101299 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:38:48.109945 systemd-logind[1621]: New session 2 of user core.
Jan 24 00:38:48.120332 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 24 00:38:48.645329 sshd[1783]: pam_unix(sshd:session): session closed for user core
Jan 24 00:38:48.653002 systemd[1]: sshd@1-46.62.237.128:22-20.161.92.111:59442.service: Deactivated successfully.
Jan 24 00:38:48.658161 systemd-logind[1621]: Session 2 logged out. Waiting for processes to exit.
Jan 24 00:38:48.659317 systemd[1]: session-2.scope: Deactivated successfully.
Jan 24 00:38:48.661230 systemd-logind[1621]: Removed session 2.
Jan 24 00:38:48.777301 systemd[1]: Started sshd@2-46.62.237.128:22-20.161.92.111:59458.service - OpenSSH per-connection server daemon (20.161.92.111:59458).
Jan 24 00:38:49.562456 sshd[1791]: Accepted publickey for core from 20.161.92.111 port 59458 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:38:49.564855 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:38:49.570429 systemd-logind[1621]: New session 3 of user core.
Jan 24 00:38:49.584340 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 24 00:38:50.102071 sshd[1791]: pam_unix(sshd:session): session closed for user core
Jan 24 00:38:50.108334 systemd[1]: sshd@2-46.62.237.128:22-20.161.92.111:59458.service: Deactivated successfully.
Jan 24 00:38:50.114168 systemd-logind[1621]: Session 3 logged out. Waiting for processes to exit.
Jan 24 00:38:50.116471 systemd[1]: session-3.scope: Deactivated successfully.
Jan 24 00:38:50.120136 systemd-logind[1621]: Removed session 3.
Jan 24 00:38:50.231544 systemd[1]: Started sshd@3-46.62.237.128:22-20.161.92.111:59468.service - OpenSSH per-connection server daemon (20.161.92.111:59468).
Jan 24 00:38:51.017085 sshd[1799]: Accepted publickey for core from 20.161.92.111 port 59468 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:38:51.020466 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:38:51.028648 systemd-logind[1621]: New session 4 of user core.
Jan 24 00:38:51.038322 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 24 00:38:51.560865 sshd[1799]: pam_unix(sshd:session): session closed for user core
Jan 24 00:38:51.565771 systemd[1]: sshd@3-46.62.237.128:22-20.161.92.111:59468.service: Deactivated successfully.
Jan 24 00:38:51.573779 systemd-logind[1621]: Session 4 logged out. Waiting for processes to exit.
Jan 24 00:38:51.574530 systemd[1]: session-4.scope: Deactivated successfully.
Jan 24 00:38:51.576717 systemd-logind[1621]: Removed session 4.
Jan 24 00:38:51.690571 systemd[1]: Started sshd@4-46.62.237.128:22-20.161.92.111:59476.service - OpenSSH per-connection server daemon (20.161.92.111:59476).
Jan 24 00:38:52.475370 sshd[1807]: Accepted publickey for core from 20.161.92.111 port 59476 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:38:52.478132 sshd[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:38:52.485942 systemd-logind[1621]: New session 5 of user core.
Jan 24 00:38:52.495304 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 24 00:38:52.898389 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 24 00:38:52.898723 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:38:52.913539 sudo[1811]: pam_unix(sudo:session): session closed for user root
Jan 24 00:38:53.038231 sshd[1807]: pam_unix(sshd:session): session closed for user core
Jan 24 00:38:53.046767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 24 00:38:53.049197 systemd[1]: sshd@4-46.62.237.128:22-20.161.92.111:59476.service: Deactivated successfully.
Jan 24 00:38:53.055128 systemd[1]: session-5.scope: Deactivated successfully.
Jan 24 00:38:53.056950 systemd-logind[1621]: Session 5 logged out. Waiting for processes to exit.
Jan 24 00:38:53.062727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:38:53.064011 systemd-logind[1621]: Removed session 5.
Jan 24 00:38:53.175271 systemd[1]: Started sshd@5-46.62.237.128:22-20.161.92.111:48032.service - OpenSSH per-connection server daemon (20.161.92.111:48032).
Jan 24 00:38:53.208134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:38:53.208301 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 00:38:53.249938 kubelet[1829]: E0124 00:38:53.249877 1829 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 00:38:53.254813 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 00:38:53.256371 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 00:38:53.963057 sshd[1820]: Accepted publickey for core from 20.161.92.111 port 48032 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:38:53.965803 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:38:53.974928 systemd-logind[1621]: New session 6 of user core.
Jan 24 00:38:53.985497 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 24 00:38:54.382433 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 24 00:38:54.383212 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:38:54.389607 sudo[1841]: pam_unix(sudo:session): session closed for user root
Jan 24 00:38:54.401050 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 24 00:38:54.401697 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:38:54.426567 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 24 00:38:54.430244 auditctl[1844]: No rules
Jan 24 00:38:54.431137 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 24 00:38:54.431746 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 24 00:38:54.442412 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 00:38:54.494641 augenrules[1863]: No rules
Jan 24 00:38:54.497701 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 00:38:54.503608 sudo[1840]: pam_unix(sudo:session): session closed for user root
Jan 24 00:38:54.628262 sshd[1820]: pam_unix(sshd:session): session closed for user core
Jan 24 00:38:54.633948 systemd[1]: sshd@5-46.62.237.128:22-20.161.92.111:48032.service: Deactivated successfully.
Jan 24 00:38:54.639461 systemd-logind[1621]: Session 6 logged out. Waiting for processes to exit.
Jan 24 00:38:54.641651 systemd[1]: session-6.scope: Deactivated successfully.
Jan 24 00:38:54.645147 systemd-logind[1621]: Removed session 6.
Jan 24 00:38:54.756707 systemd[1]: Started sshd@6-46.62.237.128:22-20.161.92.111:48044.service - OpenSSH per-connection server daemon (20.161.92.111:48044).
Jan 24 00:38:55.536253 sshd[1872]: Accepted publickey for core from 20.161.92.111 port 48044 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:38:55.538986 sshd[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:38:55.547764 systemd-logind[1621]: New session 7 of user core.
Jan 24 00:38:55.554326 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 24 00:38:55.955480 sudo[1876]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 24 00:38:55.956402 sudo[1876]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:38:56.411188 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 24 00:38:56.424727 (dockerd)[1893]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 24 00:38:56.840772 dockerd[1893]: time="2026-01-24T00:38:56.840683053Z" level=info msg="Starting up"
Jan 24 00:38:56.952527 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3647361130-merged.mount: Deactivated successfully.
Jan 24 00:38:57.006641 systemd[1]: var-lib-docker-metacopy\x2dcheck2846853297-merged.mount: Deactivated successfully.
Jan 24 00:38:57.042596 dockerd[1893]: time="2026-01-24T00:38:57.042527120Z" level=info msg="Loading containers: start."
Jan 24 00:38:57.243967 kernel: Initializing XFRM netlink socket
Jan 24 00:38:57.386313 systemd-networkd[1265]: docker0: Link UP
Jan 24 00:38:57.410723 dockerd[1893]: time="2026-01-24T00:38:57.410666288Z" level=info msg="Loading containers: done."
Jan 24 00:38:57.431957 dockerd[1893]: time="2026-01-24T00:38:57.431891511Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 24 00:38:57.432191 dockerd[1893]: time="2026-01-24T00:38:57.431996863Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 24 00:38:57.432191 dockerd[1893]: time="2026-01-24T00:38:57.432145388Z" level=info msg="Daemon has completed initialization"
Jan 24 00:38:57.473067 dockerd[1893]: time="2026-01-24T00:38:57.472981123Z" level=info msg="API listen on /run/docker.sock"
Jan 24 00:38:57.473633 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 24 00:38:58.657695 containerd[1646]: time="2026-01-24T00:38:58.657601893Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 24 00:38:59.280215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135219943.mount: Deactivated successfully.
Jan 24 00:39:00.748457 containerd[1646]: time="2026-01-24T00:39:00.748364191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:00.750163 containerd[1646]: time="2026-01-24T00:39:00.749839861Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070747"
Jan 24 00:39:00.752857 containerd[1646]: time="2026-01-24T00:39:00.751129988Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:00.754665 containerd[1646]: time="2026-01-24T00:39:00.754622712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:00.755444 containerd[1646]: time="2026-01-24T00:39:00.755409603Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.097757848s"
Jan 24 00:39:00.755444 containerd[1646]: time="2026-01-24T00:39:00.755440371Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\""
Jan 24 00:39:00.757127 containerd[1646]: time="2026-01-24T00:39:00.757087571Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 24 00:39:02.050922 containerd[1646]: time="2026-01-24T00:39:02.050878755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:02.052034 containerd[1646]: time="2026-01-24T00:39:02.051849775Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993376"
Jan 24 00:39:02.054438 containerd[1646]: time="2026-01-24T00:39:02.052858019Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:02.058010 containerd[1646]: time="2026-01-24T00:39:02.057071157Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.299947788s"
Jan 24 00:39:02.058010 containerd[1646]: time="2026-01-24T00:39:02.057094432Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\""
Jan 24 00:39:02.058010 containerd[1646]: time="2026-01-24T00:39:02.057423446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:02.058195 containerd[1646]: time="2026-01-24T00:39:02.058175188Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 24 00:39:03.298672 containerd[1646]: time="2026-01-24T00:39:03.298609979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:03.300379 containerd[1646]: time="2026-01-24T00:39:03.300072455Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405098"
Jan 24 00:39:03.302865 containerd[1646]: time="2026-01-24T00:39:03.301358760Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:03.304496 containerd[1646]: time="2026-01-24T00:39:03.304469479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:03.305683 containerd[1646]: time="2026-01-24T00:39:03.305651955Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.24745768s"
Jan 24 00:39:03.305683 containerd[1646]: time="2026-01-24T00:39:03.305677138Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\""
Jan 24 00:39:03.306532 containerd[1646]: time="2026-01-24T00:39:03.306506069Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 24 00:39:03.473725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 24 00:39:03.482129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:39:03.674928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:39:03.690616 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 00:39:03.751838 kubelet[2107]: E0124 00:39:03.751749 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 00:39:03.757679 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 00:39:03.759377 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 00:39:04.554746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704079460.mount: Deactivated successfully.
Jan 24 00:39:04.951798 containerd[1646]: time="2026-01-24T00:39:04.951677688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:04.952885 containerd[1646]: time="2026-01-24T00:39:04.952815604Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161927"
Jan 24 00:39:04.953929 containerd[1646]: time="2026-01-24T00:39:04.953899760Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:04.955892 containerd[1646]: time="2026-01-24T00:39:04.955862023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:04.956729 containerd[1646]: time="2026-01-24T00:39:04.956576714Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.649962111s"
Jan 24 00:39:04.956729 containerd[1646]: time="2026-01-24T00:39:04.956631957Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\""
Jan 24 00:39:04.957731 containerd[1646]: time="2026-01-24T00:39:04.957519708Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 24 00:39:05.511353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844989142.mount: Deactivated successfully.
Jan 24 00:39:06.616059 containerd[1646]: time="2026-01-24T00:39:06.615982663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:06.617346 containerd[1646]: time="2026-01-24T00:39:06.617310238Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335"
Jan 24 00:39:06.619406 containerd[1646]: time="2026-01-24T00:39:06.618023637Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:06.621382 containerd[1646]: time="2026-01-24T00:39:06.620309773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:06.621382 containerd[1646]: time="2026-01-24T00:39:06.621027497Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.663476224s"
Jan 24 00:39:06.621382 containerd[1646]: time="2026-01-24T00:39:06.621060645Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jan 24 00:39:06.621679 containerd[1646]: time="2026-01-24T00:39:06.621651183Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 24 00:39:07.103148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3647979481.mount: Deactivated successfully.
Jan 24 00:39:07.109023 containerd[1646]: time="2026-01-24T00:39:07.108929393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:07.110389 containerd[1646]: time="2026-01-24T00:39:07.110308920Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Jan 24 00:39:07.112862 containerd[1646]: time="2026-01-24T00:39:07.111286752Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:07.114936 containerd[1646]: time="2026-01-24T00:39:07.114886902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:07.116153 containerd[1646]: time="2026-01-24T00:39:07.116096337Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 494.335099ms"
Jan 24 00:39:07.116153 containerd[1646]: time="2026-01-24T00:39:07.116149511Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 24 00:39:07.117071 containerd[1646]: time="2026-01-24T00:39:07.117015332Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 24 00:39:07.691809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4076176705.mount: Deactivated successfully.
Jan 24 00:39:09.369226 containerd[1646]: time="2026-01-24T00:39:09.369189095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:09.370078 containerd[1646]: time="2026-01-24T00:39:09.369966414Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682132"
Jan 24 00:39:09.370937 containerd[1646]: time="2026-01-24T00:39:09.370631766Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:09.372636 containerd[1646]: time="2026-01-24T00:39:09.372609348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:39:09.373635 containerd[1646]: time="2026-01-24T00:39:09.373433883Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.256378152s"
Jan 24 00:39:09.373635 containerd[1646]: time="2026-01-24T00:39:09.373455159Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jan 24 00:39:13.594388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:39:13.600025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:39:13.640600 systemd[1]: Reloading requested from client PID 2262 ('systemctl') (unit session-7.scope)...
Jan 24 00:39:13.640631 systemd[1]: Reloading...
Jan 24 00:39:13.784882 zram_generator::config[2306]: No configuration found.
Jan 24 00:39:13.873079 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:39:13.931860 systemd[1]: Reloading finished in 290 ms.
Jan 24 00:39:13.976744 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 24 00:39:13.976862 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 24 00:39:13.977194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:39:13.984464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:39:14.107943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:39:14.120677 (kubelet)[2365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 24 00:39:14.190179 kubelet[2365]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:39:14.190179 kubelet[2365]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 24 00:39:14.190179 kubelet[2365]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:39:14.190179 kubelet[2365]: I0124 00:39:14.189545 2365 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 24 00:39:14.351838 kubelet[2365]: I0124 00:39:14.351799 2365 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 24 00:39:14.351953 kubelet[2365]: I0124 00:39:14.351945 2365 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 00:39:14.352186 kubelet[2365]: I0124 00:39:14.352178 2365 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 24 00:39:14.373314 kubelet[2365]: I0124 00:39:14.373295 2365 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 24 00:39:14.376300 kubelet[2365]: E0124 00:39:14.376251 2365 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://46.62.237.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.62.237.128:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:39:14.380845 kubelet[2365]: E0124 00:39:14.378755 2365 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 24 00:39:14.380845 kubelet[2365]: I0124 00:39:14.378774 2365 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 24 00:39:14.381724 kubelet[2365]: I0124 00:39:14.381687 2365 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 24 00:39:14.382068 kubelet[2365]: I0124 00:39:14.381998 2365 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 00:39:14.382133 kubelet[2365]: I0124 00:39:14.382018 2365 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-3213f37a88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jan 24 00:39:14.382906 kubelet[2365]: I0124 00:39:14.382869 2365 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 00:39:14.382906 kubelet[2365]: I0124 00:39:14.382883 2365 container_manager_linux.go:304] "Creating device plugin manager"
Jan 24 00:39:14.382999 kubelet[2365]: I0124 00:39:14.382994 2365 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:39:14.386710 kubelet[2365]: I0124 00:39:14.386684 2365 kubelet.go:446] "Attempting to sync node with API server"
Jan 24 00:39:14.386710 kubelet[2365]: I0124 00:39:14.386706 2365 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 00:39:14.386710 kubelet[2365]: I0124 00:39:14.386719 2365 kubelet.go:352] "Adding apiserver pod source"
Jan 24 00:39:14.386710 kubelet[2365]: I0124 00:39:14.386726 2365 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 00:39:14.393873 kubelet[2365]: W0124 00:39:14.392556 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.62.237.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.62.237.128:6443: connect: connection refused
Jan 24 00:39:14.393873 kubelet[2365]: E0124 00:39:14.392611 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.62.237.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.237.128:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:39:14.393873 kubelet[2365]: W0124 00:39:14.392674 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.62.237.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-3213f37a88&limit=500&resourceVersion=0": dial tcp 46.62.237.128:6443: connect: connection refused
Jan 24 00:39:14.393873 kubelet[2365]: E0124 00:39:14.392692 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.62.237.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-3213f37a88&limit=500&resourceVersion=0\": dial tcp 46.62.237.128:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:39:14.393873 kubelet[2365]: I0124 00:39:14.392782 2365 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 24 00:39:14.393873 kubelet[2365]: I0124 00:39:14.393118 2365 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 24 00:39:14.393873 kubelet[2365]: W0124 00:39:14.393173 2365 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 24 00:39:14.394886 kubelet[2365]: I0124 00:39:14.394862 2365 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:39:14.394886 kubelet[2365]: I0124 00:39:14.394890 2365 server.go:1287] "Started kubelet" Jan 24 00:39:14.408574 kubelet[2365]: I0124 00:39:14.408529 2365 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:39:14.408759 kubelet[2365]: E0124 00:39:14.405890 2365 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.62.237.128:6443/api/v1/namespaces/default/events\": dial tcp 46.62.237.128:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-3213f37a88.188d83cf28f1ca64 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-3213f37a88,UID:ci-4081-3-6-n-3213f37a88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-3213f37a88,},FirstTimestamp:2026-01-24 00:39:14.394876516 +0000 UTC m=+0.267017301,LastTimestamp:2026-01-24 00:39:14.394876516 +0000 UTC m=+0.267017301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-3213f37a88,}" Jan 24 00:39:14.411208 kubelet[2365]: I0124 00:39:14.409178 2365 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:39:14.411208 kubelet[2365]: I0124 00:39:14.409564 2365 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:39:14.411208 kubelet[2365]: I0124 00:39:14.410536 2365 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:39:14.413648 kubelet[2365]: I0124 00:39:14.413612 2365 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:39:14.415615 kubelet[2365]: I0124 00:39:14.415581 2365 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:39:14.416209 kubelet[2365]: I0124 00:39:14.416178 2365 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:39:14.416276 kubelet[2365]: I0124 00:39:14.416235 2365 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:39:14.416276 kubelet[2365]: I0124 00:39:14.416270 2365 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:39:14.416998 kubelet[2365]: W0124 00:39:14.416959 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://46.62.237.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.62.237.128:6443: connect: connection refused Jan 24 00:39:14.416998 kubelet[2365]: E0124 00:39:14.416995 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.62.237.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.237.128:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:39:14.417414 kubelet[2365]: E0124 00:39:14.417349 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3213f37a88\" not found" Jan 24 00:39:14.417809 kubelet[2365]: I0124 00:39:14.417418 2365 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:39:14.418233 kubelet[2365]: I0124 00:39:14.418194 2365 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:39:14.419090 kubelet[2365]: I0124 00:39:14.419060 2365 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:39:14.419364 kubelet[2365]: E0124 00:39:14.419301 2365 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.237.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-3213f37a88?timeout=10s\": dial tcp 46.62.237.128:6443: connect: connection refused" interval="200ms" Jan 24 00:39:14.426053 kubelet[2365]: E0124 00:39:14.426023 2365 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:39:14.457135 kubelet[2365]: I0124 00:39:14.455638 2365 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:39:14.457135 kubelet[2365]: I0124 00:39:14.455650 2365 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:39:14.457135 kubelet[2365]: I0124 00:39:14.455664 2365 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:39:14.459103 kubelet[2365]: I0124 00:39:14.459085 2365 policy_none.go:49] "None policy: Start" Jan 24 00:39:14.459171 kubelet[2365]: I0124 00:39:14.459165 2365 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:39:14.459207 kubelet[2365]: I0124 00:39:14.459201 2365 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:39:14.463615 kubelet[2365]: I0124 00:39:14.463601 2365 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:39:14.463814 kubelet[2365]: I0124 00:39:14.463804 2365 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:39:14.463896 kubelet[2365]: I0124 00:39:14.463878 2365 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:39:14.465306 kubelet[2365]: I0124 00:39:14.465294 2365 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:39:14.465727 kubelet[2365]: I0124 00:39:14.465708 2365 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 24 00:39:14.468114 kubelet[2365]: I0124 00:39:14.468102 2365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:39:14.468164 kubelet[2365]: I0124 00:39:14.468158 2365 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:39:14.468203 kubelet[2365]: I0124 00:39:14.468196 2365 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:39:14.468238 kubelet[2365]: I0124 00:39:14.468233 2365 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:39:14.468295 kubelet[2365]: E0124 00:39:14.468287 2365 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 24 00:39:14.486594 kubelet[2365]: W0124 00:39:14.486523 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://46.62.237.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 46.62.237.128:6443: connect: connection refused Jan 24 00:39:14.486653 kubelet[2365]: E0124 00:39:14.486619 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://46.62.237.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.237.128:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:39:14.487640 kubelet[2365]: E0124 00:39:14.487625 2365 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:39:14.487720 kubelet[2365]: E0124 00:39:14.487712 2365 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-3213f37a88\" not found" Jan 24 00:39:14.567129 kubelet[2365]: I0124 00:39:14.567042 2365 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.567560 kubelet[2365]: E0124 00:39:14.567479 2365 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.237.128:6443/api/v1/nodes\": dial tcp 46.62.237.128:6443: connect: connection refused" node="ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.582872 kubelet[2365]: E0124 00:39:14.582109 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3213f37a88\" not found" node="ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.588687 kubelet[2365]: E0124 00:39:14.588648 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3213f37a88\" not found" node="ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.594162 kubelet[2365]: E0124 00:39:14.594134 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3213f37a88\" not found" node="ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.617734 kubelet[2365]: I0124 00:39:14.617650 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f0eae8e66a55252431479081818f6b7-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-3213f37a88\" (UID: \"9f0eae8e66a55252431479081818f6b7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.617936 kubelet[2365]: I0124 00:39:14.617773 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f0eae8e66a55252431479081818f6b7-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-3213f37a88\" (UID: \"9f0eae8e66a55252431479081818f6b7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.617936 kubelet[2365]: I0124 00:39:14.617805 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b8cfa8ea2b4f5d953b28bd53184242f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-3213f37a88\" (UID: \"7b8cfa8ea2b4f5d953b28bd53184242f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.617936 kubelet[2365]: I0124 00:39:14.617896 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b8cfa8ea2b4f5d953b28bd53184242f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-3213f37a88\" (UID: \"7b8cfa8ea2b4f5d953b28bd53184242f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.618074 kubelet[2365]: I0124 00:39:14.617962 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b8cfa8ea2b4f5d953b28bd53184242f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-3213f37a88\" (UID: \"7b8cfa8ea2b4f5d953b28bd53184242f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.618074 kubelet[2365]: I0124 00:39:14.617989 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f0eae8e66a55252431479081818f6b7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-3213f37a88\" (UID: \"9f0eae8e66a55252431479081818f6b7\") " 
pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.618174 kubelet[2365]: I0124 00:39:14.618075 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b8cfa8ea2b4f5d953b28bd53184242f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-3213f37a88\" (UID: \"7b8cfa8ea2b4f5d953b28bd53184242f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.618174 kubelet[2365]: I0124 00:39:14.618098 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b8cfa8ea2b4f5d953b28bd53184242f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-3213f37a88\" (UID: \"7b8cfa8ea2b4f5d953b28bd53184242f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.618174 kubelet[2365]: I0124 00:39:14.618166 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa8a7cfff749e18658bb22ede28a78f4-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-3213f37a88\" (UID: \"fa8a7cfff749e18658bb22ede28a78f4\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.620358 kubelet[2365]: E0124 00:39:14.620318 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.237.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-3213f37a88?timeout=10s\": dial tcp 46.62.237.128:6443: connect: connection refused" interval="400ms" Jan 24 00:39:14.770458 kubelet[2365]: I0124 00:39:14.770388 2365 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.771090 kubelet[2365]: E0124 00:39:14.771024 2365 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://46.62.237.128:6443/api/v1/nodes\": dial tcp 46.62.237.128:6443: connect: connection refused" node="ci-4081-3-6-n-3213f37a88" Jan 24 00:39:14.885454 containerd[1646]: time="2026-01-24T00:39:14.885381895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-3213f37a88,Uid:9f0eae8e66a55252431479081818f6b7,Namespace:kube-system,Attempt:0,}" Jan 24 00:39:14.890420 containerd[1646]: time="2026-01-24T00:39:14.890362268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-3213f37a88,Uid:7b8cfa8ea2b4f5d953b28bd53184242f,Namespace:kube-system,Attempt:0,}" Jan 24 00:39:14.896154 containerd[1646]: time="2026-01-24T00:39:14.896064438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-3213f37a88,Uid:fa8a7cfff749e18658bb22ede28a78f4,Namespace:kube-system,Attempt:0,}" Jan 24 00:39:15.021073 kubelet[2365]: E0124 00:39:15.020919 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.237.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-3213f37a88?timeout=10s\": dial tcp 46.62.237.128:6443: connect: connection refused" interval="800ms" Jan 24 00:39:15.173619 kubelet[2365]: I0124 00:39:15.173568 2365 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3213f37a88" Jan 24 00:39:15.174034 kubelet[2365]: E0124 00:39:15.174001 2365 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.237.128:6443/api/v1/nodes\": dial tcp 46.62.237.128:6443: connect: connection refused" node="ci-4081-3-6-n-3213f37a88" Jan 24 00:39:15.292983 kubelet[2365]: W0124 00:39:15.292707 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://46.62.237.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.62.237.128:6443: connect: connection refused Jan 24 
00:39:15.292983 kubelet[2365]: E0124 00:39:15.292881 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.62.237.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.237.128:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:39:15.330461 kubelet[2365]: W0124 00:39:15.330371 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://46.62.237.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 46.62.237.128:6443: connect: connection refused Jan 24 00:39:15.330461 kubelet[2365]: E0124 00:39:15.330463 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://46.62.237.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.237.128:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:39:15.359488 kubelet[2365]: W0124 00:39:15.359304 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.62.237.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.62.237.128:6443: connect: connection refused Jan 24 00:39:15.359488 kubelet[2365]: E0124 00:39:15.359380 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.62.237.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.237.128:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:39:15.375497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount201284895.mount: Deactivated successfully. 
Jan 24 00:39:15.383676 containerd[1646]: time="2026-01-24T00:39:15.383568987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:39:15.385136 containerd[1646]: time="2026-01-24T00:39:15.385052344Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:39:15.386019 containerd[1646]: time="2026-01-24T00:39:15.385958554Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:39:15.387410 containerd[1646]: time="2026-01-24T00:39:15.387341867Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:39:15.388989 containerd[1646]: time="2026-01-24T00:39:15.388888800Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Jan 24 00:39:15.389665 containerd[1646]: time="2026-01-24T00:39:15.389535217Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:39:15.389665 containerd[1646]: time="2026-01-24T00:39:15.389586250Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:39:15.399314 containerd[1646]: time="2026-01-24T00:39:15.399106071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:39:15.401986 
containerd[1646]: time="2026-01-24T00:39:15.401936563Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 511.488036ms" Jan 24 00:39:15.405484 containerd[1646]: time="2026-01-24T00:39:15.405195255Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 509.015342ms" Jan 24 00:39:15.414621 containerd[1646]: time="2026-01-24T00:39:15.414542930Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 529.053585ms" Jan 24 00:39:15.543148 kubelet[2365]: W0124 00:39:15.541981 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.62.237.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-3213f37a88&limit=500&resourceVersion=0": dial tcp 46.62.237.128:6443: connect: connection refused Jan 24 00:39:15.543148 kubelet[2365]: E0124 00:39:15.542088 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.62.237.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-3213f37a88&limit=500&resourceVersion=0\": dial tcp 46.62.237.128:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:39:15.574868 
containerd[1646]: time="2026-01-24T00:39:15.574717893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:15.575394 containerd[1646]: time="2026-01-24T00:39:15.575332094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:15.575451 containerd[1646]: time="2026-01-24T00:39:15.575416382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:15.576067 containerd[1646]: time="2026-01-24T00:39:15.576009904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:15.576343 containerd[1646]: time="2026-01-24T00:39:15.576236414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:15.576970 containerd[1646]: time="2026-01-24T00:39:15.576480082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:15.576970 containerd[1646]: time="2026-01-24T00:39:15.576716037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:15.577926 containerd[1646]: time="2026-01-24T00:39:15.577758938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:15.591376 containerd[1646]: time="2026-01-24T00:39:15.590990742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:15.591376 containerd[1646]: time="2026-01-24T00:39:15.591093928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:15.591376 containerd[1646]: time="2026-01-24T00:39:15.591117198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:15.591376 containerd[1646]: time="2026-01-24T00:39:15.591254009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:15.657000 containerd[1646]: time="2026-01-24T00:39:15.656964129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-3213f37a88,Uid:7b8cfa8ea2b4f5d953b28bd53184242f,Namespace:kube-system,Attempt:0,} returns sandbox id \"af0884dbcaac70dac6943dd49fe15ea6ee3c8c8f922c7382c72c39f4c78f9d94\"" Jan 24 00:39:15.663716 containerd[1646]: time="2026-01-24T00:39:15.663693076Z" level=info msg="CreateContainer within sandbox \"af0884dbcaac70dac6943dd49fe15ea6ee3c8c8f922c7382c72c39f4c78f9d94\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:39:15.664480 containerd[1646]: time="2026-01-24T00:39:15.664454883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-3213f37a88,Uid:9f0eae8e66a55252431479081818f6b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"39b953aa230dfed792e3667cdb9db8e3340dcd807cdba2c998882c3fd33fc4df\"" Jan 24 00:39:15.666310 containerd[1646]: time="2026-01-24T00:39:15.666294967Z" level=info msg="CreateContainer within sandbox \"39b953aa230dfed792e3667cdb9db8e3340dcd807cdba2c998882c3fd33fc4df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:39:15.675481 containerd[1646]: time="2026-01-24T00:39:15.675451548Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-3213f37a88,Uid:fa8a7cfff749e18658bb22ede28a78f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8015e92497f076e672c9c96d18d13b9a70d16642f774d4e54668258ec8f5462d\"" Jan 24 00:39:15.676782 containerd[1646]: time="2026-01-24T00:39:15.676760927Z" level=info msg="CreateContainer within sandbox \"8015e92497f076e672c9c96d18d13b9a70d16642f774d4e54668258ec8f5462d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:39:15.680601 containerd[1646]: time="2026-01-24T00:39:15.680579747Z" level=info msg="CreateContainer within sandbox \"af0884dbcaac70dac6943dd49fe15ea6ee3c8c8f922c7382c72c39f4c78f9d94\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"afbc8cc00a36b9448022c0a6dc97a20c22253ac7d3ad3bbb4aab66c26061af2e\"" Jan 24 00:39:15.681025 containerd[1646]: time="2026-01-24T00:39:15.681011898Z" level=info msg="StartContainer for \"afbc8cc00a36b9448022c0a6dc97a20c22253ac7d3ad3bbb4aab66c26061af2e\"" Jan 24 00:39:15.684312 containerd[1646]: time="2026-01-24T00:39:15.684254162Z" level=info msg="CreateContainer within sandbox \"39b953aa230dfed792e3667cdb9db8e3340dcd807cdba2c998882c3fd33fc4df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f049d37b6f23ede14e4ea121c466e986d11c4f1221fc331c62fb578b41f6642f\"" Jan 24 00:39:15.685240 containerd[1646]: time="2026-01-24T00:39:15.684539218Z" level=info msg="StartContainer for \"f049d37b6f23ede14e4ea121c466e986d11c4f1221fc331c62fb578b41f6642f\"" Jan 24 00:39:15.690021 containerd[1646]: time="2026-01-24T00:39:15.689999574Z" level=info msg="CreateContainer within sandbox \"8015e92497f076e672c9c96d18d13b9a70d16642f774d4e54668258ec8f5462d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ffe98ae799126e68ba14713a0e8879c27beba1f31fe835466a3584c1f1020b1d\"" Jan 24 00:39:15.690719 containerd[1646]: time="2026-01-24T00:39:15.690704236Z" level=info 
msg="StartContainer for \"ffe98ae799126e68ba14713a0e8879c27beba1f31fe835466a3584c1f1020b1d\""
Jan 24 00:39:15.780569 containerd[1646]: time="2026-01-24T00:39:15.780521231Z" level=info msg="StartContainer for \"afbc8cc00a36b9448022c0a6dc97a20c22253ac7d3ad3bbb4aab66c26061af2e\" returns successfully"
Jan 24 00:39:15.788143 containerd[1646]: time="2026-01-24T00:39:15.788110719Z" level=info msg="StartContainer for \"ffe98ae799126e68ba14713a0e8879c27beba1f31fe835466a3584c1f1020b1d\" returns successfully"
Jan 24 00:39:15.790720 containerd[1646]: time="2026-01-24T00:39:15.790626913Z" level=info msg="StartContainer for \"f049d37b6f23ede14e4ea121c466e986d11c4f1221fc331c62fb578b41f6642f\" returns successfully"
Jan 24 00:39:15.823913 kubelet[2365]: E0124 00:39:15.823155 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.237.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-3213f37a88?timeout=10s\": dial tcp 46.62.237.128:6443: connect: connection refused" interval="1.6s"
Jan 24 00:39:15.977475 kubelet[2365]: I0124 00:39:15.977449 2365 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:16.503330 kubelet[2365]: E0124 00:39:16.503297 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3213f37a88\" not found" node="ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:16.514422 kubelet[2365]: E0124 00:39:16.514391 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3213f37a88\" not found" node="ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:16.514749 kubelet[2365]: E0124 00:39:16.514728 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3213f37a88\" not found" node="ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:16.820906 kubelet[2365]: I0124 00:39:16.819292 2365 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:16.820906 kubelet[2365]: E0124 00:39:16.819330 2365 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-3213f37a88\": node \"ci-4081-3-6-n-3213f37a88\" not found"
Jan 24 00:39:16.877043 kubelet[2365]: E0124 00:39:16.877007 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3213f37a88\" not found"
Jan 24 00:39:16.977923 kubelet[2365]: E0124 00:39:16.977866 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3213f37a88\" not found"
Jan 24 00:39:17.078481 kubelet[2365]: E0124 00:39:17.078176 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3213f37a88\" not found"
Jan 24 00:39:17.179092 kubelet[2365]: E0124 00:39:17.179047 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3213f37a88\" not found"
Jan 24 00:39:17.218682 kubelet[2365]: I0124 00:39:17.218648 2365 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:17.223971 kubelet[2365]: E0124 00:39:17.223937 2365 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-3213f37a88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:17.223971 kubelet[2365]: I0124 00:39:17.223957 2365 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:17.225151 kubelet[2365]: E0124 00:39:17.224913 2365 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-3213f37a88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:17.225151 kubelet[2365]: I0124 00:39:17.224936 2365 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:17.226038 kubelet[2365]: E0124 00:39:17.225991 2365 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-3213f37a88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:17.392219 kubelet[2365]: I0124 00:39:17.392119 2365 apiserver.go:52] "Watching apiserver"
Jan 24 00:39:17.416699 kubelet[2365]: I0124 00:39:17.416656 2365 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 24 00:39:17.516906 kubelet[2365]: I0124 00:39:17.513818 2365 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:17.516906 kubelet[2365]: I0124 00:39:17.514376 2365 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:17.518442 kubelet[2365]: E0124 00:39:17.518381 2365 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-3213f37a88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:17.521089 kubelet[2365]: E0124 00:39:17.521013 2365 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-3213f37a88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:18.516903 kubelet[2365]: I0124 00:39:18.516808 2365 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:19.577715 systemd[1]: Reloading requested from client PID 2641 ('systemctl') (unit session-7.scope)...
Jan 24 00:39:19.577745 systemd[1]: Reloading...
Jan 24 00:39:19.750860 zram_generator::config[2687]: No configuration found.
Jan 24 00:39:19.838423 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:39:19.901341 systemd[1]: Reloading finished in 322 ms.
Jan 24 00:39:19.954545 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:39:19.976581 systemd[1]: kubelet.service: Deactivated successfully.
Jan 24 00:39:19.977283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:39:19.986560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:39:20.121957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:39:20.136072 (kubelet)[2740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 24 00:39:20.206374 kubelet[2740]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:39:20.206374 kubelet[2740]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 24 00:39:20.206374 kubelet[2740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:39:20.206765 kubelet[2740]: I0124 00:39:20.206493 2740 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 24 00:39:20.217751 kubelet[2740]: I0124 00:39:20.216163 2740 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 24 00:39:20.217751 kubelet[2740]: I0124 00:39:20.216199 2740 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 00:39:20.217751 kubelet[2740]: I0124 00:39:20.216651 2740 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 24 00:39:20.219807 kubelet[2740]: I0124 00:39:20.219779 2740 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 24 00:39:20.222893 kubelet[2740]: I0124 00:39:20.222377 2740 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 24 00:39:20.226624 kubelet[2740]: E0124 00:39:20.226581 2740 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 24 00:39:20.226624 kubelet[2740]: I0124 00:39:20.226617 2740 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 24 00:39:20.232996 kubelet[2740]: I0124 00:39:20.232971 2740 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 24 00:39:20.233884 kubelet[2740]: I0124 00:39:20.233801 2740 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 00:39:20.234087 kubelet[2740]: I0124 00:39:20.233883 2740 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-3213f37a88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jan 24 00:39:20.234155 kubelet[2740]: I0124 00:39:20.234100 2740 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 00:39:20.234155 kubelet[2740]: I0124 00:39:20.234114 2740 container_manager_linux.go:304] "Creating device plugin manager"
Jan 24 00:39:20.234188 kubelet[2740]: I0124 00:39:20.234179 2740 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:39:20.234540 kubelet[2740]: I0124 00:39:20.234393 2740 kubelet.go:446] "Attempting to sync node with API server"
Jan 24 00:39:20.234540 kubelet[2740]: I0124 00:39:20.234426 2740 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 00:39:20.234540 kubelet[2740]: I0124 00:39:20.234446 2740 kubelet.go:352] "Adding apiserver pod source"
Jan 24 00:39:20.234540 kubelet[2740]: I0124 00:39:20.234459 2740 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 00:39:20.235336 kubelet[2740]: I0124 00:39:20.235323 2740 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 24 00:39:20.235760 kubelet[2740]: I0124 00:39:20.235749 2740 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 24 00:39:20.236157 kubelet[2740]: I0124 00:39:20.236142 2740 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 24 00:39:20.236494 kubelet[2740]: I0124 00:39:20.236202 2740 server.go:1287] "Started kubelet"
Jan 24 00:39:20.239511 kubelet[2740]: I0124 00:39:20.239488 2740 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 24 00:39:20.241793 kubelet[2740]: I0124 00:39:20.241728 2740 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 24 00:39:20.242159 kubelet[2740]: I0124 00:39:20.242149 2740 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 24 00:39:20.243138 kubelet[2740]: I0124 00:39:20.243128 2740 server.go:479] "Adding debug handlers to kubelet server"
Jan 24 00:39:20.245084 kubelet[2740]: I0124 00:39:20.245074 2740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 24 00:39:20.255893 kubelet[2740]: I0124 00:39:20.255877 2740 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 24 00:39:20.256651 kubelet[2740]: I0124 00:39:20.256643 2740 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 24 00:39:20.256887 kubelet[2740]: E0124 00:39:20.256870 2740 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3213f37a88\" not found"
Jan 24 00:39:20.258599 kubelet[2740]: I0124 00:39:20.258505 2740 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 24 00:39:20.260373 kubelet[2740]: I0124 00:39:20.260164 2740 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 24 00:39:20.260373 kubelet[2740]: I0124 00:39:20.260259 2740 reconciler.go:26] "Reconciler: start to sync state"
Jan 24 00:39:20.263099 kubelet[2740]: I0124 00:39:20.263050 2740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 24 00:39:20.264327 kubelet[2740]: I0124 00:39:20.264117 2740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 24 00:39:20.264327 kubelet[2740]: I0124 00:39:20.264136 2740 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 24 00:39:20.264327 kubelet[2740]: I0124 00:39:20.264151 2740 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 24 00:39:20.264327 kubelet[2740]: I0124 00:39:20.264156 2740 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 24 00:39:20.264327 kubelet[2740]: E0124 00:39:20.264191 2740 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 24 00:39:20.274880 kubelet[2740]: I0124 00:39:20.274625 2740 factory.go:221] Registration of the containerd container factory successfully
Jan 24 00:39:20.274880 kubelet[2740]: I0124 00:39:20.274642 2740 factory.go:221] Registration of the systemd container factory successfully
Jan 24 00:39:20.291259 kubelet[2740]: E0124 00:39:20.290958 2740 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 24 00:39:20.331109 kubelet[2740]: I0124 00:39:20.331061 2740 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 24 00:39:20.331109 kubelet[2740]: I0124 00:39:20.331079 2740 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 24 00:39:20.331109 kubelet[2740]: I0124 00:39:20.331094 2740 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:39:20.331327 kubelet[2740]: I0124 00:39:20.331225 2740 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 24 00:39:20.331327 kubelet[2740]: I0124 00:39:20.331233 2740 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 24 00:39:20.331327 kubelet[2740]: I0124 00:39:20.331248 2740 policy_none.go:49] "None policy: Start"
Jan 24 00:39:20.331327 kubelet[2740]: I0124 00:39:20.331255 2740 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 24 00:39:20.331327 kubelet[2740]: I0124 00:39:20.331274 2740 state_mem.go:35] "Initializing new in-memory state store"
Jan 24 00:39:20.331478 kubelet[2740]: I0124 00:39:20.331345 2740 state_mem.go:75] "Updated machine memory state"
Jan 24 00:39:20.332557 kubelet[2740]: I0124 00:39:20.332518 2740 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 24 00:39:20.332709 kubelet[2740]: I0124 00:39:20.332680 2740 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 24 00:39:20.332767 kubelet[2740]: I0124 00:39:20.332707 2740 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 24 00:39:20.334187 kubelet[2740]: E0124 00:39:20.334156 2740 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 24 00:39:20.335758 kubelet[2740]: I0124 00:39:20.335727 2740 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 24 00:39:20.365385 kubelet[2740]: I0124 00:39:20.365333 2740 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.366323 kubelet[2740]: I0124 00:39:20.366008 2740 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.366323 kubelet[2740]: I0124 00:39:20.366162 2740 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.393796 kubelet[2740]: E0124 00:39:20.393473 2740 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-3213f37a88\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.438422 kubelet[2740]: I0124 00:39:20.438364 2740 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.453091 kubelet[2740]: I0124 00:39:20.453033 2740 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.453273 kubelet[2740]: I0124 00:39:20.453147 2740 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.461842 kubelet[2740]: I0124 00:39:20.461786 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f0eae8e66a55252431479081818f6b7-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-3213f37a88\" (UID: \"9f0eae8e66a55252431479081818f6b7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.461977 kubelet[2740]: I0124 00:39:20.461870 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f0eae8e66a55252431479081818f6b7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-3213f37a88\" (UID: \"9f0eae8e66a55252431479081818f6b7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.461977 kubelet[2740]: I0124 00:39:20.461902 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b8cfa8ea2b4f5d953b28bd53184242f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-3213f37a88\" (UID: \"7b8cfa8ea2b4f5d953b28bd53184242f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.461977 kubelet[2740]: I0124 00:39:20.461930 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b8cfa8ea2b4f5d953b28bd53184242f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-3213f37a88\" (UID: \"7b8cfa8ea2b4f5d953b28bd53184242f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.461977 kubelet[2740]: I0124 00:39:20.461955 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa8a7cfff749e18658bb22ede28a78f4-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-3213f37a88\" (UID: \"fa8a7cfff749e18658bb22ede28a78f4\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.461977 kubelet[2740]: I0124 00:39:20.461978 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f0eae8e66a55252431479081818f6b7-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-3213f37a88\" (UID: \"9f0eae8e66a55252431479081818f6b7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.462183 kubelet[2740]: I0124 00:39:20.462002 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b8cfa8ea2b4f5d953b28bd53184242f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-3213f37a88\" (UID: \"7b8cfa8ea2b4f5d953b28bd53184242f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.462183 kubelet[2740]: I0124 00:39:20.462027 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b8cfa8ea2b4f5d953b28bd53184242f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-3213f37a88\" (UID: \"7b8cfa8ea2b4f5d953b28bd53184242f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.462183 kubelet[2740]: I0124 00:39:20.462053 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b8cfa8ea2b4f5d953b28bd53184242f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-3213f37a88\" (UID: \"7b8cfa8ea2b4f5d953b28bd53184242f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:20.579914 sudo[2774]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 24 00:39:20.580630 sudo[2774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 24 00:39:21.239887 sudo[2774]: pam_unix(sudo:session): session closed for user root
Jan 24 00:39:21.242367 kubelet[2740]: I0124 00:39:21.242223 2740 apiserver.go:52] "Watching apiserver"
Jan 24 00:39:21.260740 kubelet[2740]: I0124 00:39:21.260694 2740 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 24 00:39:21.300860 kubelet[2740]: I0124 00:39:21.300337 2740 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:21.319435 kubelet[2740]: E0124 00:39:21.319418 2740 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-3213f37a88\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88"
Jan 24 00:39:21.325160 kubelet[2740]: I0124 00:39:21.325133 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3213f37a88" podStartSLOduration=1.325124137 podStartE2EDuration="1.325124137s" podCreationTimestamp="2026-01-24 00:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:39:21.324379651 +0000 UTC m=+1.179750128" watchObservedRunningTime="2026-01-24 00:39:21.325124137 +0000 UTC m=+1.180494614"
Jan 24 00:39:21.349714 kubelet[2740]: I0124 00:39:21.349673 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88" podStartSLOduration=1.3496480260000001 podStartE2EDuration="1.349648026s" podCreationTimestamp="2026-01-24 00:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:39:21.337975483 +0000 UTC m=+1.193345960" watchObservedRunningTime="2026-01-24 00:39:21.349648026 +0000 UTC m=+1.205018503"
Jan 24 00:39:22.919031 sudo[1876]: pam_unix(sudo:session): session closed for user root
Jan 24 00:39:23.041299 sshd[1872]: pam_unix(sshd:session): session closed for user core
Jan 24 00:39:23.045502 systemd[1]: sshd@6-46.62.237.128:22-20.161.92.111:48044.service: Deactivated successfully.
Jan 24 00:39:23.052006 systemd[1]: session-7.scope: Deactivated successfully.
Jan 24 00:39:23.054526 systemd-logind[1621]: Session 7 logged out. Waiting for processes to exit.
Jan 24 00:39:23.056271 systemd-logind[1621]: Removed session 7.
Jan 24 00:39:25.260309 kubelet[2740]: I0124 00:39:25.260188 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3213f37a88" podStartSLOduration=7.26013995 podStartE2EDuration="7.26013995s" podCreationTimestamp="2026-01-24 00:39:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:39:21.350019994 +0000 UTC m=+1.205390471" watchObservedRunningTime="2026-01-24 00:39:25.26013995 +0000 UTC m=+5.115510467"
Jan 24 00:39:25.511088 kubelet[2740]: I0124 00:39:25.510891 2740 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 24 00:39:25.511877 containerd[1646]: time="2026-01-24T00:39:25.511755035Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 24 00:39:25.512765 kubelet[2740]: I0124 00:39:25.512333 2740 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 24 00:39:26.081210 kubelet[2740]: I0124 00:39:26.080690 2740 status_manager.go:890] "Failed to get status for pod" podUID="16c9cb02-964e-466a-8c70-67e5f0795cb9" pod="kube-system/cilium-dhhf4" err="pods \"cilium-dhhf4\" is forbidden: User \"system:node:ci-4081-3-6-n-3213f37a88\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-3213f37a88' and this object"
Jan 24 00:39:26.101143 kubelet[2740]: I0124 00:39:26.098097 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-hostproc\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.101396 kubelet[2740]: I0124 00:39:26.101366 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16c9cb02-964e-466a-8c70-67e5f0795cb9-cilium-config-path\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.103857 kubelet[2740]: I0124 00:39:26.101686 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfe98c2f-1d08-4173-af9a-c03c212890c8-xtables-lock\") pod \"kube-proxy-h28cj\" (UID: \"bfe98c2f-1d08-4173-af9a-c03c212890c8\") " pod="kube-system/kube-proxy-h28cj"
Jan 24 00:39:26.104490 kubelet[2740]: I0124 00:39:26.104060 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-bpf-maps\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.104490 kubelet[2740]: I0124 00:39:26.104106 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qpg2\" (UniqueName: \"kubernetes.io/projected/16c9cb02-964e-466a-8c70-67e5f0795cb9-kube-api-access-6qpg2\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.104490 kubelet[2740]: I0124 00:39:26.104153 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxd7c\" (UniqueName: \"kubernetes.io/projected/bfe98c2f-1d08-4173-af9a-c03c212890c8-kube-api-access-jxd7c\") pod \"kube-proxy-h28cj\" (UID: \"bfe98c2f-1d08-4173-af9a-c03c212890c8\") " pod="kube-system/kube-proxy-h28cj"
Jan 24 00:39:26.104490 kubelet[2740]: I0124 00:39:26.104206 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bfe98c2f-1d08-4173-af9a-c03c212890c8-kube-proxy\") pod \"kube-proxy-h28cj\" (UID: \"bfe98c2f-1d08-4173-af9a-c03c212890c8\") " pod="kube-system/kube-proxy-h28cj"
Jan 24 00:39:26.104490 kubelet[2740]: I0124 00:39:26.104230 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfe98c2f-1d08-4173-af9a-c03c212890c8-lib-modules\") pod \"kube-proxy-h28cj\" (UID: \"bfe98c2f-1d08-4173-af9a-c03c212890c8\") " pod="kube-system/kube-proxy-h28cj"
Jan 24 00:39:26.104717 kubelet[2740]: I0124 00:39:26.104253 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-cilium-cgroup\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.104717 kubelet[2740]: I0124 00:39:26.104274 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-etc-cni-netd\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.104717 kubelet[2740]: I0124 00:39:26.104296 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-cilium-run\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.104717 kubelet[2740]: I0124 00:39:26.104318 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16c9cb02-964e-466a-8c70-67e5f0795cb9-clustermesh-secrets\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.104717 kubelet[2740]: I0124 00:39:26.104340 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16c9cb02-964e-466a-8c70-67e5f0795cb9-hubble-tls\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.104717 kubelet[2740]: I0124 00:39:26.104366 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-lib-modules\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.104983 kubelet[2740]: I0124 00:39:26.104389 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-host-proc-sys-net\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.104983 kubelet[2740]: I0124 00:39:26.104412 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-host-proc-sys-kernel\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.104983 kubelet[2740]: I0124 00:39:26.104518 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-cni-path\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.104983 kubelet[2740]: I0124 00:39:26.104567 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-xtables-lock\") pod \"cilium-dhhf4\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " pod="kube-system/cilium-dhhf4"
Jan 24 00:39:26.221872 kubelet[2740]: E0124 00:39:26.221799 2740 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 24 00:39:26.221872 kubelet[2740]: E0124 00:39:26.221845 2740 projected.go:194] Error preparing data for projected volume kube-api-access-jxd7c for pod kube-system/kube-proxy-h28cj: configmap "kube-root-ca.crt" not found
Jan 24 00:39:26.222719 kubelet[2740]: E0124 00:39:26.221906 2740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bfe98c2f-1d08-4173-af9a-c03c212890c8-kube-api-access-jxd7c podName:bfe98c2f-1d08-4173-af9a-c03c212890c8 nodeName:}" failed. No retries permitted until 2026-01-24 00:39:26.72188835 +0000 UTC m=+6.577258837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jxd7c" (UniqueName: "kubernetes.io/projected/bfe98c2f-1d08-4173-af9a-c03c212890c8-kube-api-access-jxd7c") pod "kube-proxy-h28cj" (UID: "bfe98c2f-1d08-4173-af9a-c03c212890c8") : configmap "kube-root-ca.crt" not found
Jan 24 00:39:26.232089 kubelet[2740]: E0124 00:39:26.232050 2740 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 24 00:39:26.232089 kubelet[2740]: E0124 00:39:26.232083 2740 projected.go:194] Error preparing data for projected volume kube-api-access-6qpg2 for pod kube-system/cilium-dhhf4: configmap "kube-root-ca.crt" not found
Jan 24 00:39:26.232222 kubelet[2740]: E0124 00:39:26.232118 2740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/16c9cb02-964e-466a-8c70-67e5f0795cb9-kube-api-access-6qpg2 podName:16c9cb02-964e-466a-8c70-67e5f0795cb9 nodeName:}" failed. No retries permitted until 2026-01-24 00:39:26.732108896 +0000 UTC m=+6.587479383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6qpg2" (UniqueName: "kubernetes.io/projected/16c9cb02-964e-466a-8c70-67e5f0795cb9-kube-api-access-6qpg2") pod "cilium-dhhf4" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9") : configmap "kube-root-ca.crt" not found
Jan 24 00:39:26.238801 update_engine[1626]: I20260124 00:39:26.238191 1626 update_attempter.cc:509] Updating boot flags...
Jan 24 00:39:26.333584 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2821)
Jan 24 00:39:26.400225 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2820)
Jan 24 00:39:26.710740 kubelet[2740]: I0124 00:39:26.710577 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c739d745-779f-46be-b64f-acedcdaa54ca-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4vjkg\" (UID: \"c739d745-779f-46be-b64f-acedcdaa54ca\") " pod="kube-system/cilium-operator-6c4d7847fc-4vjkg"
Jan 24 00:39:26.711451 kubelet[2740]: I0124 00:39:26.711404 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n77hl\" (UniqueName: \"kubernetes.io/projected/c739d745-779f-46be-b64f-acedcdaa54ca-kube-api-access-n77hl\") pod \"cilium-operator-6c4d7847fc-4vjkg\" (UID: \"c739d745-779f-46be-b64f-acedcdaa54ca\") " pod="kube-system/cilium-operator-6c4d7847fc-4vjkg"
Jan 24 00:39:26.928564 containerd[1646]: time="2026-01-24T00:39:26.928498205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4vjkg,Uid:c739d745-779f-46be-b64f-acedcdaa54ca,Namespace:kube-system,Attempt:0,}"
Jan 24 00:39:26.964907 containerd[1646]: time="2026-01-24T00:39:26.964028212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:39:26.964907 containerd[1646]: time="2026-01-24T00:39:26.964122715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:39:26.964907 containerd[1646]: time="2026-01-24T00:39:26.964146911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:39:26.964907 containerd[1646]: time="2026-01-24T00:39:26.964295377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:39:26.989572 containerd[1646]: time="2026-01-24T00:39:26.988984626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dhhf4,Uid:16c9cb02-964e-466a-8c70-67e5f0795cb9,Namespace:kube-system,Attempt:0,}"
Jan 24 00:39:26.999757 containerd[1646]: time="2026-01-24T00:39:26.999290043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h28cj,Uid:bfe98c2f-1d08-4173-af9a-c03c212890c8,Namespace:kube-system,Attempt:0,}"
Jan 24 00:39:27.040538 containerd[1646]: time="2026-01-24T00:39:27.040446940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:39:27.040893 containerd[1646]: time="2026-01-24T00:39:27.040785759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:39:27.041041 containerd[1646]: time="2026-01-24T00:39:27.040992136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:39:27.041488 containerd[1646]: time="2026-01-24T00:39:27.041428027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:39:27.053135 containerd[1646]: time="2026-01-24T00:39:27.052155228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:39:27.053477 containerd[1646]: time="2026-01-24T00:39:27.053315978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:27.053477 containerd[1646]: time="2026-01-24T00:39:27.053328461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:27.053988 containerd[1646]: time="2026-01-24T00:39:27.053869156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:27.096154 containerd[1646]: time="2026-01-24T00:39:27.096078739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h28cj,Uid:bfe98c2f-1d08-4173-af9a-c03c212890c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d74c9727210c40efc2a1a9d79c63a9c97957d3ccd5f0ad597014687f3b899935\"" Jan 24 00:39:27.099244 containerd[1646]: time="2026-01-24T00:39:27.099110183Z" level=info msg="CreateContainer within sandbox \"d74c9727210c40efc2a1a9d79c63a9c97957d3ccd5f0ad597014687f3b899935\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:39:27.116192 containerd[1646]: time="2026-01-24T00:39:27.116084575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dhhf4,Uid:16c9cb02-964e-466a-8c70-67e5f0795cb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\"" Jan 24 00:39:27.117620 containerd[1646]: time="2026-01-24T00:39:27.117506496Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 24 00:39:27.124725 containerd[1646]: time="2026-01-24T00:39:27.124669169Z" level=info msg="CreateContainer within sandbox \"d74c9727210c40efc2a1a9d79c63a9c97957d3ccd5f0ad597014687f3b899935\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7748a2268f7626cfa2c95ebff1b91f90dc189aecc37642f9cea86df229cb63d9\"" Jan 24 00:39:27.127258 containerd[1646]: time="2026-01-24T00:39:27.127243097Z" level=info 
msg="StartContainer for \"7748a2268f7626cfa2c95ebff1b91f90dc189aecc37642f9cea86df229cb63d9\"" Jan 24 00:39:27.134087 containerd[1646]: time="2026-01-24T00:39:27.134050278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4vjkg,Uid:c739d745-779f-46be-b64f-acedcdaa54ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\"" Jan 24 00:39:27.183330 containerd[1646]: time="2026-01-24T00:39:27.183300905Z" level=info msg="StartContainer for \"7748a2268f7626cfa2c95ebff1b91f90dc189aecc37642f9cea86df229cb63d9\" returns successfully" Jan 24 00:39:27.339635 kubelet[2740]: I0124 00:39:27.339473 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h28cj" podStartSLOduration=1.33946086 podStartE2EDuration="1.33946086s" podCreationTimestamp="2026-01-24 00:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:39:27.338775991 +0000 UTC m=+7.194146508" watchObservedRunningTime="2026-01-24 00:39:27.33946086 +0000 UTC m=+7.194831347" Jan 24 00:39:27.857400 systemd[1]: Started sshd@7-46.62.237.128:22-112.78.1.94:32970.service - OpenSSH per-connection server daemon (112.78.1.94:32970). Jan 24 00:39:29.418435 sshd[3124]: Received disconnect from 112.78.1.94 port 32970:11: Bye Bye [preauth] Jan 24 00:39:29.418435 sshd[3124]: Disconnected from authenticating user root 112.78.1.94 port 32970 [preauth] Jan 24 00:39:29.420939 systemd[1]: sshd@7-46.62.237.128:22-112.78.1.94:32970.service: Deactivated successfully. Jan 24 00:39:31.371323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622264128.mount: Deactivated successfully. 
Jan 24 00:39:32.657144 containerd[1646]: time="2026-01-24T00:39:32.657097167Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:39:32.658122 containerd[1646]: time="2026-01-24T00:39:32.658019736Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 24 00:39:32.659253 containerd[1646]: time="2026-01-24T00:39:32.659037211Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:39:32.660240 containerd[1646]: time="2026-01-24T00:39:32.660211746Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.542684226s" Jan 24 00:39:32.660280 containerd[1646]: time="2026-01-24T00:39:32.660243972Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 24 00:39:32.662046 containerd[1646]: time="2026-01-24T00:39:32.661985920Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 24 00:39:32.663508 containerd[1646]: time="2026-01-24T00:39:32.663483414Z" level=info msg="CreateContainer within sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 00:39:32.675892 containerd[1646]: time="2026-01-24T00:39:32.675845392Z" level=info msg="CreateContainer within sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979\"" Jan 24 00:39:32.676492 containerd[1646]: time="2026-01-24T00:39:32.676464044Z" level=info msg="StartContainer for \"bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979\"" Jan 24 00:39:32.726684 containerd[1646]: time="2026-01-24T00:39:32.726657702Z" level=info msg="StartContainer for \"bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979\" returns successfully" Jan 24 00:39:32.918481 containerd[1646]: time="2026-01-24T00:39:32.918170302Z" level=info msg="shim disconnected" id=bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979 namespace=k8s.io Jan 24 00:39:32.918481 containerd[1646]: time="2026-01-24T00:39:32.918239194Z" level=warning msg="cleaning up after shim disconnected" id=bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979 namespace=k8s.io Jan 24 00:39:32.918481 containerd[1646]: time="2026-01-24T00:39:32.918254947Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:39:33.358772 containerd[1646]: time="2026-01-24T00:39:33.358718393Z" level=info msg="CreateContainer within sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 00:39:33.376271 containerd[1646]: time="2026-01-24T00:39:33.376194534Z" level=info msg="CreateContainer within sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc\"" Jan 24 00:39:33.377276 containerd[1646]: 
time="2026-01-24T00:39:33.377223173Z" level=info msg="StartContainer for \"db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc\"" Jan 24 00:39:33.483917 containerd[1646]: time="2026-01-24T00:39:33.483754734Z" level=info msg="StartContainer for \"db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc\" returns successfully" Jan 24 00:39:33.506547 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:39:33.507142 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:39:33.507269 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:39:33.525863 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:39:33.570219 containerd[1646]: time="2026-01-24T00:39:33.570149661Z" level=info msg="shim disconnected" id=db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc namespace=k8s.io Jan 24 00:39:33.570864 containerd[1646]: time="2026-01-24T00:39:33.570534238Z" level=warning msg="cleaning up after shim disconnected" id=db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc namespace=k8s.io Jan 24 00:39:33.570864 containerd[1646]: time="2026-01-24T00:39:33.570555222Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:39:33.577321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:39:33.676242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979-rootfs.mount: Deactivated successfully. Jan 24 00:39:34.374240 containerd[1646]: time="2026-01-24T00:39:34.374123096Z" level=info msg="CreateContainer within sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 00:39:34.429680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount840814552.mount: Deactivated successfully. 
Jan 24 00:39:34.460751 containerd[1646]: time="2026-01-24T00:39:34.460708425Z" level=info msg="CreateContainer within sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa\"" Jan 24 00:39:34.462419 containerd[1646]: time="2026-01-24T00:39:34.461444009Z" level=info msg="StartContainer for \"5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa\"" Jan 24 00:39:34.557313 containerd[1646]: time="2026-01-24T00:39:34.556907259Z" level=info msg="StartContainer for \"5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa\" returns successfully" Jan 24 00:39:34.600317 containerd[1646]: time="2026-01-24T00:39:34.600227868Z" level=info msg="shim disconnected" id=5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa namespace=k8s.io Jan 24 00:39:34.600317 containerd[1646]: time="2026-01-24T00:39:34.600271786Z" level=warning msg="cleaning up after shim disconnected" id=5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa namespace=k8s.io Jan 24 00:39:34.600317 containerd[1646]: time="2026-01-24T00:39:34.600278877Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:39:34.930251 containerd[1646]: time="2026-01-24T00:39:34.930202756Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:39:34.931453 containerd[1646]: time="2026-01-24T00:39:34.931191301Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 24 00:39:34.933859 containerd[1646]: time="2026-01-24T00:39:34.932401713Z" level=info msg="ImageCreate event 
name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:39:34.934350 containerd[1646]: time="2026-01-24T00:39:34.934312742Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.272298748s" Jan 24 00:39:34.934442 containerd[1646]: time="2026-01-24T00:39:34.934421230Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 24 00:39:34.936400 containerd[1646]: time="2026-01-24T00:39:34.936367465Z" level=info msg="CreateContainer within sandbox \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 24 00:39:34.950224 containerd[1646]: time="2026-01-24T00:39:34.950170218Z" level=info msg="CreateContainer within sandbox \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\"" Jan 24 00:39:34.951004 containerd[1646]: time="2026-01-24T00:39:34.950939117Z" level=info msg="StartContainer for \"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\"" Jan 24 00:39:35.021659 containerd[1646]: time="2026-01-24T00:39:35.021600440Z" level=info msg="StartContainer for \"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\" returns successfully" Jan 24 00:39:35.394284 containerd[1646]: 
time="2026-01-24T00:39:35.394209347Z" level=info msg="CreateContainer within sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 00:39:35.419118 containerd[1646]: time="2026-01-24T00:39:35.419043364Z" level=info msg="CreateContainer within sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d\"" Jan 24 00:39:35.420894 containerd[1646]: time="2026-01-24T00:39:35.420119726Z" level=info msg="StartContainer for \"5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d\"" Jan 24 00:39:35.506310 kubelet[2740]: I0124 00:39:35.506168 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4vjkg" podStartSLOduration=1.708803116 podStartE2EDuration="9.506153949s" podCreationTimestamp="2026-01-24 00:39:26 +0000 UTC" firstStartedPulling="2026-01-24 00:39:27.137677529 +0000 UTC m=+6.993048006" lastFinishedPulling="2026-01-24 00:39:34.935028332 +0000 UTC m=+14.790398839" observedRunningTime="2026-01-24 00:39:35.50609977 +0000 UTC m=+15.361470247" watchObservedRunningTime="2026-01-24 00:39:35.506153949 +0000 UTC m=+15.361524436" Jan 24 00:39:35.558842 containerd[1646]: time="2026-01-24T00:39:35.555971595Z" level=info msg="StartContainer for \"5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d\" returns successfully" Jan 24 00:39:35.630598 containerd[1646]: time="2026-01-24T00:39:35.630416756Z" level=info msg="shim disconnected" id=5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d namespace=k8s.io Jan 24 00:39:35.630598 containerd[1646]: time="2026-01-24T00:39:35.630461444Z" level=warning msg="cleaning up after shim disconnected" id=5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d namespace=k8s.io Jan 
24 00:39:35.630598 containerd[1646]: time="2026-01-24T00:39:35.630468295Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:39:36.399724 containerd[1646]: time="2026-01-24T00:39:36.399662048Z" level=info msg="CreateContainer within sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 24 00:39:36.438132 containerd[1646]: time="2026-01-24T00:39:36.436223012Z" level=info msg="CreateContainer within sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\"" Jan 24 00:39:36.439333 containerd[1646]: time="2026-01-24T00:39:36.439266048Z" level=info msg="StartContainer for \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\"" Jan 24 00:39:36.441230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422522043.mount: Deactivated successfully. 
Jan 24 00:39:36.558891 containerd[1646]: time="2026-01-24T00:39:36.558693233Z" level=info msg="StartContainer for \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\" returns successfully" Jan 24 00:39:36.634020 kubelet[2740]: I0124 00:39:36.633417 2740 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:39:36.884796 kubelet[2740]: I0124 00:39:36.884741 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df8c8f33-bb76-4f87-8915-165109aaa0f4-config-volume\") pod \"coredns-668d6bf9bc-5fdjz\" (UID: \"df8c8f33-bb76-4f87-8915-165109aaa0f4\") " pod="kube-system/coredns-668d6bf9bc-5fdjz" Jan 24 00:39:36.884796 kubelet[2740]: I0124 00:39:36.884793 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fd79a09-1d0e-4dfe-854d-10266f0a0ea8-config-volume\") pod \"coredns-668d6bf9bc-f4z8q\" (UID: \"3fd79a09-1d0e-4dfe-854d-10266f0a0ea8\") " pod="kube-system/coredns-668d6bf9bc-f4z8q" Jan 24 00:39:36.884796 kubelet[2740]: I0124 00:39:36.884810 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6vhw\" (UniqueName: \"kubernetes.io/projected/df8c8f33-bb76-4f87-8915-165109aaa0f4-kube-api-access-g6vhw\") pod \"coredns-668d6bf9bc-5fdjz\" (UID: \"df8c8f33-bb76-4f87-8915-165109aaa0f4\") " pod="kube-system/coredns-668d6bf9bc-5fdjz" Jan 24 00:39:36.885162 kubelet[2740]: I0124 00:39:36.884834 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbp5r\" (UniqueName: \"kubernetes.io/projected/3fd79a09-1d0e-4dfe-854d-10266f0a0ea8-kube-api-access-nbp5r\") pod \"coredns-668d6bf9bc-f4z8q\" (UID: \"3fd79a09-1d0e-4dfe-854d-10266f0a0ea8\") " pod="kube-system/coredns-668d6bf9bc-f4z8q" Jan 24 00:39:37.052840 
containerd[1646]: time="2026-01-24T00:39:37.050542847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fdjz,Uid:df8c8f33-bb76-4f87-8915-165109aaa0f4,Namespace:kube-system,Attempt:0,}" Jan 24 00:39:37.053581 containerd[1646]: time="2026-01-24T00:39:37.053555008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f4z8q,Uid:3fd79a09-1d0e-4dfe-854d-10266f0a0ea8,Namespace:kube-system,Attempt:0,}" Jan 24 00:39:37.431856 kubelet[2740]: I0124 00:39:37.431695 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dhhf4" podStartSLOduration=5.887620536 podStartE2EDuration="11.431652121s" podCreationTimestamp="2026-01-24 00:39:26 +0000 UTC" firstStartedPulling="2026-01-24 00:39:27.117087867 +0000 UTC m=+6.972458344" lastFinishedPulling="2026-01-24 00:39:32.661119452 +0000 UTC m=+12.516489929" observedRunningTime="2026-01-24 00:39:37.430076369 +0000 UTC m=+17.285446886" watchObservedRunningTime="2026-01-24 00:39:37.431652121 +0000 UTC m=+17.287022648" Jan 24 00:39:38.694186 systemd-networkd[1265]: cilium_host: Link UP Jan 24 00:39:38.694533 systemd-networkd[1265]: cilium_net: Link UP Jan 24 00:39:38.695986 systemd-networkd[1265]: cilium_net: Gained carrier Jan 24 00:39:38.698353 systemd-networkd[1265]: cilium_host: Gained carrier Jan 24 00:39:38.699600 systemd-networkd[1265]: cilium_net: Gained IPv6LL Jan 24 00:39:38.700388 systemd-networkd[1265]: cilium_host: Gained IPv6LL Jan 24 00:39:38.866952 systemd-networkd[1265]: cilium_vxlan: Link UP Jan 24 00:39:38.866964 systemd-networkd[1265]: cilium_vxlan: Gained carrier Jan 24 00:39:39.130892 kernel: NET: Registered PF_ALG protocol family Jan 24 00:39:39.950604 systemd-networkd[1265]: lxc_health: Link UP Jan 24 00:39:39.956351 systemd-networkd[1265]: lxc_health: Gained carrier Jan 24 00:39:40.081381 systemd-networkd[1265]: cilium_vxlan: Gained IPv6LL Jan 24 00:39:40.115323 systemd-networkd[1265]: lxcc11bc0643257: Link UP Jan 24 
00:39:40.127968 kernel: eth0: renamed from tmpd72e1 Jan 24 00:39:40.148663 systemd-networkd[1265]: lxcc11bc0643257: Gained carrier Jan 24 00:39:40.151121 systemd-networkd[1265]: lxcd254b7154f30: Link UP Jan 24 00:39:40.170011 kernel: eth0: renamed from tmpef94d Jan 24 00:39:40.178219 systemd-networkd[1265]: lxcd254b7154f30: Gained carrier Jan 24 00:39:40.977937 systemd-networkd[1265]: lxc_health: Gained IPv6LL Jan 24 00:39:42.001085 systemd-networkd[1265]: lxcd254b7154f30: Gained IPv6LL Jan 24 00:39:42.194115 systemd-networkd[1265]: lxcc11bc0643257: Gained IPv6LL Jan 24 00:39:42.730841 containerd[1646]: time="2026-01-24T00:39:42.721309129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:42.730841 containerd[1646]: time="2026-01-24T00:39:42.721529535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:42.730841 containerd[1646]: time="2026-01-24T00:39:42.721555208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:42.730841 containerd[1646]: time="2026-01-24T00:39:42.721749152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:42.742314 containerd[1646]: time="2026-01-24T00:39:42.732287685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:42.742314 containerd[1646]: time="2026-01-24T00:39:42.732333720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:42.742314 containerd[1646]: time="2026-01-24T00:39:42.732365534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:42.742314 containerd[1646]: time="2026-01-24T00:39:42.732449854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:42.768213 systemd[1]: run-containerd-runc-k8s.io-d72e109a6d921c32a0446adc374837261a9e13e578d5608fef628a4d14f1da33-runc.xw2wz3.mount: Deactivated successfully. Jan 24 00:39:42.814139 containerd[1646]: time="2026-01-24T00:39:42.814110223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fdjz,Uid:df8c8f33-bb76-4f87-8915-165109aaa0f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef94dc180469b390f94ff74159378d11b1ece7cdedd06d8675c3859809092d37\"" Jan 24 00:39:42.817676 containerd[1646]: time="2026-01-24T00:39:42.817589813Z" level=info msg="CreateContainer within sandbox \"ef94dc180469b390f94ff74159378d11b1ece7cdedd06d8675c3859809092d37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:39:42.825862 containerd[1646]: time="2026-01-24T00:39:42.825770831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f4z8q,Uid:3fd79a09-1d0e-4dfe-854d-10266f0a0ea8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d72e109a6d921c32a0446adc374837261a9e13e578d5608fef628a4d14f1da33\"" Jan 24 00:39:42.830269 containerd[1646]: time="2026-01-24T00:39:42.830173173Z" level=info msg="CreateContainer within sandbox \"d72e109a6d921c32a0446adc374837261a9e13e578d5608fef628a4d14f1da33\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:39:42.839063 containerd[1646]: time="2026-01-24T00:39:42.839028751Z" level=info msg="CreateContainer within sandbox \"ef94dc180469b390f94ff74159378d11b1ece7cdedd06d8675c3859809092d37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ecff37006a9ee0e9133bc37740a2cafcd879350999d819a77464228897f3292b\"" Jan 24 00:39:42.839535 containerd[1646]: 
time="2026-01-24T00:39:42.839516831Z" level=info msg="StartContainer for \"ecff37006a9ee0e9133bc37740a2cafcd879350999d819a77464228897f3292b\"" Jan 24 00:39:42.849103 containerd[1646]: time="2026-01-24T00:39:42.849042051Z" level=info msg="CreateContainer within sandbox \"d72e109a6d921c32a0446adc374837261a9e13e578d5608fef628a4d14f1da33\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6342382d2494ac9acf23937fc5d6bf566a805f6b23771cda86026592709f17f\"" Jan 24 00:39:42.849903 containerd[1646]: time="2026-01-24T00:39:42.849890693Z" level=info msg="StartContainer for \"b6342382d2494ac9acf23937fc5d6bf566a805f6b23771cda86026592709f17f\"" Jan 24 00:39:42.906678 containerd[1646]: time="2026-01-24T00:39:42.906640544Z" level=info msg="StartContainer for \"ecff37006a9ee0e9133bc37740a2cafcd879350999d819a77464228897f3292b\" returns successfully" Jan 24 00:39:42.909488 containerd[1646]: time="2026-01-24T00:39:42.909453905Z" level=info msg="StartContainer for \"b6342382d2494ac9acf23937fc5d6bf566a805f6b23771cda86026592709f17f\" returns successfully" Jan 24 00:39:43.450486 kubelet[2740]: I0124 00:39:43.450416 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f4z8q" podStartSLOduration=17.450393193 podStartE2EDuration="17.450393193s" podCreationTimestamp="2026-01-24 00:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:39:43.435269012 +0000 UTC m=+23.290639509" watchObservedRunningTime="2026-01-24 00:39:43.450393193 +0000 UTC m=+23.305763720" Jan 24 00:39:43.472012 kubelet[2740]: I0124 00:39:43.471940 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5fdjz" podStartSLOduration=17.47191655 podStartE2EDuration="17.47191655s" podCreationTimestamp="2026-01-24 00:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:39:43.452312157 +0000 UTC m=+23.307682704" watchObservedRunningTime="2026-01-24 00:39:43.47191655 +0000 UTC m=+23.327287077" Jan 24 00:39:49.620182 kubelet[2740]: I0124 00:39:49.619738 2740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:40:19.978463 systemd[1]: Started sshd@8-46.62.237.128:22-112.78.1.94:36226.service - OpenSSH per-connection server daemon (112.78.1.94:36226). Jan 24 00:40:22.530730 sshd[4134]: Invalid user elsearch from 112.78.1.94 port 36226 Jan 24 00:40:22.778310 sshd[4134]: Received disconnect from 112.78.1.94 port 36226:11: Bye Bye [preauth] Jan 24 00:40:22.778310 sshd[4134]: Disconnected from invalid user elsearch 112.78.1.94 port 36226 [preauth] Jan 24 00:40:22.783662 systemd[1]: sshd@8-46.62.237.128:22-112.78.1.94:36226.service: Deactivated successfully. Jan 24 00:40:52.989612 systemd[1]: Started sshd@9-46.62.237.128:22-20.161.92.111:41860.service - OpenSSH per-connection server daemon (20.161.92.111:41860). Jan 24 00:40:53.764726 sshd[4144]: Accepted publickey for core from 20.161.92.111 port 41860 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:40:53.767706 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:40:53.779132 systemd-logind[1621]: New session 8 of user core. Jan 24 00:40:53.788362 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:40:54.430045 sshd[4144]: pam_unix(sshd:session): session closed for user core Jan 24 00:40:54.437479 systemd[1]: sshd@9-46.62.237.128:22-20.161.92.111:41860.service: Deactivated successfully. Jan 24 00:40:54.441740 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:40:54.443117 systemd-logind[1621]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:40:54.444246 systemd-logind[1621]: Removed session 8. 
Jan 24 00:40:59.560323 systemd[1]: Started sshd@10-46.62.237.128:22-20.161.92.111:41868.service - OpenSSH per-connection server daemon (20.161.92.111:41868). Jan 24 00:41:00.338729 sshd[4164]: Accepted publickey for core from 20.161.92.111 port 41868 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:00.341051 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:00.347910 systemd-logind[1621]: New session 9 of user core. Jan 24 00:41:00.353911 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:41:00.975407 sshd[4164]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:00.985423 systemd[1]: sshd@10-46.62.237.128:22-20.161.92.111:41868.service: Deactivated successfully. Jan 24 00:41:00.992198 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:41:00.994296 systemd-logind[1621]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:41:00.996773 systemd-logind[1621]: Removed session 9. Jan 24 00:41:06.105640 systemd[1]: Started sshd@11-46.62.237.128:22-20.161.92.111:49954.service - OpenSSH per-connection server daemon (20.161.92.111:49954). Jan 24 00:41:06.881401 sshd[4179]: Accepted publickey for core from 20.161.92.111 port 49954 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:06.884513 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:06.893521 systemd-logind[1621]: New session 10 of user core. Jan 24 00:41:06.901402 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:41:07.540382 sshd[4179]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:07.546546 systemd[1]: sshd@11-46.62.237.128:22-20.161.92.111:49954.service: Deactivated successfully. Jan 24 00:41:07.555695 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:41:07.557680 systemd-logind[1621]: Session 10 logged out. Waiting for processes to exit. 
Jan 24 00:41:07.559634 systemd-logind[1621]: Removed session 10. Jan 24 00:41:07.670480 systemd[1]: Started sshd@12-46.62.237.128:22-20.161.92.111:49964.service - OpenSSH per-connection server daemon (20.161.92.111:49964). Jan 24 00:41:08.453282 sshd[4195]: Accepted publickey for core from 20.161.92.111 port 49964 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:08.456221 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:08.467254 systemd-logind[1621]: New session 11 of user core. Jan 24 00:41:08.475559 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:41:09.164133 sshd[4195]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:09.171703 systemd[1]: sshd@12-46.62.237.128:22-20.161.92.111:49964.service: Deactivated successfully. Jan 24 00:41:09.182098 systemd-logind[1621]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:41:09.183400 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:41:09.186933 systemd-logind[1621]: Removed session 11. Jan 24 00:41:09.293526 systemd[1]: Started sshd@13-46.62.237.128:22-20.161.92.111:49980.service - OpenSSH per-connection server daemon (20.161.92.111:49980). Jan 24 00:41:10.074630 sshd[4207]: Accepted publickey for core from 20.161.92.111 port 49980 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:10.077293 sshd[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:10.084936 systemd-logind[1621]: New session 12 of user core. Jan 24 00:41:10.094458 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:41:10.703556 sshd[4207]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:10.708319 systemd[1]: sshd@13-46.62.237.128:22-20.161.92.111:49980.service: Deactivated successfully. Jan 24 00:41:10.716900 systemd[1]: session-12.scope: Deactivated successfully. 
Jan 24 00:41:10.719426 systemd-logind[1621]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:41:10.721863 systemd-logind[1621]: Removed session 12. Jan 24 00:41:11.258727 systemd[1]: Started sshd@14-46.62.237.128:22-112.78.1.94:53506.service - OpenSSH per-connection server daemon (112.78.1.94:53506). Jan 24 00:41:12.577029 sshd[4221]: Invalid user openhab from 112.78.1.94 port 53506 Jan 24 00:41:12.823418 sshd[4221]: Received disconnect from 112.78.1.94 port 53506:11: Bye Bye [preauth] Jan 24 00:41:12.823418 sshd[4221]: Disconnected from invalid user openhab 112.78.1.94 port 53506 [preauth] Jan 24 00:41:12.827635 systemd[1]: sshd@14-46.62.237.128:22-112.78.1.94:53506.service: Deactivated successfully. Jan 24 00:41:15.835225 systemd[1]: Started sshd@15-46.62.237.128:22-20.161.92.111:38318.service - OpenSSH per-connection server daemon (20.161.92.111:38318). Jan 24 00:41:16.610483 sshd[4226]: Accepted publickey for core from 20.161.92.111 port 38318 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:16.613375 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:16.622556 systemd-logind[1621]: New session 13 of user core. Jan 24 00:41:16.630338 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:41:17.217806 sshd[4226]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:17.221207 systemd[1]: sshd@15-46.62.237.128:22-20.161.92.111:38318.service: Deactivated successfully. Jan 24 00:41:17.226904 systemd-logind[1621]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:41:17.227980 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:41:17.230294 systemd-logind[1621]: Removed session 13. Jan 24 00:41:22.352402 systemd[1]: Started sshd@16-46.62.237.128:22-20.161.92.111:38320.service - OpenSSH per-connection server daemon (20.161.92.111:38320). 
Jan 24 00:41:23.109645 sshd[4242]: Accepted publickey for core from 20.161.92.111 port 38320 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:23.112423 sshd[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:23.120448 systemd-logind[1621]: New session 14 of user core. Jan 24 00:41:23.127464 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:41:23.744882 sshd[4242]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:23.751134 systemd-logind[1621]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:41:23.751291 systemd[1]: sshd@16-46.62.237.128:22-20.161.92.111:38320.service: Deactivated successfully. Jan 24 00:41:23.754773 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:41:23.756626 systemd-logind[1621]: Removed session 14. Jan 24 00:41:23.873777 systemd[1]: Started sshd@17-46.62.237.128:22-20.161.92.111:51012.service - OpenSSH per-connection server daemon (20.161.92.111:51012). Jan 24 00:41:24.653258 sshd[4256]: Accepted publickey for core from 20.161.92.111 port 51012 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:24.656025 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:24.665325 systemd-logind[1621]: New session 15 of user core. Jan 24 00:41:24.676329 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:41:25.334742 sshd[4256]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:25.341286 systemd[1]: sshd@17-46.62.237.128:22-20.161.92.111:51012.service: Deactivated successfully. Jan 24 00:41:25.342124 systemd-logind[1621]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:41:25.345101 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:41:25.346211 systemd-logind[1621]: Removed session 15. 
Jan 24 00:41:25.470297 systemd[1]: Started sshd@18-46.62.237.128:22-20.161.92.111:51018.service - OpenSSH per-connection server daemon (20.161.92.111:51018). Jan 24 00:41:26.237676 sshd[4268]: Accepted publickey for core from 20.161.92.111 port 51018 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:26.240631 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:26.249115 systemd-logind[1621]: New session 16 of user core. Jan 24 00:41:26.256299 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:41:27.663006 sshd[4268]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:27.670452 systemd[1]: sshd@18-46.62.237.128:22-20.161.92.111:51018.service: Deactivated successfully. Jan 24 00:41:27.680985 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:41:27.683248 systemd-logind[1621]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:41:27.685422 systemd-logind[1621]: Removed session 16. Jan 24 00:41:27.794531 systemd[1]: Started sshd@19-46.62.237.128:22-20.161.92.111:51030.service - OpenSSH per-connection server daemon (20.161.92.111:51030). Jan 24 00:41:28.576058 sshd[4290]: Accepted publickey for core from 20.161.92.111 port 51030 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:28.579038 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:28.587061 systemd-logind[1621]: New session 17 of user core. Jan 24 00:41:28.594487 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:41:29.381201 sshd[4290]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:29.390580 systemd[1]: sshd@19-46.62.237.128:22-20.161.92.111:51030.service: Deactivated successfully. Jan 24 00:41:29.397357 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:41:29.399348 systemd-logind[1621]: Session 17 logged out. Waiting for processes to exit. 
Jan 24 00:41:29.401364 systemd-logind[1621]: Removed session 17. Jan 24 00:41:29.511415 systemd[1]: Started sshd@20-46.62.237.128:22-20.161.92.111:51034.service - OpenSSH per-connection server daemon (20.161.92.111:51034). Jan 24 00:41:30.296539 sshd[4301]: Accepted publickey for core from 20.161.92.111 port 51034 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:30.299067 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:30.309517 systemd-logind[1621]: New session 18 of user core. Jan 24 00:41:30.315805 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:41:30.927983 sshd[4301]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:30.934199 systemd[1]: sshd@20-46.62.237.128:22-20.161.92.111:51034.service: Deactivated successfully. Jan 24 00:41:30.942618 systemd-logind[1621]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:41:30.944550 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:41:30.947200 systemd-logind[1621]: Removed session 18. Jan 24 00:41:36.058720 systemd[1]: Started sshd@21-46.62.237.128:22-20.161.92.111:43612.service - OpenSSH per-connection server daemon (20.161.92.111:43612). Jan 24 00:41:36.832289 sshd[4316]: Accepted publickey for core from 20.161.92.111 port 43612 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:36.835480 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:36.844923 systemd-logind[1621]: New session 19 of user core. Jan 24 00:41:36.851313 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:41:37.482965 sshd[4316]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:37.491904 systemd[1]: sshd@21-46.62.237.128:22-20.161.92.111:43612.service: Deactivated successfully. Jan 24 00:41:37.498259 systemd-logind[1621]: Session 19 logged out. Waiting for processes to exit. 
Jan 24 00:41:37.499212 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:41:37.502127 systemd-logind[1621]: Removed session 19. Jan 24 00:41:42.610181 systemd[1]: Started sshd@22-46.62.237.128:22-20.161.92.111:49488.service - OpenSSH per-connection server daemon (20.161.92.111:49488). Jan 24 00:41:43.364108 sshd[4330]: Accepted publickey for core from 20.161.92.111 port 49488 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:43.367108 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:43.376452 systemd-logind[1621]: New session 20 of user core. Jan 24 00:41:43.379609 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 00:41:44.006240 sshd[4330]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:44.011671 systemd[1]: sshd@22-46.62.237.128:22-20.161.92.111:49488.service: Deactivated successfully. Jan 24 00:41:44.019762 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:41:44.022414 systemd-logind[1621]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:41:44.023972 systemd-logind[1621]: Removed session 20. Jan 24 00:41:44.134438 systemd[1]: Started sshd@23-46.62.237.128:22-20.161.92.111:49504.service - OpenSSH per-connection server daemon (20.161.92.111:49504). Jan 24 00:41:44.911388 sshd[4344]: Accepted publickey for core from 20.161.92.111 port 49504 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:44.914695 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:44.923731 systemd-logind[1621]: New session 21 of user core. Jan 24 00:41:44.929559 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 24 00:41:46.687883 containerd[1646]: time="2026-01-24T00:41:46.686746994Z" level=info msg="StopContainer for \"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\" with timeout 30 (s)" Jan 24 00:41:46.694042 containerd[1646]: time="2026-01-24T00:41:46.692893407Z" level=info msg="Stop container \"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\" with signal terminated" Jan 24 00:41:46.715283 systemd[1]: run-containerd-runc-k8s.io-17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac-runc.icyFWN.mount: Deactivated successfully. Jan 24 00:41:46.729242 containerd[1646]: time="2026-01-24T00:41:46.729168387Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:41:46.742327 containerd[1646]: time="2026-01-24T00:41:46.742234788Z" level=info msg="StopContainer for \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\" with timeout 2 (s)" Jan 24 00:41:46.745231 containerd[1646]: time="2026-01-24T00:41:46.745025658Z" level=info msg="Stop container \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\" with signal terminated" Jan 24 00:41:46.776866 systemd-networkd[1265]: lxc_health: Link DOWN Jan 24 00:41:46.776892 systemd-networkd[1265]: lxc_health: Lost carrier Jan 24 00:41:46.831302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a-rootfs.mount: Deactivated successfully. 
Jan 24 00:41:46.854149 containerd[1646]: time="2026-01-24T00:41:46.854079825Z" level=info msg="shim disconnected" id=17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac namespace=k8s.io Jan 24 00:41:46.854149 containerd[1646]: time="2026-01-24T00:41:46.854140146Z" level=warning msg="cleaning up after shim disconnected" id=17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac namespace=k8s.io Jan 24 00:41:46.854149 containerd[1646]: time="2026-01-24T00:41:46.854155006Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:41:46.854979 containerd[1646]: time="2026-01-24T00:41:46.854782750Z" level=info msg="shim disconnected" id=e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a namespace=k8s.io Jan 24 00:41:46.854979 containerd[1646]: time="2026-01-24T00:41:46.854954093Z" level=warning msg="cleaning up after shim disconnected" id=e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a namespace=k8s.io Jan 24 00:41:46.854979 containerd[1646]: time="2026-01-24T00:41:46.854968394Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:41:46.855555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac-rootfs.mount: Deactivated successfully. 
Jan 24 00:41:46.880020 containerd[1646]: time="2026-01-24T00:41:46.879966342Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:41:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 00:41:46.881586 containerd[1646]: time="2026-01-24T00:41:46.881463774Z" level=info msg="StopContainer for \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\" returns successfully" Jan 24 00:41:46.882430 containerd[1646]: time="2026-01-24T00:41:46.882285962Z" level=info msg="StopPodSandbox for \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\"" Jan 24 00:41:46.882430 containerd[1646]: time="2026-01-24T00:41:46.882308692Z" level=info msg="Container to stop \"bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:41:46.882430 containerd[1646]: time="2026-01-24T00:41:46.882320903Z" level=info msg="Container to stop \"db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:41:46.882430 containerd[1646]: time="2026-01-24T00:41:46.882331913Z" level=info msg="Container to stop \"5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:41:46.882430 containerd[1646]: time="2026-01-24T00:41:46.882340293Z" level=info msg="Container to stop \"5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:41:46.882430 containerd[1646]: time="2026-01-24T00:41:46.882346883Z" level=info msg="Container to stop \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:41:46.884503 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4-shm.mount: Deactivated successfully. Jan 24 00:41:46.887007 containerd[1646]: time="2026-01-24T00:41:46.886991023Z" level=info msg="StopContainer for \"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\" returns successfully" Jan 24 00:41:46.887382 containerd[1646]: time="2026-01-24T00:41:46.887369261Z" level=info msg="StopPodSandbox for \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\"" Jan 24 00:41:46.887532 containerd[1646]: time="2026-01-24T00:41:46.887469683Z" level=info msg="Container to stop \"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:41:46.928853 containerd[1646]: time="2026-01-24T00:41:46.928657029Z" level=info msg="shim disconnected" id=e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4 namespace=k8s.io Jan 24 00:41:46.928853 containerd[1646]: time="2026-01-24T00:41:46.928701960Z" level=warning msg="cleaning up after shim disconnected" id=e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4 namespace=k8s.io Jan 24 00:41:46.928853 containerd[1646]: time="2026-01-24T00:41:46.928708840Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:41:46.941969 containerd[1646]: time="2026-01-24T00:41:46.941614668Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:41:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 00:41:46.943487 containerd[1646]: time="2026-01-24T00:41:46.943108430Z" level=info msg="TearDown network for sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" successfully" Jan 24 00:41:46.943487 containerd[1646]: time="2026-01-24T00:41:46.943127831Z" level=info msg="StopPodSandbox for 
\"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" returns successfully" Jan 24 00:41:46.946376 containerd[1646]: time="2026-01-24T00:41:46.946336769Z" level=info msg="shim disconnected" id=4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584 namespace=k8s.io Jan 24 00:41:46.946376 containerd[1646]: time="2026-01-24T00:41:46.946368360Z" level=warning msg="cleaning up after shim disconnected" id=4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584 namespace=k8s.io Jan 24 00:41:46.946376 containerd[1646]: time="2026-01-24T00:41:46.946375590Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:41:46.961350 containerd[1646]: time="2026-01-24T00:41:46.961307771Z" level=info msg="TearDown network for sandbox \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\" successfully" Jan 24 00:41:46.961350 containerd[1646]: time="2026-01-24T00:41:46.961336892Z" level=info msg="StopPodSandbox for \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\" returns successfully" Jan 24 00:41:47.067240 kubelet[2740]: I0124 00:41:47.067180 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c739d745-779f-46be-b64f-acedcdaa54ca-cilium-config-path\") pod \"c739d745-779f-46be-b64f-acedcdaa54ca\" (UID: \"c739d745-779f-46be-b64f-acedcdaa54ca\") " Jan 24 00:41:47.067240 kubelet[2740]: I0124 00:41:47.067243 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-lib-modules\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.067240 kubelet[2740]: I0124 00:41:47.067269 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-xtables-lock\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.068553 kubelet[2740]: I0124 00:41:47.067296 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-etc-cni-netd\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.068553 kubelet[2740]: I0124 00:41:47.067320 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-bpf-maps\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.068553 kubelet[2740]: I0124 00:41:47.067341 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-cilium-run\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.068553 kubelet[2740]: I0124 00:41:47.067381 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-cilium-cgroup\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.068553 kubelet[2740]: I0124 00:41:47.067424 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-host-proc-sys-net\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.068553 kubelet[2740]: I0124 00:41:47.067456 2740 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16c9cb02-964e-466a-8c70-67e5f0795cb9-cilium-config-path\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.069610 kubelet[2740]: I0124 00:41:47.067479 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-cni-path\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.069610 kubelet[2740]: I0124 00:41:47.067502 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-hostproc\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.069610 kubelet[2740]: I0124 00:41:47.067523 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-host-proc-sys-kernel\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.069610 kubelet[2740]: I0124 00:41:47.067548 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16c9cb02-964e-466a-8c70-67e5f0795cb9-hubble-tls\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.069610 kubelet[2740]: I0124 00:41:47.067574 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n77hl\" (UniqueName: \"kubernetes.io/projected/c739d745-779f-46be-b64f-acedcdaa54ca-kube-api-access-n77hl\") pod \"c739d745-779f-46be-b64f-acedcdaa54ca\" (UID: 
\"c739d745-779f-46be-b64f-acedcdaa54ca\") " Jan 24 00:41:47.069610 kubelet[2740]: I0124 00:41:47.067598 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qpg2\" (UniqueName: \"kubernetes.io/projected/16c9cb02-964e-466a-8c70-67e5f0795cb9-kube-api-access-6qpg2\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.070104 kubelet[2740]: I0124 00:41:47.067623 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16c9cb02-964e-466a-8c70-67e5f0795cb9-clustermesh-secrets\") pod \"16c9cb02-964e-466a-8c70-67e5f0795cb9\" (UID: \"16c9cb02-964e-466a-8c70-67e5f0795cb9\") " Jan 24 00:41:47.070104 kubelet[2740]: I0124 00:41:47.069148 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:41:47.070104 kubelet[2740]: I0124 00:41:47.069217 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:41:47.070104 kubelet[2740]: I0124 00:41:47.069256 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:41:47.070104 kubelet[2740]: I0124 00:41:47.069294 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:41:47.070439 kubelet[2740]: I0124 00:41:47.069327 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:41:47.070439 kubelet[2740]: I0124 00:41:47.069365 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:41:47.070439 kubelet[2740]: I0124 00:41:47.069400 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:41:47.071186 kubelet[2740]: I0124 00:41:47.071028 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-cni-path" (OuterVolumeSpecName: "cni-path") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:41:47.071186 kubelet[2740]: I0124 00:41:47.071132 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-hostproc" (OuterVolumeSpecName: "hostproc") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:41:47.071791 kubelet[2740]: I0124 00:41:47.071424 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:41:47.082763 kubelet[2740]: I0124 00:41:47.082710 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c739d745-779f-46be-b64f-acedcdaa54ca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c739d745-779f-46be-b64f-acedcdaa54ca" (UID: "c739d745-779f-46be-b64f-acedcdaa54ca"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:41:47.083989 kubelet[2740]: I0124 00:41:47.082876 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c9cb02-964e-466a-8c70-67e5f0795cb9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:41:47.086344 kubelet[2740]: I0124 00:41:47.086305 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c9cb02-964e-466a-8c70-67e5f0795cb9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:41:47.087505 kubelet[2740]: I0124 00:41:47.087387 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c9cb02-964e-466a-8c70-67e5f0795cb9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:41:47.088088 kubelet[2740]: I0124 00:41:47.087812 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c739d745-779f-46be-b64f-acedcdaa54ca-kube-api-access-n77hl" (OuterVolumeSpecName: "kube-api-access-n77hl") pod "c739d745-779f-46be-b64f-acedcdaa54ca" (UID: "c739d745-779f-46be-b64f-acedcdaa54ca"). InnerVolumeSpecName "kube-api-access-n77hl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:41:47.089196 kubelet[2740]: I0124 00:41:47.089163 2740 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c9cb02-964e-466a-8c70-67e5f0795cb9-kube-api-access-6qpg2" (OuterVolumeSpecName: "kube-api-access-6qpg2") pod "16c9cb02-964e-466a-8c70-67e5f0795cb9" (UID: "16c9cb02-964e-466a-8c70-67e5f0795cb9"). InnerVolumeSpecName "kube-api-access-6qpg2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:41:47.168192 kubelet[2740]: I0124 00:41:47.168109 2740 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16c9cb02-964e-466a-8c70-67e5f0795cb9-hubble-tls\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.168192 kubelet[2740]: I0124 00:41:47.168147 2740 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n77hl\" (UniqueName: \"kubernetes.io/projected/c739d745-779f-46be-b64f-acedcdaa54ca-kube-api-access-n77hl\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.168192 kubelet[2740]: I0124 00:41:47.168167 2740 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6qpg2\" (UniqueName: \"kubernetes.io/projected/16c9cb02-964e-466a-8c70-67e5f0795cb9-kube-api-access-6qpg2\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.168192 kubelet[2740]: I0124 00:41:47.168184 2740 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16c9cb02-964e-466a-8c70-67e5f0795cb9-clustermesh-secrets\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.168192 kubelet[2740]: I0124 00:41:47.168201 2740 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c739d745-779f-46be-b64f-acedcdaa54ca-cilium-config-path\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 
00:41:47.168589 kubelet[2740]: I0124 00:41:47.168216 2740 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-lib-modules\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.168589 kubelet[2740]: I0124 00:41:47.168230 2740 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-xtables-lock\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.168589 kubelet[2740]: I0124 00:41:47.168246 2740 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-etc-cni-netd\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.168589 kubelet[2740]: I0124 00:41:47.168260 2740 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-bpf-maps\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.168589 kubelet[2740]: I0124 00:41:47.168273 2740 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-cilium-run\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.168589 kubelet[2740]: I0124 00:41:47.168288 2740 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-cilium-cgroup\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.168589 kubelet[2740]: I0124 00:41:47.168303 2740 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-host-proc-sys-net\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.168589 
kubelet[2740]: I0124 00:41:47.168317 2740 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16c9cb02-964e-466a-8c70-67e5f0795cb9-cilium-config-path\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.169545 kubelet[2740]: I0124 00:41:47.168332 2740 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-cni-path\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.169545 kubelet[2740]: I0124 00:41:47.168346 2740 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-hostproc\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.169545 kubelet[2740]: I0124 00:41:47.168360 2740 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16c9cb02-964e-466a-8c70-67e5f0795cb9-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-3213f37a88\" DevicePath \"\"" Jan 24 00:41:47.701230 kubelet[2740]: I0124 00:41:47.701182 2740 scope.go:117] "RemoveContainer" containerID="17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac" Jan 24 00:41:47.707780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4-rootfs.mount: Deactivated successfully. Jan 24 00:41:47.708153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584-rootfs.mount: Deactivated successfully. 
Jan 24 00:41:47.709619 containerd[1646]: time="2026-01-24T00:41:47.709188418Z" level=info msg="RemoveContainer for \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\"" Jan 24 00:41:47.708420 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584-shm.mount: Deactivated successfully. Jan 24 00:41:47.708701 systemd[1]: var-lib-kubelet-pods-c739d745\x2d779f\x2d46be\x2db64f\x2dacedcdaa54ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn77hl.mount: Deactivated successfully. Jan 24 00:41:47.713091 systemd[1]: var-lib-kubelet-pods-16c9cb02\x2d964e\x2d466a\x2d8c70\x2d67e5f0795cb9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6qpg2.mount: Deactivated successfully. Jan 24 00:41:47.713611 systemd[1]: var-lib-kubelet-pods-16c9cb02\x2d964e\x2d466a\x2d8c70\x2d67e5f0795cb9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 24 00:41:47.713914 systemd[1]: var-lib-kubelet-pods-16c9cb02\x2d964e\x2d466a\x2d8c70\x2d67e5f0795cb9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 24 00:41:47.724050 containerd[1646]: time="2026-01-24T00:41:47.723969581Z" level=info msg="RemoveContainer for \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\" returns successfully" Jan 24 00:41:47.724866 kubelet[2740]: I0124 00:41:47.724738 2740 scope.go:117] "RemoveContainer" containerID="5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d" Jan 24 00:41:47.728979 containerd[1646]: time="2026-01-24T00:41:47.728789446Z" level=info msg="RemoveContainer for \"5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d\"" Jan 24 00:41:47.733765 containerd[1646]: time="2026-01-24T00:41:47.733627132Z" level=info msg="RemoveContainer for \"5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d\" returns successfully" Jan 24 00:41:47.734126 kubelet[2740]: I0124 00:41:47.734061 2740 scope.go:117] "RemoveContainer" containerID="5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa" Jan 24 00:41:47.735288 containerd[1646]: time="2026-01-24T00:41:47.735258507Z" level=info msg="RemoveContainer for \"5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa\"" Jan 24 00:41:47.739954 containerd[1646]: time="2026-01-24T00:41:47.739931070Z" level=info msg="RemoveContainer for \"5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa\" returns successfully" Jan 24 00:41:47.740060 kubelet[2740]: I0124 00:41:47.740042 2740 scope.go:117] "RemoveContainer" containerID="db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc" Jan 24 00:41:47.740950 containerd[1646]: time="2026-01-24T00:41:47.740928891Z" level=info msg="RemoveContainer for \"db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc\"" Jan 24 00:41:47.743528 containerd[1646]: time="2026-01-24T00:41:47.743505847Z" level=info msg="RemoveContainer for \"db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc\" returns successfully" Jan 24 00:41:47.743610 kubelet[2740]: I0124 00:41:47.743593 2740 scope.go:117] 
"RemoveContainer" containerID="bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979" Jan 24 00:41:47.744225 containerd[1646]: time="2026-01-24T00:41:47.744207223Z" level=info msg="RemoveContainer for \"bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979\"" Jan 24 00:41:47.746934 containerd[1646]: time="2026-01-24T00:41:47.746906692Z" level=info msg="RemoveContainer for \"bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979\" returns successfully" Jan 24 00:41:47.747035 kubelet[2740]: I0124 00:41:47.747012 2740 scope.go:117] "RemoveContainer" containerID="17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac" Jan 24 00:41:47.747189 containerd[1646]: time="2026-01-24T00:41:47.747157557Z" level=error msg="ContainerStatus for \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\": not found" Jan 24 00:41:47.747318 kubelet[2740]: E0124 00:41:47.747296 2740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\": not found" containerID="17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac" Jan 24 00:41:47.747441 kubelet[2740]: I0124 00:41:47.747344 2740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac"} err="failed to get container status \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"17639e20d31c4f723a44bfd7ef181493583c48a8b8d4670ba21571c71df899ac\": not found" Jan 24 00:41:47.747441 kubelet[2740]: I0124 00:41:47.747417 2740 scope.go:117] "RemoveContainer" 
containerID="5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d" Jan 24 00:41:47.747582 containerd[1646]: time="2026-01-24T00:41:47.747555926Z" level=error msg="ContainerStatus for \"5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d\": not found" Jan 24 00:41:47.747660 kubelet[2740]: E0124 00:41:47.747634 2740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d\": not found" containerID="5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d" Jan 24 00:41:47.747724 kubelet[2740]: I0124 00:41:47.747655 2740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d"} err="failed to get container status \"5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b71fed09052f043fca8793029cd8edf0a3be5d53e489d35b7f527557e39d79d\": not found" Jan 24 00:41:47.747724 kubelet[2740]: I0124 00:41:47.747673 2740 scope.go:117] "RemoveContainer" containerID="5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa" Jan 24 00:41:47.747852 containerd[1646]: time="2026-01-24T00:41:47.747782301Z" level=error msg="ContainerStatus for \"5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa\": not found" Jan 24 00:41:47.747927 kubelet[2740]: E0124 00:41:47.747895 2740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa\": not found" containerID="5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa" Jan 24 00:41:47.747978 kubelet[2740]: I0124 00:41:47.747961 2740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa"} err="failed to get container status \"5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa\": rpc error: code = NotFound desc = an error occurred when try to find container \"5adc769ca847f920564bf1753f61d699c83a883f1937f0f93deb209aea71dbfa\": not found" Jan 24 00:41:47.748015 kubelet[2740]: I0124 00:41:47.747978 2740 scope.go:117] "RemoveContainer" containerID="db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc" Jan 24 00:41:47.748117 containerd[1646]: time="2026-01-24T00:41:47.748087527Z" level=error msg="ContainerStatus for \"db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc\": not found" Jan 24 00:41:47.748207 kubelet[2740]: E0124 00:41:47.748182 2740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc\": not found" containerID="db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc" Jan 24 00:41:47.748253 kubelet[2740]: I0124 00:41:47.748203 2740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc"} err="failed to get container status \"db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"db24691d0e883094990029c4dc4784ce6d3b612bcb8eaf1e3bd31482a271e3cc\": not found" Jan 24 00:41:47.748253 kubelet[2740]: I0124 00:41:47.748217 2740 scope.go:117] "RemoveContainer" containerID="bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979" Jan 24 00:41:47.748367 containerd[1646]: time="2026-01-24T00:41:47.748339023Z" level=error msg="ContainerStatus for \"bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979\": not found" Jan 24 00:41:47.748433 kubelet[2740]: E0124 00:41:47.748414 2740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979\": not found" containerID="bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979" Jan 24 00:41:47.748466 kubelet[2740]: I0124 00:41:47.748435 2740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979"} err="failed to get container status \"bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf76f0cfc1564ebbde9d66a806b81cb8f76a77e816aefbda7927d118d5b21979\": not found" Jan 24 00:41:47.748466 kubelet[2740]: I0124 00:41:47.748447 2740 scope.go:117] "RemoveContainer" containerID="e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a" Jan 24 00:41:47.749154 containerd[1646]: time="2026-01-24T00:41:47.749122310Z" level=info msg="RemoveContainer for \"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\"" Jan 24 00:41:47.751550 containerd[1646]: time="2026-01-24T00:41:47.751516653Z" level=info msg="RemoveContainer for 
\"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\" returns successfully" Jan 24 00:41:47.752323 kubelet[2740]: I0124 00:41:47.751619 2740 scope.go:117] "RemoveContainer" containerID="e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a" Jan 24 00:41:47.752980 containerd[1646]: time="2026-01-24T00:41:47.752936173Z" level=error msg="ContainerStatus for \"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\": not found" Jan 24 00:41:47.753786 kubelet[2740]: E0124 00:41:47.753763 2740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\": not found" containerID="e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a" Jan 24 00:41:47.753860 kubelet[2740]: I0124 00:41:47.753784 2740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a"} err="failed to get container status \"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7e574e22e486144c1da2030e7881425aca843881ae0ce53cbb5a964f70dc06a\": not found" Jan 24 00:41:48.268866 kubelet[2740]: I0124 00:41:48.268767 2740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c9cb02-964e-466a-8c70-67e5f0795cb9" path="/var/lib/kubelet/pods/16c9cb02-964e-466a-8c70-67e5f0795cb9/volumes" Jan 24 00:41:48.270422 kubelet[2740]: I0124 00:41:48.270381 2740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c739d745-779f-46be-b64f-acedcdaa54ca" path="/var/lib/kubelet/pods/c739d745-779f-46be-b64f-acedcdaa54ca/volumes" Jan 24 
00:41:48.712365 sshd[4344]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:48.715861 systemd[1]: sshd@23-46.62.237.128:22-20.161.92.111:49504.service: Deactivated successfully. Jan 24 00:41:48.718314 systemd-logind[1621]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:41:48.719368 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:41:48.721346 systemd-logind[1621]: Removed session 21. Jan 24 00:41:48.843446 systemd[1]: Started sshd@24-46.62.237.128:22-20.161.92.111:49512.service - OpenSSH per-connection server daemon (20.161.92.111:49512). Jan 24 00:41:49.615301 sshd[4515]: Accepted publickey for core from 20.161.92.111 port 49512 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:49.618350 sshd[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:49.626886 systemd-logind[1621]: New session 22 of user core. Jan 24 00:41:49.632431 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 24 00:41:50.370357 kubelet[2740]: E0124 00:41:50.369548 2740 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 24 00:41:50.594313 kubelet[2740]: I0124 00:41:50.592368 2740 memory_manager.go:355] "RemoveStaleState removing state" podUID="16c9cb02-964e-466a-8c70-67e5f0795cb9" containerName="cilium-agent" Jan 24 00:41:50.594313 kubelet[2740]: I0124 00:41:50.592408 2740 memory_manager.go:355] "RemoveStaleState removing state" podUID="c739d745-779f-46be-b64f-acedcdaa54ca" containerName="cilium-operator" Jan 24 00:41:50.638659 sshd[4515]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:50.651003 systemd[1]: sshd@24-46.62.237.128:22-20.161.92.111:49512.service: Deactivated successfully. Jan 24 00:41:50.659588 systemd-logind[1621]: Session 22 logged out. Waiting for processes to exit. 
Jan 24 00:41:50.661312 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:41:50.667031 systemd-logind[1621]: Removed session 22. Jan 24 00:41:50.691259 kubelet[2740]: I0124 00:41:50.691219 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-xtables-lock\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691499 kubelet[2740]: I0124 00:41:50.691390 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-cilium-config-path\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691499 kubelet[2740]: I0124 00:41:50.691411 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-cilium-run\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691499 kubelet[2740]: I0124 00:41:50.691423 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-hubble-tls\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691499 kubelet[2740]: I0124 00:41:50.691434 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-cilium-cgroup\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " 
pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691499 kubelet[2740]: I0124 00:41:50.691446 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-cilium-ipsec-secrets\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691499 kubelet[2740]: I0124 00:41:50.691458 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-hostproc\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691621 kubelet[2740]: I0124 00:41:50.691553 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-cni-path\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691663 kubelet[2740]: I0124 00:41:50.691629 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82vg8\" (UniqueName: \"kubernetes.io/projected/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-kube-api-access-82vg8\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691706 kubelet[2740]: I0124 00:41:50.691689 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-etc-cni-netd\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691726 kubelet[2740]: I0124 00:41:50.691714 2740 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-clustermesh-secrets\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691772 kubelet[2740]: I0124 00:41:50.691739 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-host-proc-sys-net\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691789 kubelet[2740]: I0124 00:41:50.691767 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-lib-modules\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691809 kubelet[2740]: I0124 00:41:50.691790 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-host-proc-sys-kernel\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.691883 kubelet[2740]: I0124 00:41:50.691813 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506-bpf-maps\") pod \"cilium-ffhw6\" (UID: \"58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506\") " pod="kube-system/cilium-ffhw6" Jan 24 00:41:50.770143 systemd[1]: Started sshd@25-46.62.237.128:22-20.161.92.111:49526.service - OpenSSH per-connection server daemon (20.161.92.111:49526). 
Jan 24 00:41:50.905758 containerd[1646]: time="2026-01-24T00:41:50.904644098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ffhw6,Uid:58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506,Namespace:kube-system,Attempt:0,}" Jan 24 00:41:50.950448 containerd[1646]: time="2026-01-24T00:41:50.950353406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:41:50.950767 containerd[1646]: time="2026-01-24T00:41:50.950612032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:41:50.950767 containerd[1646]: time="2026-01-24T00:41:50.950628813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:41:50.950767 containerd[1646]: time="2026-01-24T00:41:50.950723395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:41:51.006943 containerd[1646]: time="2026-01-24T00:41:51.006879792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ffhw6,Uid:58b4ede2-1cd2-47f8-b91f-c0b7fd2c2506,Namespace:kube-system,Attempt:0,} returns sandbox id \"192470b5aec3df5000aa6f2526e55f40aa777b82806f653609d9bb595bb32b95\"" Jan 24 00:41:51.010581 containerd[1646]: time="2026-01-24T00:41:51.010535987Z" level=info msg="CreateContainer within sandbox \"192470b5aec3df5000aa6f2526e55f40aa777b82806f653609d9bb595bb32b95\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 00:41:51.021626 containerd[1646]: time="2026-01-24T00:41:51.021570350Z" level=info msg="CreateContainer within sandbox \"192470b5aec3df5000aa6f2526e55f40aa777b82806f653609d9bb595bb32b95\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ea2cb8361b892c40a30889f42231c642dc1a81945e7a189728fb58c62c8bb230\"" Jan 24 00:41:51.022183 
containerd[1646]: time="2026-01-24T00:41:51.022147674Z" level=info msg="StartContainer for \"ea2cb8361b892c40a30889f42231c642dc1a81945e7a189728fb58c62c8bb230\"" Jan 24 00:41:51.110671 containerd[1646]: time="2026-01-24T00:41:51.109789140Z" level=info msg="StartContainer for \"ea2cb8361b892c40a30889f42231c642dc1a81945e7a189728fb58c62c8bb230\" returns successfully" Jan 24 00:41:51.170900 containerd[1646]: time="2026-01-24T00:41:51.170477706Z" level=info msg="shim disconnected" id=ea2cb8361b892c40a30889f42231c642dc1a81945e7a189728fb58c62c8bb230 namespace=k8s.io Jan 24 00:41:51.170900 containerd[1646]: time="2026-01-24T00:41:51.170560169Z" level=warning msg="cleaning up after shim disconnected" id=ea2cb8361b892c40a30889f42231c642dc1a81945e7a189728fb58c62c8bb230 namespace=k8s.io Jan 24 00:41:51.170900 containerd[1646]: time="2026-01-24T00:41:51.170573629Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:41:51.187055 containerd[1646]: time="2026-01-24T00:41:51.186970166Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:41:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 00:41:51.531070 sshd[4528]: Accepted publickey for core from 20.161.92.111 port 49526 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:41:51.534510 sshd[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:51.543536 systemd-logind[1621]: New session 23 of user core. Jan 24 00:41:51.557363 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 24 00:41:51.733769 containerd[1646]: time="2026-01-24T00:41:51.733216355Z" level=info msg="CreateContainer within sandbox \"192470b5aec3df5000aa6f2526e55f40aa777b82806f653609d9bb595bb32b95\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 00:41:51.769987 containerd[1646]: time="2026-01-24T00:41:51.769901098Z" level=info msg="CreateContainer within sandbox \"192470b5aec3df5000aa6f2526e55f40aa777b82806f653609d9bb595bb32b95\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"de648e8348349e65add617ea9bfb0aac481f9c60a0d700aeb27b0580bddfe059\"" Jan 24 00:41:51.771045 containerd[1646]: time="2026-01-24T00:41:51.770939603Z" level=info msg="StartContainer for \"de648e8348349e65add617ea9bfb0aac481f9c60a0d700aeb27b0580bddfe059\"" Jan 24 00:41:51.872134 containerd[1646]: time="2026-01-24T00:41:51.871400103Z" level=info msg="StartContainer for \"de648e8348349e65add617ea9bfb0aac481f9c60a0d700aeb27b0580bddfe059\" returns successfully" Jan 24 00:41:51.903697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de648e8348349e65add617ea9bfb0aac481f9c60a0d700aeb27b0580bddfe059-rootfs.mount: Deactivated successfully. Jan 24 00:41:51.912655 containerd[1646]: time="2026-01-24T00:41:51.912613392Z" level=info msg="shim disconnected" id=de648e8348349e65add617ea9bfb0aac481f9c60a0d700aeb27b0580bddfe059 namespace=k8s.io Jan 24 00:41:51.913198 containerd[1646]: time="2026-01-24T00:41:51.913069023Z" level=warning msg="cleaning up after shim disconnected" id=de648e8348349e65add617ea9bfb0aac481f9c60a0d700aeb27b0580bddfe059 namespace=k8s.io Jan 24 00:41:51.913198 containerd[1646]: time="2026-01-24T00:41:51.913082793Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:41:52.063242 sshd[4528]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:52.070112 systemd[1]: sshd@25-46.62.237.128:22-20.161.92.111:49526.service: Deactivated successfully. 
Jan 24 00:41:52.082055 systemd[1]: session-23.scope: Deactivated successfully.
Jan 24 00:41:52.085017 systemd-logind[1621]: Session 23 logged out. Waiting for processes to exit.
Jan 24 00:41:52.088820 systemd-logind[1621]: Removed session 23.
Jan 24 00:41:52.195583 systemd[1]: Started sshd@26-46.62.237.128:22-20.161.92.111:49530.service - OpenSSH per-connection server daemon (20.161.92.111:49530).
Jan 24 00:41:52.732309 containerd[1646]: time="2026-01-24T00:41:52.732222908Z" level=info msg="CreateContainer within sandbox \"192470b5aec3df5000aa6f2526e55f40aa777b82806f653609d9bb595bb32b95\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 24 00:41:52.764753 containerd[1646]: time="2026-01-24T00:41:52.764289224Z" level=info msg="CreateContainer within sandbox \"192470b5aec3df5000aa6f2526e55f40aa777b82806f653609d9bb595bb32b95\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8b84e246fe1ac2890f1bba1e7e07c844b5ffe9f67b3078eaa5b9229021248a2b\""
Jan 24 00:41:52.772924 containerd[1646]: time="2026-01-24T00:41:52.770711633Z" level=info msg="StartContainer for \"8b84e246fe1ac2890f1bba1e7e07c844b5ffe9f67b3078eaa5b9229021248a2b\""
Jan 24 00:41:52.770760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1064902644.mount: Deactivated successfully.
Jan 24 00:41:52.872843 containerd[1646]: time="2026-01-24T00:41:52.872784799Z" level=info msg="StartContainer for \"8b84e246fe1ac2890f1bba1e7e07c844b5ffe9f67b3078eaa5b9229021248a2b\" returns successfully"
Jan 24 00:41:52.892106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b84e246fe1ac2890f1bba1e7e07c844b5ffe9f67b3078eaa5b9229021248a2b-rootfs.mount: Deactivated successfully.
Jan 24 00:41:52.897943 containerd[1646]: time="2026-01-24T00:41:52.897887464Z" level=info msg="shim disconnected" id=8b84e246fe1ac2890f1bba1e7e07c844b5ffe9f67b3078eaa5b9229021248a2b namespace=k8s.io
Jan 24 00:41:52.897943 containerd[1646]: time="2026-01-24T00:41:52.897930165Z" level=warning msg="cleaning up after shim disconnected" id=8b84e246fe1ac2890f1bba1e7e07c844b5ffe9f67b3078eaa5b9229021248a2b namespace=k8s.io
Jan 24 00:41:52.897943 containerd[1646]: time="2026-01-24T00:41:52.897937185Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:41:52.908255 containerd[1646]: time="2026-01-24T00:41:52.908218884Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:41:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 24 00:41:52.983971 sshd[4707]: Accepted publickey for core from 20.161.92.111 port 49530 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA
Jan 24 00:41:52.986073 sshd[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:41:52.991331 systemd-logind[1621]: New session 24 of user core.
Jan 24 00:41:53.000380 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 24 00:41:53.326245 kubelet[2740]: I0124 00:41:53.325725 2740 setters.go:602] "Node became not ready" node="ci-4081-3-6-n-3213f37a88" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:41:53Z","lastTransitionTime":"2026-01-24T00:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 24 00:41:53.737861 containerd[1646]: time="2026-01-24T00:41:53.737756058Z" level=info msg="CreateContainer within sandbox \"192470b5aec3df5000aa6f2526e55f40aa777b82806f653609d9bb595bb32b95\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 24 00:41:53.766090 containerd[1646]: time="2026-01-24T00:41:53.763687419Z" level=info msg="CreateContainer within sandbox \"192470b5aec3df5000aa6f2526e55f40aa777b82806f653609d9bb595bb32b95\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b5f9b220ac7168c041d4488bfb7de0c0ad0ee567ee356de13d34ac937a7322ac\""
Jan 24 00:41:53.766090 containerd[1646]: time="2026-01-24T00:41:53.765101511Z" level=info msg="StartContainer for \"b5f9b220ac7168c041d4488bfb7de0c0ad0ee567ee356de13d34ac937a7322ac\""
Jan 24 00:41:53.882811 containerd[1646]: time="2026-01-24T00:41:53.882595027Z" level=info msg="StartContainer for \"b5f9b220ac7168c041d4488bfb7de0c0ad0ee567ee356de13d34ac937a7322ac\" returns successfully"
Jan 24 00:41:53.903502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5f9b220ac7168c041d4488bfb7de0c0ad0ee567ee356de13d34ac937a7322ac-rootfs.mount: Deactivated successfully.
Jan 24 00:41:53.912592 containerd[1646]: time="2026-01-24T00:41:53.912363478Z" level=info msg="shim disconnected" id=b5f9b220ac7168c041d4488bfb7de0c0ad0ee567ee356de13d34ac937a7322ac namespace=k8s.io
Jan 24 00:41:53.912592 containerd[1646]: time="2026-01-24T00:41:53.912575894Z" level=warning msg="cleaning up after shim disconnected" id=b5f9b220ac7168c041d4488bfb7de0c0ad0ee567ee356de13d34ac937a7322ac namespace=k8s.io
Jan 24 00:41:53.912958 containerd[1646]: time="2026-01-24T00:41:53.912593914Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:41:54.745998 containerd[1646]: time="2026-01-24T00:41:54.745429569Z" level=info msg="CreateContainer within sandbox \"192470b5aec3df5000aa6f2526e55f40aa777b82806f653609d9bb595bb32b95\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 24 00:41:54.779143 containerd[1646]: time="2026-01-24T00:41:54.773270252Z" level=info msg="CreateContainer within sandbox \"192470b5aec3df5000aa6f2526e55f40aa777b82806f653609d9bb595bb32b95\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c4f937c7be1fbf0c9064bc09c9bebb57d1a0b5c6557e0b0e61e2402cd5b75280\""
Jan 24 00:41:54.786604 containerd[1646]: time="2026-01-24T00:41:54.786540559Z" level=info msg="StartContainer for \"c4f937c7be1fbf0c9064bc09c9bebb57d1a0b5c6557e0b0e61e2402cd5b75280\""
Jan 24 00:41:54.917223 containerd[1646]: time="2026-01-24T00:41:54.917072004Z" level=info msg="StartContainer for \"c4f937c7be1fbf0c9064bc09c9bebb57d1a0b5c6557e0b0e61e2402cd5b75280\" returns successfully"
Jan 24 00:41:55.308987 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 24 00:41:58.345122 systemd-networkd[1265]: lxc_health: Link UP
Jan 24 00:41:58.353576 systemd-networkd[1265]: lxc_health: Gained carrier
Jan 24 00:41:58.949474 kubelet[2740]: I0124 00:41:58.948152 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ffhw6" podStartSLOduration=8.948121942 podStartE2EDuration="8.948121942s" podCreationTimestamp="2026-01-24 00:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:41:55.772159471 +0000 UTC m=+155.627529988" watchObservedRunningTime="2026-01-24 00:41:58.948121942 +0000 UTC m=+158.803492459"
Jan 24 00:41:59.795412 systemd-networkd[1265]: lxc_health: Gained IPv6LL
Jan 24 00:41:59.829305 systemd[1]: run-containerd-runc-k8s.io-c4f937c7be1fbf0c9064bc09c9bebb57d1a0b5c6557e0b0e61e2402cd5b75280-runc.Vk0hyg.mount: Deactivated successfully.
Jan 24 00:41:59.870583 kubelet[2740]: E0124 00:41:59.869470 2740 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42460->127.0.0.1:35991: write tcp 127.0.0.1:42460->127.0.0.1:35991: write: broken pipe
Jan 24 00:42:01.667401 systemd[1]: Started sshd@27-46.62.237.128:22-112.78.1.94:40966.service - OpenSSH per-connection server daemon (112.78.1.94:40966).
Jan 24 00:42:02.978158 sshd[5464]: Invalid user tsminst1 from 112.78.1.94 port 40966
Jan 24 00:42:03.221628 sshd[5464]: Received disconnect from 112.78.1.94 port 40966:11: Bye Bye [preauth]
Jan 24 00:42:03.221628 sshd[5464]: Disconnected from invalid user tsminst1 112.78.1.94 port 40966 [preauth]
Jan 24 00:42:03.224105 systemd[1]: sshd@27-46.62.237.128:22-112.78.1.94:40966.service: Deactivated successfully.
Jan 24 00:42:04.410297 sshd[4707]: pam_unix(sshd:session): session closed for user core
Jan 24 00:42:04.417249 systemd[1]: sshd@26-46.62.237.128:22-20.161.92.111:49530.service: Deactivated successfully.
Jan 24 00:42:04.427283 systemd[1]: session-24.scope: Deactivated successfully.
Jan 24 00:42:04.429271 systemd-logind[1621]: Session 24 logged out. Waiting for processes to exit.
Jan 24 00:42:04.430903 systemd-logind[1621]: Removed session 24.
Jan 24 00:42:20.286548 containerd[1646]: time="2026-01-24T00:42:20.286463082Z" level=info msg="StopPodSandbox for \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\""
Jan 24 00:42:20.287279 containerd[1646]: time="2026-01-24T00:42:20.286602136Z" level=info msg="TearDown network for sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" successfully"
Jan 24 00:42:20.287279 containerd[1646]: time="2026-01-24T00:42:20.286621966Z" level=info msg="StopPodSandbox for \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" returns successfully"
Jan 24 00:42:20.287515 containerd[1646]: time="2026-01-24T00:42:20.287358257Z" level=info msg="RemovePodSandbox for \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\""
Jan 24 00:42:20.287515 containerd[1646]: time="2026-01-24T00:42:20.287423559Z" level=info msg="Forcibly stopping sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\""
Jan 24 00:42:20.287597 containerd[1646]: time="2026-01-24T00:42:20.287540842Z" level=info msg="TearDown network for sandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" successfully"
Jan 24 00:42:20.293257 containerd[1646]: time="2026-01-24T00:42:20.293178441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 24 00:42:20.293257 containerd[1646]: time="2026-01-24T00:42:20.293244193Z" level=info msg="RemovePodSandbox \"e627725c595d1a0b7f3e2d285ebb767dfdf21cc32526b4360c5a359004f710e4\" returns successfully"
Jan 24 00:42:20.293734 containerd[1646]: time="2026-01-24T00:42:20.293681656Z" level=info msg="StopPodSandbox for \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\""
Jan 24 00:42:20.293835 containerd[1646]: time="2026-01-24T00:42:20.293795829Z" level=info msg="TearDown network for sandbox \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\" successfully"
Jan 24 00:42:20.293905 containerd[1646]: time="2026-01-24T00:42:20.293818529Z" level=info msg="StopPodSandbox for \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\" returns successfully"
Jan 24 00:42:20.294411 containerd[1646]: time="2026-01-24T00:42:20.294361405Z" level=info msg="RemovePodSandbox for \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\""
Jan 24 00:42:20.294411 containerd[1646]: time="2026-01-24T00:42:20.294403926Z" level=info msg="Forcibly stopping sandbox \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\""
Jan 24 00:42:20.294577 containerd[1646]: time="2026-01-24T00:42:20.294496288Z" level=info msg="TearDown network for sandbox \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\" successfully"
Jan 24 00:42:20.299465 containerd[1646]: time="2026-01-24T00:42:20.299387046Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 24 00:42:20.299465 containerd[1646]: time="2026-01-24T00:42:20.299450518Z" level=info msg="RemovePodSandbox \"4e8e4f5d474454c3681bb27464ea584a6063d56df490c347fb32b7c89acee584\" returns successfully"
Jan 24 00:42:21.729586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afbc8cc00a36b9448022c0a6dc97a20c22253ac7d3ad3bbb4aab66c26061af2e-rootfs.mount: Deactivated successfully.
Jan 24 00:42:21.748769 containerd[1646]: time="2026-01-24T00:42:21.748399505Z" level=info msg="shim disconnected" id=afbc8cc00a36b9448022c0a6dc97a20c22253ac7d3ad3bbb4aab66c26061af2e namespace=k8s.io
Jan 24 00:42:21.748769 containerd[1646]: time="2026-01-24T00:42:21.748489358Z" level=warning msg="cleaning up after shim disconnected" id=afbc8cc00a36b9448022c0a6dc97a20c22253ac7d3ad3bbb4aab66c26061af2e namespace=k8s.io
Jan 24 00:42:21.748769 containerd[1646]: time="2026-01-24T00:42:21.748513259Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:42:21.808398 kubelet[2740]: I0124 00:42:21.807790 2740 scope.go:117] "RemoveContainer" containerID="afbc8cc00a36b9448022c0a6dc97a20c22253ac7d3ad3bbb4aab66c26061af2e"
Jan 24 00:42:21.810619 containerd[1646]: time="2026-01-24T00:42:21.810579396Z" level=info msg="CreateContainer within sandbox \"af0884dbcaac70dac6943dd49fe15ea6ee3c8c8f922c7382c72c39f4c78f9d94\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 24 00:42:21.826237 containerd[1646]: time="2026-01-24T00:42:21.826136638Z" level=info msg="CreateContainer within sandbox \"af0884dbcaac70dac6943dd49fe15ea6ee3c8c8f922c7382c72c39f4c78f9d94\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"afe1eadb2039b54fc997d3e89dfde35e5483c94d67b1b0a4aaca2df42f765943\""
Jan 24 00:42:21.827363 containerd[1646]: time="2026-01-24T00:42:21.826963491Z" level=info msg="StartContainer for \"afe1eadb2039b54fc997d3e89dfde35e5483c94d67b1b0a4aaca2df42f765943\""
Jan 24 00:42:21.880932 kubelet[2740]: E0124 00:42:21.877103 2740 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43516->10.0.0.2:2379: read: connection timed out"
Jan 24 00:42:21.968574 containerd[1646]: time="2026-01-24T00:42:21.968517681Z" level=info msg="StartContainer for \"afe1eadb2039b54fc997d3e89dfde35e5483c94d67b1b0a4aaca2df42f765943\" returns successfully"
Jan 24 00:42:22.259606 kubelet[2740]: I0124 00:42:22.259533 2740 status_manager.go:890] "Failed to get status for pod" podUID="7b8cfa8ea2b4f5d953b28bd53184242f" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3213f37a88" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43446->10.0.0.2:2379: read: connection timed out"
Jan 24 00:42:22.725642 systemd[1]: run-containerd-runc-k8s.io-afe1eadb2039b54fc997d3e89dfde35e5483c94d67b1b0a4aaca2df42f765943-runc.ZZwYr6.mount: Deactivated successfully.
Jan 24 00:42:26.611516 kubelet[2740]: E0124 00:42:26.611286 2740 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43340->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-3213f37a88.188d83f97b80d7a3 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-3213f37a88,UID:9f0eae8e66a55252431479081818f6b7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-3213f37a88,},FirstTimestamp:2026-01-24 00:42:16.168609699 +0000 UTC m=+176.023980216,LastTimestamp:2026-01-24 00:42:16.168609699 +0000 UTC m=+176.023980216,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-3213f37a88,}"
Jan 24 00:42:27.452326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffe98ae799126e68ba14713a0e8879c27beba1f31fe835466a3584c1f1020b1d-rootfs.mount: Deactivated successfully.
Jan 24 00:42:27.461059 containerd[1646]: time="2026-01-24T00:42:27.460977112Z" level=info msg="shim disconnected" id=ffe98ae799126e68ba14713a0e8879c27beba1f31fe835466a3584c1f1020b1d namespace=k8s.io
Jan 24 00:42:27.461059 containerd[1646]: time="2026-01-24T00:42:27.461057378Z" level=warning msg="cleaning up after shim disconnected" id=ffe98ae799126e68ba14713a0e8879c27beba1f31fe835466a3584c1f1020b1d namespace=k8s.io
Jan 24 00:42:27.462053 containerd[1646]: time="2026-01-24T00:42:27.461073237Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:42:27.824376 kubelet[2740]: I0124 00:42:27.824321 2740 scope.go:117] "RemoveContainer" containerID="ffe98ae799126e68ba14713a0e8879c27beba1f31fe835466a3584c1f1020b1d"
Jan 24 00:42:27.826635 containerd[1646]: time="2026-01-24T00:42:27.826574969Z" level=info msg="CreateContainer within sandbox \"8015e92497f076e672c9c96d18d13b9a70d16642f774d4e54668258ec8f5462d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 24 00:42:27.846094 containerd[1646]: time="2026-01-24T00:42:27.846036757Z" level=info msg="CreateContainer within sandbox \"8015e92497f076e672c9c96d18d13b9a70d16642f774d4e54668258ec8f5462d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"90658471e1390da3c49c712ee50a2fb7b3b33c03b8f66c78f8c12f7427268d7b\""
Jan 24 00:42:27.846608 containerd[1646]: time="2026-01-24T00:42:27.846550119Z" level=info msg="StartContainer for \"90658471e1390da3c49c712ee50a2fb7b3b33c03b8f66c78f8c12f7427268d7b\""
Jan 24 00:42:27.986668 containerd[1646]: time="2026-01-24T00:42:27.986559806Z" level=info msg="StartContainer for \"90658471e1390da3c49c712ee50a2fb7b3b33c03b8f66c78f8c12f7427268d7b\" returns successfully"
Jan 24 00:42:28.450813 systemd[1]: run-containerd-runc-k8s.io-90658471e1390da3c49c712ee50a2fb7b3b33c03b8f66c78f8c12f7427268d7b-runc.R9x6TA.mount: Deactivated successfully.
Jan 24 00:42:31.878162 kubelet[2740]: E0124 00:42:31.877905 2740 controller.go:195] "Failed to update lease" err="Put \"https://46.62.237.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-3213f37a88?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 24 00:42:41.879757 kubelet[2740]: E0124 00:42:41.879462 2740 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4081-3-6-n-3213f37a88)"