Dec 16 13:02:04.855308 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:02:04.855337 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:02:04.855352 kernel: BIOS-provided physical RAM map:
Dec 16 13:02:04.855361 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 16 13:02:04.855368 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 16 13:02:04.855376 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 13:02:04.855385 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Dec 16 13:02:04.855393 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Dec 16 13:02:04.855400 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 16 13:02:04.855410 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 16 13:02:04.855418 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:02:04.855425 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 13:02:04.855433 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 13:02:04.855441 kernel: NX (Execute Disable) protection: active
Dec 16 13:02:04.855451 kernel: APIC: Static calls initialized
Dec 16 13:02:04.855461 kernel: SMBIOS 3.0.0 present.
Dec 16 13:02:04.855470 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Dec 16 13:02:04.855478 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:02:04.855486 kernel: Hypervisor detected: KVM
Dec 16 13:02:04.855495 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 16 13:02:04.855503 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:02:04.855511 kernel: kvm-clock: using sched offset of 4381218263 cycles
Dec 16 13:02:04.855519 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:02:04.855529 kernel: tsc: Detected 2445.404 MHz processor
Dec 16 13:02:04.855537 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:02:04.855548 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:02:04.855557 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 16 13:02:04.857615 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 13:02:04.857636 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:02:04.857646 kernel: Using GB pages for direct mapping
Dec 16 13:02:04.857656 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:02:04.857664 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Dec 16 13:02:04.857674 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:02:04.857682 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:02:04.857696 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:02:04.857705 kernel: ACPI: FACS 0x000000007CFE0000 000040
Dec 16 13:02:04.857713 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:02:04.857722 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:02:04.857731 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:02:04.857767 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:02:04.857780 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Dec 16 13:02:04.857791 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Dec 16 13:02:04.857800 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Dec 16 13:02:04.857810 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Dec 16 13:02:04.857819 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Dec 16 13:02:04.857827 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Dec 16 13:02:04.857837 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Dec 16 13:02:04.857845 kernel: No NUMA configuration found
Dec 16 13:02:04.857856 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Dec 16 13:02:04.857865 kernel: NODE_DATA(0) allocated [mem 0x7cfd4dc0-0x7cfdbfff]
Dec 16 13:02:04.857875 kernel: Zone ranges:
Dec 16 13:02:04.857884 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:02:04.857893 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Dec 16 13:02:04.857902 kernel: Normal empty
Dec 16 13:02:04.857911 kernel: Device empty
Dec 16 13:02:04.857919 kernel: Movable zone start for each node
Dec 16 13:02:04.857928 kernel: Early memory node ranges
Dec 16 13:02:04.857937 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 13:02:04.857948 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Dec 16 13:02:04.857957 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Dec 16 13:02:04.857966 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:02:04.857992 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 13:02:04.858001 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 16 13:02:04.858010 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 13:02:04.858019 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:02:04.858028 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:02:04.858038 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 13:02:04.858049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:02:04.858058 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:02:04.858066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:02:04.858076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:02:04.858085 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:02:04.858093 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 13:02:04.858103 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:02:04.858112 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:02:04.858120 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:02:04.858131 kernel: CPU topo: Max. threads per core: 1
Dec 16 13:02:04.858140 kernel: CPU topo: Num. cores per package: 2
Dec 16 13:02:04.858149 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:02:04.858158 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:02:04.858166 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:02:04.858175 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 16 13:02:04.858184 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:02:04.858193 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:02:04.858203 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:02:04.858212 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:02:04.858223 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:02:04.858232 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:02:04.858241 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 16 13:02:04.858252 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:02:04.858261 kernel: random: crng init done
Dec 16 13:02:04.858270 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:02:04.858279 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 13:02:04.858288 kernel: Fallback order for Node 0: 0
Dec 16 13:02:04.858299 kernel: Built 1 zonelists, mobility grouping on. Total pages: 511866
Dec 16 13:02:04.858308 kernel: Policy zone: DMA32
Dec 16 13:02:04.858317 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:02:04.858326 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:02:04.858335 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:02:04.858344 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:02:04.858353 kernel: Dynamic Preempt: voluntary
Dec 16 13:02:04.858362 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:02:04.858372 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:02:04.858384 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:02:04.858393 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:02:04.858402 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:02:04.858411 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:02:04.858420 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:02:04.858429 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:02:04.858438 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:02:04.858447 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:02:04.858456 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:02:04.858467 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 13:02:04.858476 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:02:04.858485 kernel: Console: colour VGA+ 80x25
Dec 16 13:02:04.858493 kernel: printk: legacy console [tty0] enabled
Dec 16 13:02:04.858503 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:02:04.858512 kernel: ACPI: Core revision 20240827
Dec 16 13:02:04.858527 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 13:02:04.858539 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:02:04.858549 kernel: x2apic enabled
Dec 16 13:02:04.858558 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:02:04.858594 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 13:02:04.858604 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fc319723, max_idle_ns: 440795258057 ns
Dec 16 13:02:04.858617 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Dec 16 13:02:04.858627 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:02:04.858636 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 13:02:04.858646 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 13:02:04.858655 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:02:04.858667 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:02:04.858677 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:02:04.858686 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 16 13:02:04.858696 kernel: active return thunk: retbleed_return_thunk
Dec 16 13:02:04.858719 kernel: RETBleed: Mitigation: untrained return thunk
Dec 16 13:02:04.858752 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 13:02:04.858762 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 13:02:04.858771 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:02:04.858781 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:02:04.858799 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:02:04.858809 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:02:04.858818 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 16 13:02:04.858828 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:02:04.858837 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:02:04.858847 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:02:04.858856 kernel: landlock: Up and running.
Dec 16 13:02:04.858865 kernel: SELinux: Initializing.
Dec 16 13:02:04.858875 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 13:02:04.858886 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 13:02:04.858896 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 16 13:02:04.858905 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 13:02:04.858915 kernel: ... version: 0
Dec 16 13:02:04.858925 kernel: ... bit width: 48
Dec 16 13:02:04.858934 kernel: ... generic registers: 6
Dec 16 13:02:04.858943 kernel: ... value mask: 0000ffffffffffff
Dec 16 13:02:04.858953 kernel: ... max period: 00007fffffffffff
Dec 16 13:02:04.858962 kernel: ... fixed-purpose events: 0
Dec 16 13:02:04.858989 kernel: ... event mask: 000000000000003f
Dec 16 13:02:04.858998 kernel: signal: max sigframe size: 1776
Dec 16 13:02:04.859008 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:02:04.859017 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:02:04.859027 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:02:04.859037 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:02:04.859046 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:02:04.859056 kernel: .... node #0, CPUs: #1
Dec 16 13:02:04.859065 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:02:04.859077 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Dec 16 13:02:04.859087 kernel: Memory: 1909588K/2047464K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 133332K reserved, 0K cma-reserved)
Dec 16 13:02:04.859097 kernel: devtmpfs: initialized
Dec 16 13:02:04.859106 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:02:04.859116 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:02:04.859125 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:02:04.859135 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:02:04.859144 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:02:04.859154 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:02:04.859165 kernel: audit: type=2000 audit(1765890121.884:1): state=initialized audit_enabled=0 res=1
Dec 16 13:02:04.859174 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:02:04.859184 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:02:04.859193 kernel: cpuidle: using governor menu
Dec 16 13:02:04.859202 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:02:04.859212 kernel: dca service started, version 1.12.1
Dec 16 13:02:04.859221 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 16 13:02:04.859231 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:02:04.859241 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:02:04.859252 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:02:04.859261 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:02:04.859271 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:02:04.859281 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:02:04.859290 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:02:04.859299 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:02:04.859309 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:02:04.859318 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:02:04.859327 kernel: ACPI: Interpreter enabled
Dec 16 13:02:04.859338 kernel: ACPI: PM: (supports S0 S5)
Dec 16 13:02:04.859348 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:02:04.859357 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:02:04.859367 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:02:04.859376 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 13:02:04.859386 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:02:04.861699 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:02:04.861863 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 13:02:04.862002 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 13:02:04.862020 kernel: PCI host bridge to bus 0000:00
Dec 16 13:02:04.862132 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:02:04.862265 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:02:04.862377 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:02:04.862517 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Dec 16 13:02:04.862677 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 16 13:02:04.862773 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 16 13:02:04.862854 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:02:04.862990 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:02:04.863131 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:02:04.863275 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfb800000-0xfbffffff pref]
Dec 16 13:02:04.863376 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfd200000-0xfd203fff 64bit pref]
Dec 16 13:02:04.863536 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Dec 16 13:02:04.866950 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Dec 16 13:02:04.867099 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:02:04.867250 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:02:04.867356 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Dec 16 13:02:04.867450 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 13:02:04.867543 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 13:02:04.867677 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 13:02:04.867790 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:02:04.867886 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Dec 16 13:02:04.868010 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 13:02:04.868112 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 13:02:04.868239 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 13:02:04.868348 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:02:04.868451 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Dec 16 13:02:04.868543 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 13:02:04.870729 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 13:02:04.870843 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 13:02:04.870954 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:02:04.871075 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Dec 16 13:02:04.871170 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 13:02:04.871271 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 13:02:04.871363 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 13:02:04.871523 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:02:04.871664 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Dec 16 13:02:04.871762 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 13:02:04.871855 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 16 13:02:04.871946 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 13:02:04.872081 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:02:04.872177 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Dec 16 13:02:04.872268 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 16 13:02:04.872360 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 16 13:02:04.872450 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 13:02:04.872549 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:02:04.875110 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Dec 16 13:02:04.875312 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 16 13:02:04.875427 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 16 13:02:04.875525 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 13:02:04.875659 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:02:04.875761 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Dec 16 13:02:04.875854 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 16 13:02:04.875956 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 16 13:02:04.876087 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 13:02:04.876193 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:02:04.876533 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Dec 16 13:02:04.879718 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 16 13:02:04.879822 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 16 13:02:04.879918 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 13:02:04.880053 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:02:04.880138 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 13:02:04.880207 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 13:02:04.880270 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc040-0xc05f]
Dec 16 13:02:04.880328 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea1a000-0xfea1afff]
Dec 16 13:02:04.880393 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 13:02:04.880452 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 16 13:02:04.880525 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 16 13:02:04.880608 kernel: pci 0000:01:00.0: BAR 1 [mem 0xfe880000-0xfe880fff]
Dec 16 13:02:04.880673 kernel: pci 0000:01:00.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 16 13:02:04.880735 kernel: pci 0000:01:00.0: ROM [mem 0xfe800000-0xfe87ffff pref]
Dec 16 13:02:04.880794 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 13:02:04.880862 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Dec 16 13:02:04.880924 kernel: pci 0000:02:00.0: BAR 0 [mem 0xfe600000-0xfe603fff 64bit]
Dec 16 13:02:04.881004 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 13:02:04.881077 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Dec 16 13:02:04.881140 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe400000-0xfe400fff]
Dec 16 13:02:04.881202 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfcc00000-0xfcc03fff 64bit pref]
Dec 16 13:02:04.881261 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 13:02:04.881329 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 13:02:04.881397 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 16 13:02:04.881459 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 13:02:04.881528 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 13:02:04.883637 kernel: pci 0000:05:00.0: BAR 1 [mem 0xfe000000-0xfe000fff]
Dec 16 13:02:04.883748 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfc800000-0xfc803fff 64bit pref]
Dec 16 13:02:04.883819 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 13:02:04.883897 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Dec 16 13:02:04.883994 kernel: pci 0000:06:00.0: BAR 1 [mem 0xfde00000-0xfde00fff]
Dec 16 13:02:04.884066 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfc600000-0xfc603fff 64bit pref]
Dec 16 13:02:04.884132 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 16 13:02:04.884142 kernel: acpiphp: Slot [0] registered
Dec 16 13:02:04.884213 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 16 13:02:04.884278 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfdc80000-0xfdc80fff]
Dec 16 13:02:04.884341 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfc400000-0xfc403fff 64bit pref]
Dec 16 13:02:04.884410 kernel: pci 0000:07:00.0: ROM [mem 0xfdc00000-0xfdc7ffff pref]
Dec 16 13:02:04.884474 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 16 13:02:04.884484 kernel: acpiphp: Slot [0-2] registered
Dec 16 13:02:04.884545 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 16 13:02:04.884555 kernel: acpiphp: Slot [0-3] registered
Dec 16 13:02:04.884747 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 16 13:02:04.884762 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:02:04.884774 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:02:04.884780 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:02:04.884787 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:02:04.884793 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 13:02:04.884799 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 13:02:04.884806 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 13:02:04.884812 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 13:02:04.884819 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 13:02:04.884825 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 13:02:04.884834 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 13:02:04.884840 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 13:02:04.884846 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 13:02:04.884853 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 13:02:04.884859 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 13:02:04.884865 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 13:02:04.884872 kernel: iommu: Default domain type: Translated
Dec 16 13:02:04.884878 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:02:04.884885 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:02:04.884893 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:02:04.884900 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 16 13:02:04.884906 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Dec 16 13:02:04.884995 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 13:02:04.885062 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 13:02:04.885122 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:02:04.885131 kernel: vgaarb: loaded
Dec 16 13:02:04.885138 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 13:02:04.885144 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 13:02:04.885154 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:02:04.885160 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:02:04.885167 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:02:04.885174 kernel: pnp: PnP ACPI init
Dec 16 13:02:04.885247 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 16 13:02:04.885259 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 13:02:04.885265 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:02:04.885272 kernel: NET: Registered PF_INET protocol family
Dec 16 13:02:04.885280 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:02:04.885287 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 13:02:04.885293 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:02:04.885300 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:02:04.885306 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 13:02:04.885312 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 13:02:04.885319 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 13:02:04.885325 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 13:02:04.885332 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:02:04.885339 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:02:04.885404 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 16 13:02:04.885470 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 16 13:02:04.885535 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 16 13:02:04.885619 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned
Dec 16 13:02:04.885684 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned
Dec 16 13:02:04.885762 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned
Dec 16 13:02:04.885830 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 13:02:04.885895 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 13:02:04.885955 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 13:02:04.886038 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 13:02:04.886102 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 13:02:04.886161 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 13:02:04.886224 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 13:02:04.886285 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 13:02:04.886345 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 13:02:04.886408 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 13:02:04.886469 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 13:02:04.886535 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 13:02:04.886666 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 13:02:04.886732 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Dec 16 13:02:04.886792 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 13:02:04.886853 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Dec 16 13:02:04.886914 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Dec 16 13:02:04.887016 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 13:02:04.887081 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Dec 16 13:02:04.887150 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Dec 16 13:02:04.887213 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Dec 16 13:02:04.887279 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 13:02:04.887340 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Dec 16 13:02:04.887404 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Dec 16 13:02:04.887495 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Dec 16 13:02:04.887559 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 13:02:04.887644 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Dec 16 13:02:04.887705 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Dec 16 13:02:04.887765 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 16 13:02:04.887825 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 13:02:04.887886 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 13:02:04.887941 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 13:02:04.888020 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 13:02:04.888078 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Dec 16 13:02:04.888131 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 16 13:02:04.888188 kernel: pci_bus 0000:00: 
resource 9 [mem 0x100000000-0x8ffffffff window] Dec 16 13:02:04.888255 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 16 13:02:04.888353 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Dec 16 13:02:04.888470 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 16 13:02:04.888596 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 13:02:04.888669 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 16 13:02:04.888733 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 13:02:04.888836 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 16 13:02:04.888906 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 13:02:04.888985 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 16 13:02:04.889050 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 13:02:04.889114 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Dec 16 13:02:04.889170 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 13:02:04.889231 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Dec 16 13:02:04.889285 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 16 13:02:04.889338 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 13:02:04.889403 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Dec 16 13:02:04.889458 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Dec 16 13:02:04.889512 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 13:02:04.889629 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Dec 16 13:02:04.889691 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 16 13:02:04.889746 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 13:02:04.889756 kernel: ACPI: \_SB_.GSIG: 
Enabled at IRQ 22 Dec 16 13:02:04.889766 kernel: PCI: CLS 0 bytes, default 64 Dec 16 13:02:04.889774 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fc319723, max_idle_ns: 440795258057 ns Dec 16 13:02:04.889781 kernel: Initialise system trusted keyrings Dec 16 13:02:04.889788 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 16 13:02:04.889794 kernel: Key type asymmetric registered Dec 16 13:02:04.889801 kernel: Asymmetric key parser 'x509' registered Dec 16 13:02:04.889807 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 13:02:04.889815 kernel: io scheduler mq-deadline registered Dec 16 13:02:04.889821 kernel: io scheduler kyber registered Dec 16 13:02:04.889829 kernel: io scheduler bfq registered Dec 16 13:02:04.889891 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 16 13:02:04.889953 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 16 13:02:04.890032 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 16 13:02:04.890094 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 16 13:02:04.890155 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 16 13:02:04.890215 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 16 13:02:04.890274 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 16 13:02:04.890333 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 16 13:02:04.890397 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 16 13:02:04.890456 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 16 13:02:04.890515 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 16 13:02:04.890598 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 16 13:02:04.890665 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 16 13:02:04.890725 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 16 13:02:04.890785 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 16 
13:02:04.890849 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 16 13:02:04.890859 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 13:02:04.890915 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Dec 16 13:02:04.890998 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Dec 16 13:02:04.891009 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 13:02:04.891016 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Dec 16 13:02:04.891026 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 13:02:04.891033 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:02:04.891040 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 13:02:04.891047 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 13:02:04.891053 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 13:02:04.891060 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 13:02:04.891132 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 13:02:04.891189 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 13:02:04.891247 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T13:02:04 UTC (1765890124) Dec 16 13:02:04.891302 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 16 13:02:04.891310 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 16 13:02:04.891318 kernel: NET: Registered PF_INET6 protocol family Dec 16 13:02:04.891325 kernel: Segment Routing with IPv6 Dec 16 13:02:04.891331 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 13:02:04.891338 kernel: NET: Registered PF_PACKET protocol family Dec 16 13:02:04.891344 kernel: Key type dns_resolver registered Dec 16 13:02:04.891351 kernel: IPI shorthand broadcast: enabled Dec 16 13:02:04.891360 kernel: sched_clock: Marking stable (3130011565, 155691524)->(3300637449, -14934360) Dec 16 13:02:04.891366 kernel: registered taskstats 
version 1 Dec 16 13:02:04.891373 kernel: Loading compiled-in X.509 certificates Dec 16 13:02:04.891379 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 16 13:02:04.891386 kernel: Demotion targets for Node 0: null Dec 16 13:02:04.891392 kernel: Key type .fscrypt registered Dec 16 13:02:04.891399 kernel: Key type fscrypt-provisioning registered Dec 16 13:02:04.891405 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 13:02:04.891412 kernel: ima: Allocated hash algorithm: sha1 Dec 16 13:02:04.891420 kernel: ima: No architecture policies found Dec 16 13:02:04.891426 kernel: clk: Disabling unused clocks Dec 16 13:02:04.891433 kernel: Warning: unable to open an initial console. Dec 16 13:02:04.891440 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 16 13:02:04.891447 kernel: Write protecting the kernel read-only data: 40960k Dec 16 13:02:04.891453 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 16 13:02:04.891460 kernel: Run /init as init process Dec 16 13:02:04.891466 kernel: with arguments: Dec 16 13:02:04.891473 kernel: /init Dec 16 13:02:04.891481 kernel: with environment: Dec 16 13:02:04.891487 kernel: HOME=/ Dec 16 13:02:04.891493 kernel: TERM=linux Dec 16 13:02:04.891501 systemd[1]: Successfully made /usr/ read-only. Dec 16 13:02:04.891511 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:02:04.891519 systemd[1]: Detected virtualization kvm. Dec 16 13:02:04.891526 systemd[1]: Detected architecture x86-64. Dec 16 13:02:04.891533 systemd[1]: Running in initrd. 
Dec 16 13:02:04.891542 systemd[1]: No hostname configured, using default hostname. Dec 16 13:02:04.891549 systemd[1]: Hostname set to . Dec 16 13:02:04.891556 systemd[1]: Initializing machine ID from VM UUID. Dec 16 13:02:04.891590 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:02:04.891598 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:02:04.891606 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:02:04.891613 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:02:04.891620 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:02:04.891629 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:02:04.891637 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:02:04.891645 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 13:02:04.891653 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 13:02:04.891660 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:02:04.891667 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:02:04.891674 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:02:04.891683 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:02:04.891690 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:02:04.891696 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:02:04.891703 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Dec 16 13:02:04.891711 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:02:04.891718 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:02:04.891725 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:02:04.891732 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:02:04.891740 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:02:04.891748 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:02:04.891754 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:02:04.891761 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:02:04.891769 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:02:04.891776 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:02:04.891783 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:02:04.891790 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:02:04.891797 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:02:04.891805 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:02:04.891812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:02:04.891819 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:02:04.891826 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:02:04.891836 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:02:04.891865 systemd-journald[197]: Collecting audit messages is disabled. 
Dec 16 13:02:04.891886 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:02:04.891893 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:02:04.891903 kernel: Bridge firewalling registered Dec 16 13:02:04.891910 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:02:04.891918 systemd-journald[197]: Journal started Dec 16 13:02:04.891935 systemd-journald[197]: Runtime Journal (/run/log/journal/13aafeee2e5c4496a1dd6bebb0c72f8c) is 4.7M, max 38.3M, 33.5M free. Dec 16 13:02:04.857807 systemd-modules-load[198]: Inserted module 'overlay' Dec 16 13:02:04.926648 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:02:04.883126 systemd-modules-load[198]: Inserted module 'br_netfilter' Dec 16 13:02:04.927405 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:02:04.928472 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:02:04.931699 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:02:04.933710 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:02:04.936803 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:02:04.950803 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:02:04.965895 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:02:04.968375 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:02:04.969842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 16 13:02:04.970367 systemd-tmpfiles[215]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:02:04.972788 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:02:04.975071 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:02:04.977126 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:02:04.989083 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:02:05.009603 systemd-resolved[236]: Positive Trust Anchors: Dec 16 13:02:05.010289 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:02:05.010316 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:02:05.015341 systemd-resolved[236]: Defaulting to hostname 'linux'. Dec 16 13:02:05.016107 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:02:05.016858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Dec 16 13:02:05.061643 kernel: SCSI subsystem initialized Dec 16 13:02:05.069595 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:02:05.079609 kernel: iscsi: registered transport (tcp) Dec 16 13:02:05.096624 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:02:05.096694 kernel: QLogic iSCSI HBA Driver Dec 16 13:02:05.112730 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:02:05.130672 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:02:05.131558 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:02:05.171342 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:02:05.173150 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 13:02:05.236615 kernel: raid6: avx2x4 gen() 25757 MB/s Dec 16 13:02:05.253599 kernel: raid6: avx2x2 gen() 29039 MB/s Dec 16 13:02:05.271813 kernel: raid6: avx2x1 gen() 20680 MB/s Dec 16 13:02:05.271867 kernel: raid6: using algorithm avx2x2 gen() 29039 MB/s Dec 16 13:02:05.290634 kernel: raid6: .... xor() 30512 MB/s, rmw enabled Dec 16 13:02:05.290693 kernel: raid6: using avx2x2 recovery algorithm Dec 16 13:02:05.309613 kernel: xor: automatically using best checksumming function avx Dec 16 13:02:05.443615 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:02:05.450718 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:02:05.453333 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:02:05.487964 systemd-udevd[445]: Using default interface naming scheme 'v255'. Dec 16 13:02:05.495434 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:02:05.497645 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Dec 16 13:02:05.529874 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation Dec 16 13:02:05.556795 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:02:05.559921 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:02:05.615783 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:02:05.618986 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 13:02:05.690419 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:02:05.690471 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Dec 16 13:02:05.734615 kernel: scsi host0: Virtio SCSI HBA Dec 16 13:02:05.738606 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 16 13:02:05.755625 kernel: AES CTR mode by8 optimization enabled Dec 16 13:02:05.768796 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:02:05.768932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:02:05.770880 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:02:05.772162 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:02:05.780365 kernel: sd 0:0:0:0: Power-on or device reset occurred Dec 16 13:02:05.782970 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 16 13:02:05.783125 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 16 13:02:05.783633 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Dec 16 13:02:05.783748 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 16 13:02:05.796016 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 13:02:05.796060 kernel: GPT:17805311 != 80003071 Dec 16 13:02:05.796070 kernel: GPT:Alternate GPT header not at the end of the disk. 
Dec 16 13:02:05.796865 kernel: GPT:17805311 != 80003071 Dec 16 13:02:05.797951 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 13:02:05.799722 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 13:02:05.802651 kernel: ACPI: bus type USB registered Dec 16 13:02:05.802680 kernel: usbcore: registered new interface driver usbfs Dec 16 13:02:05.802689 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 16 13:02:05.804390 kernel: libata version 3.00 loaded. Dec 16 13:02:05.805513 kernel: usbcore: registered new interface driver hub Dec 16 13:02:05.812594 kernel: usbcore: registered new device driver usb Dec 16 13:02:05.834595 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 13:02:05.851893 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 13:02:05.852956 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 13:02:05.852996 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 13:02:05.853112 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 13:02:05.853195 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 13:02:05.854597 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 16 13:02:05.854716 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 16 13:02:05.855604 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 16 13:02:05.858587 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 16 13:02:05.858702 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 16 13:02:05.858791 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 16 13:02:05.858891 kernel: scsi host1: ahci Dec 16 13:02:05.858981 kernel: scsi host2: ahci Dec 16 13:02:05.863735 kernel: hub 1-0:1.0: USB hub found Dec 16 13:02:05.863882 kernel: hub 1-0:1.0: 4 ports detected Dec 16 13:02:05.866585 kernel: scsi host3: ahci Dec 16 13:02:05.866700 kernel: scsi host4: ahci 
Dec 16 13:02:05.866780 kernel: scsi host5: ahci Dec 16 13:02:05.868594 kernel: scsi host6: ahci Dec 16 13:02:05.868704 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 lpm-pol 1 Dec 16 13:02:05.868715 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 lpm-pol 1 Dec 16 13:02:05.868723 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 lpm-pol 1 Dec 16 13:02:05.868731 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 lpm-pol 1 Dec 16 13:02:05.868742 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 lpm-pol 1 Dec 16 13:02:05.868750 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 lpm-pol 1 Dec 16 13:02:05.871596 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 16 13:02:05.879768 kernel: hub 2-0:1.0: USB hub found Dec 16 13:02:05.880006 kernel: hub 2-0:1.0: 4 ports detected Dec 16 13:02:05.889907 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 16 13:02:05.932687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:02:05.957963 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 16 13:02:05.964513 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 16 13:02:05.965111 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 16 13:02:05.973527 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 16 13:02:05.975510 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:02:05.998765 disk-uuid[608]: Primary Header is updated. Dec 16 13:02:05.998765 disk-uuid[608]: Secondary Entries is updated. Dec 16 13:02:05.998765 disk-uuid[608]: Secondary Header is updated. 
Dec 16 13:02:06.012596 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 13:02:06.026602 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 13:02:06.109598 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 16 13:02:06.184943 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 16 13:02:06.184999 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 13:02:06.185018 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 13:02:06.185028 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 13:02:06.185037 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 16 13:02:06.185584 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 13:02:06.186595 kernel: ata1.00: LPM support broken, forcing max_power Dec 16 13:02:06.189160 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 16 13:02:06.189182 kernel: ata1.00: applying bridge limits Dec 16 13:02:06.190824 kernel: ata1.00: LPM support broken, forcing max_power Dec 16 13:02:06.192933 kernel: ata1.00: configured for UDMA/100 Dec 16 13:02:06.193604 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 16 13:02:06.236329 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 16 13:02:06.236523 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 13:02:06.244587 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Dec 16 13:02:06.250596 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 13:02:06.254766 kernel: usbcore: registered new interface driver usbhid Dec 16 13:02:06.254816 kernel: usbhid: USB HID core driver Dec 16 13:02:06.261105 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Dec 16 13:02:06.261141 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 16 13:02:06.559313 systemd[1]: Finished dracut-initqueue.service - dracut initqueue 
hook. Dec 16 13:02:06.560658 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:02:06.561904 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:02:06.563303 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:02:06.565377 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:02:06.602639 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:02:07.035622 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 13:02:07.035737 disk-uuid[609]: The operation has completed successfully. Dec 16 13:02:07.121944 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:02:07.122115 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:02:07.172493 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 13:02:07.199245 sh[644]: Success Dec 16 13:02:07.229118 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:02:07.229196 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:02:07.235609 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:02:07.248689 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 16 13:02:07.293967 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:02:07.297652 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 13:02:07.306009 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 16 13:02:07.321604 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (656) Dec 16 13:02:07.321649 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 13:02:07.324928 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:02:07.333810 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 13:02:07.333847 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:02:07.336417 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:02:07.338470 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 13:02:07.340203 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:02:07.341790 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:02:07.343727 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:02:07.345707 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:02:07.372593 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (691) Dec 16 13:02:07.378041 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:02:07.378095 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:02:07.383791 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 13:02:07.383836 kernel: BTRFS info (device sda6): turning on async discard Dec 16 13:02:07.386445 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 13:02:07.393612 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:02:07.394920 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 16 13:02:07.397727 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 13:02:07.463119 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:02:07.471693 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:02:07.508227 ignition[752]: Ignition 2.22.0 Dec 16 13:02:07.508703 ignition[752]: Stage: fetch-offline Dec 16 13:02:07.508736 ignition[752]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:02:07.511170 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:02:07.508743 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 13:02:07.508810 ignition[752]: parsed url from cmdline: "" Dec 16 13:02:07.508813 ignition[752]: no config URL provided Dec 16 13:02:07.508817 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:02:07.508823 ignition[752]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:02:07.514168 systemd-networkd[825]: lo: Link UP Dec 16 13:02:07.508827 ignition[752]: failed to fetch config: resource requires networking Dec 16 13:02:07.514172 systemd-networkd[825]: lo: Gained carrier Dec 16 13:02:07.509258 ignition[752]: Ignition finished successfully Dec 16 13:02:07.515866 systemd-networkd[825]: Enumeration completed Dec 16 13:02:07.516005 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:02:07.516286 systemd-networkd[825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:02:07.516289 systemd-networkd[825]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:02:07.517306 systemd-networkd[825]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 16 13:02:07.517309 systemd-networkd[825]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:02:07.518114 systemd[1]: Reached target network.target - Network.
Dec 16 13:02:07.518260 systemd-networkd[825]: eth0: Link UP
Dec 16 13:02:07.518426 systemd-networkd[825]: eth1: Link UP
Dec 16 13:02:07.518579 systemd-networkd[825]: eth0: Gained carrier
Dec 16 13:02:07.518587 systemd-networkd[825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:02:07.521676 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 13:02:07.523924 systemd-networkd[825]: eth1: Gained carrier
Dec 16 13:02:07.523934 systemd-networkd[825]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:02:07.546491 ignition[834]: Ignition 2.22.0
Dec 16 13:02:07.546504 ignition[834]: Stage: fetch
Dec 16 13:02:07.546648 ignition[834]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:02:07.546657 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 13:02:07.546748 ignition[834]: parsed url from cmdline: ""
Dec 16 13:02:07.546751 ignition[834]: no config URL provided
Dec 16 13:02:07.546755 ignition[834]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:02:07.546760 ignition[834]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:02:07.546782 ignition[834]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Dec 16 13:02:07.546896 ignition[834]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 16 13:02:07.555621 systemd-networkd[825]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Dec 16 13:02:07.589640 systemd-networkd[825]: eth0: DHCPv4 address 77.42.28.57/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 16 13:02:07.747308 ignition[834]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Dec 16 13:02:07.750732 ignition[834]: GET result: OK
Dec 16 13:02:07.750798 ignition[834]: parsing config with SHA512: fa708c50b7bb008e37ff908abcae557acf0b19654042bd2fae3378338a897cb17e1781975febe58a13fe4c69d30e936362c4ad68177d0615079cd1494e65987d
Dec 16 13:02:07.757144 unknown[834]: fetched base config from "system"
Dec 16 13:02:07.757162 unknown[834]: fetched base config from "system"
Dec 16 13:02:07.757630 ignition[834]: fetch: fetch complete
Dec 16 13:02:07.757167 unknown[834]: fetched user config from "hetzner"
Dec 16 13:02:07.757639 ignition[834]: fetch: fetch passed
Dec 16 13:02:07.757687 ignition[834]: Ignition finished successfully
Dec 16 13:02:07.759932 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 13:02:07.761646 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 13:02:07.788330 ignition[842]: Ignition 2.22.0
Dec 16 13:02:07.788346 ignition[842]: Stage: kargs
Dec 16 13:02:07.788471 ignition[842]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:02:07.788480 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 13:02:07.789091 ignition[842]: kargs: kargs passed
Dec 16 13:02:07.791925 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 13:02:07.789123 ignition[842]: Ignition finished successfully
Dec 16 13:02:07.794702 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 13:02:07.828529 ignition[848]: Ignition 2.22.0
Dec 16 13:02:07.828543 ignition[848]: Stage: disks
Dec 16 13:02:07.828709 ignition[848]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:02:07.828719 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 13:02:07.829481 ignition[848]: disks: disks passed
Dec 16 13:02:07.830593 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 13:02:07.829519 ignition[848]: Ignition finished successfully
Dec 16 13:02:07.831811 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 13:02:07.832630 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 13:02:07.833753 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:02:07.834788 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:02:07.836155 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:02:07.838248 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 13:02:07.875663 systemd-fsck[857]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Dec 16 13:02:07.878711 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 13:02:07.882094 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 13:02:07.976602 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 16 13:02:07.977768 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 13:02:07.978761 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:02:07.980808 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:02:07.982700 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 13:02:07.990310 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 16 13:02:07.992526 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 13:02:07.993536 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:02:07.996502 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 13:02:08.001617 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (865)
Dec 16 13:02:08.001784 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:02:08.009526 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:02:08.009547 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:02:08.017611 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:02:08.017643 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:02:08.017655 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:02:08.020334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:02:08.053489 coreos-metadata[867]: Dec 16 13:02:08.053 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 16 13:02:08.056699 coreos-metadata[867]: Dec 16 13:02:08.054 INFO Fetch successful
Dec 16 13:02:08.056699 coreos-metadata[867]: Dec 16 13:02:08.055 INFO wrote hostname ci-4459-2-2-2-e3531eb256 to /sysroot/etc/hostname
Dec 16 13:02:08.057066 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:02:08.065393 initrd-setup-root[893]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 13:02:08.069845 initrd-setup-root[900]: cut: /sysroot/etc/group: No such file or directory
Dec 16 13:02:08.073354 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 13:02:08.077451 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 13:02:08.161984 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 13:02:08.163882 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 13:02:08.167918 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 13:02:08.184601 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:02:08.202592 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:02:08.217341 ignition[982]: INFO : Ignition 2.22.0
Dec 16 13:02:08.217341 ignition[982]: INFO : Stage: mount
Dec 16 13:02:08.219742 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:02:08.219742 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 13:02:08.219742 ignition[982]: INFO : mount: mount passed
Dec 16 13:02:08.219742 ignition[982]: INFO : Ignition finished successfully
Dec 16 13:02:08.220243 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:02:08.223703 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:02:08.318426 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:02:08.320608 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:02:08.340714 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (994)
Dec 16 13:02:08.344163 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:02:08.344203 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:02:08.352258 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:02:08.352301 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:02:08.352319 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:02:08.356661 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:02:08.391656 ignition[1010]: INFO : Ignition 2.22.0
Dec 16 13:02:08.391656 ignition[1010]: INFO : Stage: files
Dec 16 13:02:08.393620 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:02:08.393620 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 13:02:08.393620 ignition[1010]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:02:08.396820 ignition[1010]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:02:08.396820 ignition[1010]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:02:08.399164 ignition[1010]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:02:08.400361 ignition[1010]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:02:08.400361 ignition[1010]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:02:08.399728 unknown[1010]: wrote ssh authorized keys file for user: core
Dec 16 13:02:08.408205 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:02:08.408205 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 13:02:08.542370 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:02:08.846788 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:02:08.846788 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:02:08.849044 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 16 13:02:09.106197 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 16 13:02:09.195700 systemd-networkd[825]: eth1: Gained IPv6LL
Dec 16 13:02:09.218942 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:02:09.220340 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:02:09.220340 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:02:09.220340 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:02:09.220340 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:02:09.220340 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:02:09.220340 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:02:09.220340 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:02:09.220340 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:02:09.229645 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:02:09.229645 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:02:09.229645 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:02:09.229645 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:02:09.229645 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:02:09.229645 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 16 13:02:09.451802 systemd-networkd[825]: eth0: Gained IPv6LL
Dec 16 13:02:09.620733 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 16 13:02:09.878442 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:02:09.878442 ignition[1010]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 16 13:02:09.882489 ignition[1010]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:02:09.884848 ignition[1010]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:02:09.884848 ignition[1010]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 16 13:02:09.884848 ignition[1010]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 16 13:02:09.890113 ignition[1010]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 16 13:02:09.890113 ignition[1010]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 16 13:02:09.890113 ignition[1010]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 16 13:02:09.890113 ignition[1010]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:02:09.890113 ignition[1010]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:02:09.890113 ignition[1010]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:02:09.890113 ignition[1010]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:02:09.890113 ignition[1010]: INFO : files: files passed
Dec 16 13:02:09.890113 ignition[1010]: INFO : Ignition finished successfully
Dec 16 13:02:09.887701 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:02:09.891776 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:02:09.896726 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:02:09.915673 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:02:09.916676 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:02:09.922737 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:02:09.922737 initrd-setup-root-after-ignition[1041]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:02:09.925713 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:02:09.925556 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:02:09.927447 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:02:09.930217 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:02:09.988294 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:02:09.988462 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:02:09.990774 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:02:09.992968 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:02:09.995923 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:02:09.998782 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:02:10.028359 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:02:10.032282 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:02:10.068503 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:02:10.071706 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:02:10.073430 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:02:10.075788 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:02:10.076057 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:02:10.078691 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:02:10.080166 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:02:10.082412 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:02:10.084507 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:02:10.086608 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:02:10.089102 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:02:10.091406 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:02:10.093710 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:02:10.096271 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:02:10.098485 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:02:10.100989 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:02:10.102983 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:02:10.103168 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:02:10.105699 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:02:10.107175 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:02:10.116970 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:02:10.117252 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:02:10.119232 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:02:10.119596 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:02:10.122502 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:02:10.122733 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:02:10.124166 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:02:10.124430 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:02:10.126068 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 13:02:10.126300 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:02:10.130877 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:02:10.134910 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:02:10.138132 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:02:10.139349 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:02:10.141692 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:02:10.141936 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:02:10.153900 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:02:10.154024 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:02:10.179839 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:02:10.183488 ignition[1065]: INFO : Ignition 2.22.0
Dec 16 13:02:10.183488 ignition[1065]: INFO : Stage: umount
Dec 16 13:02:10.183488 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:02:10.183488 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 13:02:10.183488 ignition[1065]: INFO : umount: umount passed
Dec 16 13:02:10.183488 ignition[1065]: INFO : Ignition finished successfully
Dec 16 13:02:10.186940 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:02:10.187062 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:02:10.191102 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:02:10.191222 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:02:10.195744 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:02:10.195859 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:02:10.203331 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:02:10.203404 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:02:10.206077 systemd[1]: Stopped target network.target - Network.
Dec 16 13:02:10.208702 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:02:10.208775 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:02:10.214341 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:02:10.215418 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:02:10.222009 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:02:10.223496 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:02:10.227821 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:02:10.229555 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:02:10.229683 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:02:10.231182 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:02:10.231222 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:02:10.232964 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:02:10.233042 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:02:10.234729 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:02:10.234781 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:02:10.236544 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:02:10.238496 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:02:10.241396 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:02:10.241525 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:02:10.243241 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:02:10.243363 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:02:10.246398 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:02:10.246864 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:02:10.252153 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:02:10.252512 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:02:10.252969 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:02:10.255706 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:02:10.255976 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:02:10.256150 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:02:10.258939 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:02:10.259710 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:02:10.261049 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:02:10.261089 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:02:10.263453 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:02:10.264982 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:02:10.265039 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:02:10.267652 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:02:10.267705 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:02:10.270432 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:02:10.270487 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:02:10.271339 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:02:10.278516 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:02:10.288980 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:02:10.292847 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:02:10.294687 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:02:10.294800 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:02:10.296532 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:02:10.296984 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:02:10.298157 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:02:10.298196 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:02:10.299556 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:02:10.299636 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:02:10.301702 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:02:10.301755 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:02:10.303036 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:02:10.303096 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:02:10.305508 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:02:10.311015 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:02:10.311087 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:02:10.312715 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:02:10.312776 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:02:10.314117 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 13:02:10.314173 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:02:10.317773 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:02:10.317829 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:02:10.319982 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:02:10.320038 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:02:10.325947 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:02:10.326068 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:02:10.327653 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:02:10.329870 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:02:10.351933 systemd[1]: Switching root.
Dec 16 13:02:10.399047 systemd-journald[197]: Journal stopped
Dec 16 13:02:11.222587 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:02:11.222634 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:02:11.222645 kernel: SELinux: policy capability open_perms=1
Dec 16 13:02:11.222656 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:02:11.222669 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:02:11.222677 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:02:11.222685 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:02:11.222692 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:02:11.222707 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:02:11.222715 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:02:11.222723 kernel: audit: type=1403 audit(1765890130.529:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:02:11.222733 systemd[1]: Successfully loaded SELinux policy in 69.265ms.
Dec 16 13:02:11.222746 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.661ms.
Dec 16 13:02:11.222757 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:02:11.222766 systemd[1]: Detected virtualization kvm.
Dec 16 13:02:11.222777 systemd[1]: Detected architecture x86-64.
Dec 16 13:02:11.222786 systemd[1]: Detected first boot.
Dec 16 13:02:11.222794 systemd[1]: Hostname set to .
Dec 16 13:02:11.222802 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:02:11.222811 zram_generator::config[1109]: No configuration found.
Dec 16 13:02:11.222824 kernel: Guest personality initialized and is inactive
Dec 16 13:02:11.222833 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:02:11.222840 kernel: Initialized host personality
Dec 16 13:02:11.222848 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:02:11.222857 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:02:11.222866 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:02:11.222874 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:02:11.222885 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:02:11.222895 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:02:11.222904 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:02:11.222913 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:02:11.222921 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:02:11.222930 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:02:11.222939 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:02:11.222947 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:02:11.222957 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:02:11.222965 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:02:11.222974 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:02:11.222983 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:02:11.222991 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:02:11.223000 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:02:11.223010 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:02:11.223019 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:02:11.223028 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:02:11.223037 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:02:11.223046 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:02:11.223054 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:02:11.223062 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:02:11.223070 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:02:11.223079 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:02:11.223088 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:02:11.223097 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:02:11.223105 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:02:11.223114 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:02:11.223122 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:02:11.223131 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:02:11.223139 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:02:11.223148 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:02:11.223156 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:02:11.223165 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:02:11.223205 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:02:11.223214 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:02:11.223223 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:02:11.223231 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:02:11.223240 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:02:11.223248 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:02:11.223256 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:02:11.223267 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:02:11.223277 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:02:11.223285 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:02:11.223294 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:02:11.223303 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:02:11.223311 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:02:11.223319 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:02:11.223328 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:02:11.223337 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:02:11.223347 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:02:11.223355 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:02:11.223363 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:02:11.223372 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:02:11.223380 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:02:11.223393 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:02:11.223401 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:02:11.223410 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:02:11.223420 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:02:11.223430 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:02:11.223438 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:02:11.223447 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:02:11.223456 kernel: loop: module loaded
Dec 16 13:02:11.223465 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:02:11.223473 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:02:11.223483 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:02:11.223492 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:02:11.223501 systemd[1]: Stopped verity-setup.service.
Dec 16 13:02:11.223511 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:02:11.223519 kernel: fuse: init (API version 7.41)
Dec 16 13:02:11.223544 systemd-journald[1190]: Collecting audit messages is disabled.
Dec 16 13:02:11.224118 systemd-journald[1190]: Journal started
Dec 16 13:02:11.224143 systemd-journald[1190]: Runtime Journal (/run/log/journal/13aafeee2e5c4496a1dd6bebb0c72f8c) is 4.7M, max 38.3M, 33.5M free.
Dec 16 13:02:11.227620 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:02:10.990487 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:02:10.999740 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 16 13:02:11.000082 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:02:11.231656 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:02:11.234352 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:02:11.235448 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:02:11.236061 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:02:11.236680 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:02:11.237407 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:02:11.238532 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:02:11.239417 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:02:11.240209 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:02:11.241083 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:02:11.241951 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:02:11.242137 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:02:11.242948 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:02:11.243065 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:02:11.243950 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:02:11.244127 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:02:11.244841 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:02:11.244994 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:02:11.245800 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:02:11.246510 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:02:11.247406 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:02:11.252481 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:02:11.256412 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:02:11.257601 kernel: ACPI: bus type drm_connector registered
Dec 16 13:02:11.260631 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:02:11.263645 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:02:11.265894 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:02:11.265921 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:02:11.267147 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:02:11.274214 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:02:11.274781 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:02:11.277681 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:02:11.279726 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:02:11.281107 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:02:11.285721 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:02:11.286270 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:02:11.287677 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:02:11.299286 systemd-journald[1190]: Time spent on flushing to /var/log/journal/13aafeee2e5c4496a1dd6bebb0c72f8c is 87.119ms for 1167 entries.
Dec 16 13:02:11.299286 systemd-journald[1190]: System Journal (/var/log/journal/13aafeee2e5c4496a1dd6bebb0c72f8c) is 8M, max 584.8M, 576.8M free.
Dec 16 13:02:11.401351 systemd-journald[1190]: Received client request to flush runtime journal.
Dec 16 13:02:11.401387 kernel: loop0: detected capacity change from 0 to 128560
Dec 16 13:02:11.401408 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:02:11.293336 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:02:11.297695 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:02:11.317236 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:02:11.319970 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:02:11.408664 kernel: loop1: detected capacity change from 0 to 229808
Dec 16 13:02:11.322383 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:02:11.323679 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:02:11.332553 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:02:11.335012 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:02:11.338403 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:02:11.346776 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:02:11.366973 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Dec 16 13:02:11.366983 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Dec 16 13:02:11.373196 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:02:11.381316 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:02:11.390332 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:02:11.403316 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:02:11.406051 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:02:11.435254 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:02:11.438661 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:02:11.450614 kernel: loop2: detected capacity change from 0 to 8
Dec 16 13:02:11.457008 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Dec 16 13:02:11.457561 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Dec 16 13:02:11.460830 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:02:11.467808 kernel: loop3: detected capacity change from 0 to 110984
Dec 16 13:02:11.506599 kernel: loop4: detected capacity change from 0 to 128560
Dec 16 13:02:11.520598 kernel: loop5: detected capacity change from 0 to 229808
Dec 16 13:02:11.544779 kernel: loop6: detected capacity change from 0 to 8
Dec 16 13:02:11.547826 kernel: loop7: detected capacity change from 0 to 110984
Dec 16 13:02:11.568592 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Dec 16 13:02:11.568944 (sd-merge)[1261]: Merged extensions into '/usr'.
Dec 16 13:02:11.573459 systemd[1]: Reload requested from client PID 1233 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:02:11.573550 systemd[1]: Reloading...
Dec 16 13:02:11.660621 zram_generator::config[1285]: No configuration found.
Dec 16 13:02:11.821500 ldconfig[1228]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:02:11.822116 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:02:11.822382 systemd[1]: Reloading finished in 248 ms.
Dec 16 13:02:11.835929 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:02:11.836782 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:02:11.850689 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:02:11.853039 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:02:11.859400 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:02:11.863643 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:02:11.864054 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:02:11.864697 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:02:11.864954 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:02:11.865204 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:02:11.865466 systemd[1]: Reload requested from client PID 1330 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:02:11.865598 systemd[1]: Reloading...
Dec 16 13:02:11.867190 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 13:02:11.867450 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Dec 16 13:02:11.868533 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Dec 16 13:02:11.871288 systemd-tmpfiles[1331]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:02:11.871352 systemd-tmpfiles[1331]: Skipping /boot
Dec 16 13:02:11.878103 systemd-tmpfiles[1331]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:02:11.878178 systemd-tmpfiles[1331]: Skipping /boot
Dec 16 13:02:11.895785 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
Dec 16 13:02:11.923631 zram_generator::config[1359]: No configuration found.
Dec 16 13:02:12.105590 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:02:12.120714 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:02:12.121218 systemd[1]: Reloading finished in 255 ms.
Dec 16 13:02:12.124585 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Dec 16 13:02:12.131887 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:02:12.133718 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:02:12.141307 kernel: ACPI: button: Power Button [PWRF]
Dec 16 13:02:12.147708 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:02:12.150714 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:02:12.153527 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:02:12.157873 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:02:12.164714 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:02:12.167097 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:02:12.178747 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:02:12.181356 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:02:12.181679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:02:12.183784 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:02:12.195132 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:02:12.199946 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:02:12.200562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:02:12.200737 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:02:12.200935 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:02:12.207615 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:02:12.235557 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:02:12.236912 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:02:12.237107 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:02:12.243471 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Dec 16 13:02:12.246475 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:02:12.246873 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:02:12.251336 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:02:12.251529 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:02:12.254909 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:02:12.257934 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:02:12.260118 augenrules[1478]: No rules
Dec 16 13:02:12.263774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:02:12.265202 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:02:12.265300 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:02:12.267175 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:02:12.268694 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:02:12.269381 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:02:12.269730 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:02:12.271028 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:02:12.271631 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:02:12.281681 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:02:12.287796 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 13:02:12.288887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:02:12.289018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:02:12.295714 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 16 13:02:12.296024 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 16 13:02:12.302709 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:02:12.310932 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:02:12.311100 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:02:12.311857 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:02:12.313927 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:02:12.314955 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:02:12.318101 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:02:12.321229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:02:12.321353 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:02:12.322073 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:02:12.336209 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 16 13:02:12.339506 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Dec 16 13:02:12.339546 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Dec 16 13:02:12.340673 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:02:12.345633 kernel: Console: switching to colour dummy device 80x25
Dec 16 13:02:12.347232 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 16 13:02:12.347267 kernel: [drm] features: -context_init
Dec 16 13:02:12.348807 kernel: [drm] number of scanouts: 1
Dec 16 13:02:12.348851 kernel: [drm] number of cap sets: 0
Dec 16 13:02:12.351459 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Dec 16 13:02:12.351583 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 16 13:02:12.355543 kernel: Console: switching to colour frame buffer device 160x50
Dec 16 13:02:12.361121 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 16 13:02:12.373431 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:02:12.378599 kernel: EDAC MC: Ver: 3.0.0
Dec 16 13:02:12.415726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:02:12.458431 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:02:12.458605 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:02:12.464364 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:02:12.468790 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:02:12.480541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:02:12.480771 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:02:12.487709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:02:12.563139 systemd-networkd[1443]: lo: Link UP
Dec 16 13:02:12.563148 systemd-networkd[1443]: lo: Gained carrier
Dec 16 13:02:12.565760 systemd-networkd[1443]: Enumeration completed
Dec 16 13:02:12.565837 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:02:12.566999 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:02:12.567023 systemd-networkd[1443]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:02:12.568666 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:02:12.570304 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:02:12.570406 systemd-networkd[1443]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:02:12.570409 systemd-networkd[1443]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:02:12.573251 systemd-networkd[1443]: eth0: Link UP
Dec 16 13:02:12.573361 systemd-networkd[1443]: eth0: Gained carrier
Dec 16 13:02:12.573373 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:02:12.577740 systemd-networkd[1443]: eth1: Link UP
Dec 16 13:02:12.578201 systemd-networkd[1443]: eth1: Gained carrier
Dec 16 13:02:12.578219 systemd-networkd[1443]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:02:12.594020 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:02:12.594850 systemd-resolved[1445]: Positive Trust Anchors:
Dec 16 13:02:12.594862 systemd-resolved[1445]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:02:12.594887 systemd-resolved[1445]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:02:12.595329 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:02:12.595886 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 13:02:12.595982 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:02:12.598839 systemd-resolved[1445]: Using system hostname 'ci-4459-2-2-2-e3531eb256'.
Dec 16 13:02:12.600032 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:02:12.600251 systemd[1]: Reached target network.target - Network.
Dec 16 13:02:12.600310 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:02:12.600364 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:02:12.600500 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:02:12.600620 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:02:12.600683 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:02:12.600847 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:02:12.600982 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:02:12.601347 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:02:12.603649 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:02:12.603680 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:02:12.604076 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:02:12.605861 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:02:12.607159 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:02:12.607625 systemd-networkd[1443]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Dec 16 13:02:12.609078 systemd-timesyncd[1488]: Network configuration changed, trying to establish connection.
Dec 16 13:02:12.609107 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:02:12.610318 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:02:12.610817 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:02:12.615034 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:02:12.616520 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:02:12.618659 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:02:12.619821 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:02:12.621874 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:02:12.622265 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:02:12.622289 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:02:12.623089 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:02:12.626675 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 13:02:12.631696 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:02:12.633517 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:02:12.637849 systemd-networkd[1443]: eth0: DHCPv4 address 77.42.28.57/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 16 13:02:12.637915 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:02:12.639035 systemd-timesyncd[1488]: Network configuration changed, trying to establish connection.
Dec 16 13:02:12.639130 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:02:12.640813 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:02:12.645677 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:02:12.650722 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:02:12.654096 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 13:02:12.658534 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Dec 16 13:02:12.665331 coreos-metadata[1541]: Dec 16 13:02:12.664 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Dec 16 13:02:12.665331 coreos-metadata[1541]: Dec 16 13:02:12.664 INFO Fetch successful
Dec 16 13:02:12.665331 coreos-metadata[1541]: Dec 16 13:02:12.664 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Dec 16 13:02:12.665331 coreos-metadata[1541]: Dec 16 13:02:12.664 INFO Fetch successful
Dec 16 13:02:12.665529 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing passwd entry cache
Dec 16 13:02:12.665529 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting users, quitting
Dec 16 13:02:12.665529 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:02:12.665529 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing group entry cache
Dec 16 13:02:12.665529 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting groups, quitting
Dec 16 13:02:12.665529 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:02:12.662302 oslogin_cache_refresh[1548]: Refreshing passwd entry cache
Dec 16 13:02:12.664276 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 13:02:12.664641 oslogin_cache_refresh[1548]: Failure getting users, quitting
Dec 16 13:02:12.664653 oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:02:12.664683 oslogin_cache_refresh[1548]: Refreshing group entry cache
Dec 16 13:02:12.665301 oslogin_cache_refresh[1548]: Failure getting groups, quitting
Dec 16 13:02:12.665307 oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:02:12.667287 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 13:02:12.674289 jq[1546]: false
Dec 16 13:02:12.676643 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 13:02:12.679448 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 13:02:12.682228 extend-filesystems[1547]: Found /dev/sda6
Dec 16 13:02:12.686011 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 13:02:12.688705 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 13:02:12.689873 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 13:02:12.693592 extend-filesystems[1547]: Found /dev/sda9
Dec 16 13:02:12.695542 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 13:02:12.698660 extend-filesystems[1547]: Checking size of /dev/sda9
Dec 16 13:02:12.699063 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 13:02:12.703649 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 13:02:12.703913 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 13:02:12.704068 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 13:02:12.706505 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 13:02:12.706806 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 13:02:12.714827 extend-filesystems[1547]: Resized partition /dev/sda9
Dec 16 13:02:12.721527 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 13:02:12.724046 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 13:02:12.727628 extend-filesystems[1588]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 13:02:12.732588 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Dec 16 13:02:12.740346 jq[1566]: true
Dec 16 13:02:12.756336 update_engine[1565]: I20251216 13:02:12.754017 1565 main.cc:92] Flatcar Update Engine starting
Dec 16 13:02:12.759175 (ntainerd)[1595]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 13:02:12.760283 tar[1576]: linux-amd64/LICENSE
Dec 16 13:02:12.764660 tar[1576]: linux-amd64/helm
Dec 16 13:02:12.776037 dbus-daemon[1542]: [system] SELinux support is enabled
Dec 16 13:02:12.776144 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 13:02:12.780597 jq[1594]: true
Dec 16 13:02:12.784172 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 13:02:12.784213 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 13:02:12.788805 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 13:02:12.788829 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 13:02:12.796170 update_engine[1565]: I20251216 13:02:12.796132 1565 update_check_scheduler.cc:74] Next update check in 10m39s
Dec 16 13:02:12.806271 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 13:02:12.816350 systemd-logind[1560]: New seat seat0.
Dec 16 13:02:12.822772 systemd-logind[1560]: Watching system buttons on /dev/input/event3 (Power Button)
Dec 16 13:02:12.822794 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 13:02:12.825080 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 13:02:12.825958 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 13:02:12.840603 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 13:02:12.841398 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 13:02:12.892001 bash[1619]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:02:12.892120 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 13:02:12.908346 systemd[1]: Starting sshkeys.service...
Dec 16 13:02:12.916126 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Dec 16 13:02:12.950406 extend-filesystems[1588]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 16 13:02:12.950406 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 5
Dec 16 13:02:12.950406 extend-filesystems[1588]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Dec 16 13:02:12.950240 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 13:02:12.964718 extend-filesystems[1547]: Resized filesystem in /dev/sda9
Dec 16 13:02:12.951813 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 13:02:12.972701 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 16 13:02:12.979775 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 16 13:02:13.030155 coreos-metadata[1630]: Dec 16 13:02:13.030 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Dec 16 13:02:13.031045 sshd_keygen[1578]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 13:02:13.031696 coreos-metadata[1630]: Dec 16 13:02:13.031 INFO Fetch successful
Dec 16 13:02:13.035626 unknown[1630]: wrote ssh authorized keys file for user: core
Dec 16 13:02:13.049052 locksmithd[1604]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 13:02:13.065917 containerd[1595]: time="2025-12-16T13:02:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 13:02:13.067705 containerd[1595]: time="2025-12-16T13:02:13.067683998Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 13:02:13.071003 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 13:02:13.075682 containerd[1595]: time="2025-12-16T13:02:13.075652893Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="27.782µs"
Dec 16 13:02:13.075759 containerd[1595]: time="2025-12-16T13:02:13.075744826Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 13:02:13.075821 containerd[1595]: time="2025-12-16T13:02:13.075808716Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 13:02:13.075919 update-ssh-keys[1642]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:02:13.076473 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 13:02:13.078657 containerd[1595]: time="2025-12-16T13:02:13.077697268Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 13:02:13.078657 containerd[1595]: time="2025-12-16T13:02:13.077721814Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 13:02:13.078657 containerd[1595]: time="2025-12-16T13:02:13.077748033Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:02:13.078657 containerd[1595]: time="2025-12-16T13:02:13.077801002Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:02:13.078657 containerd[1595]: time="2025-12-16T13:02:13.077815870Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:02:13.078657 containerd[1595]: time="2025-12-16T13:02:13.077999795Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:02:13.078657 containerd[1595]: time="2025-12-16T13:02:13.078014513Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:02:13.078657 containerd[1595]: time="2025-12-16T13:02:13.078026265Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:02:13.078657 containerd[1595]: time="2025-12-16T13:02:13.078036134Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 13:02:13.078657 containerd[1595]: time="2025-12-16T13:02:13.078109691Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 13:02:13.078657 containerd[1595]: time="2025-12-16T13:02:13.078338611Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:02:13.078938 containerd[1595]: time="2025-12-16T13:02:13.078366142Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:02:13.078938 containerd[1595]: time="2025-12-16T13:02:13.078375119Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 13:02:13.078938 containerd[1595]: time="2025-12-16T13:02:13.078395417Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 13:02:13.079464 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 13:02:13.079671 containerd[1595]: time="2025-12-16T13:02:13.079647125Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 13:02:13.083918 containerd[1595]: time="2025-12-16T13:02:13.082094545Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 13:02:13.081363 systemd[1]: Finished sshkeys.service.
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.090883359Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.090954823Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.090970081Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.090980531Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.090990029Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.090998064Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.091061262Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.091073124Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.091089635Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.091140661Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.091151492Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.091161831Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.091322352Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 13:02:13.092582 containerd[1595]: time="2025-12-16T13:02:13.091343872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 13:02:13.092832 containerd[1595]: time="2025-12-16T13:02:13.091367667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 13:02:13.092832 containerd[1595]: time="2025-12-16T13:02:13.091441265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 13:02:13.092832 containerd[1595]: time="2025-12-16T13:02:13.091455451Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 13:02:13.092832 containerd[1595]: time="2025-12-16T13:02:13.091464428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 13:02:13.092832 containerd[1595]: time="2025-12-16T13:02:13.091473546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 13:02:13.092832 containerd[1595]: time="2025-12-16T13:02:13.091532086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 13:02:13.092832 containerd[1595]: time="2025-12-16T13:02:13.091544158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 13:02:13.092832 containerd[1595]: time="2025-12-16T13:02:13.091552954Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 13:02:13.092832 containerd[1595]: time="2025-12-16T13:02:13.091560849Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 13:02:13.092832 containerd[1595]: time="2025-12-16T13:02:13.091630019Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 13:02:13.092832 containerd[1595]: time="2025-12-16T13:02:13.091640829Z" level=info msg="Start snapshots syncer"
Dec 16 13:02:13.092832 containerd[1595]: time="2025-12-16T13:02:13.091750845Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 13:02:13.093004 containerd[1595]: time="2025-12-16T13:02:13.092110440Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 13:02:13.093004 containerd[1595]: time="2025-12-16T13:02:13.092200278Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092244872Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092409400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092434979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092498348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092509148Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092525509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092536679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092544755Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092589509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092600950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092609576Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092675419Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092692602Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 13:02:13.093095 containerd[1595]: time="2025-12-16T13:02:13.092699956Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 13:02:13.093286 containerd[1595]: time="2025-12-16T13:02:13.092707189Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 13:02:13.093286 containerd[1595]: time="2025-12-16T13:02:13.092713191Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 13:02:13.093286 containerd[1595]: time="2025-12-16T13:02:13.092764046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 13:02:13.093286 containerd[1595]: time="2025-12-16T13:02:13.092779445Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 16 13:02:13.093286 containerd[1595]: time="2025-12-16T13:02:13.092791538Z" level=info msg="runtime interface created"
Dec 16 13:02:13.093286 containerd[1595]: time="2025-12-16T13:02:13.092795354Z" level=info msg="created NRI interface"
Dec 16 13:02:13.093286 containerd[1595]: time="2025-12-16T13:02:13.092801667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 13:02:13.093286 containerd[1595]: time="2025-12-16T13:02:13.092829458Z" level=info msg="Connect containerd service"
Dec 16 13:02:13.093286 containerd[1595]: time="2025-12-16T13:02:13.092847733Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 13:02:13.093665 containerd[1595]: time="2025-12-16T13:02:13.093642864Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 13:02:13.098622 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 13:02:13.098878 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 13:02:13.104746 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 13:02:13.124341 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 13:02:13.127836 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 13:02:13.134753 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 13:02:13.135233 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 13:02:13.189617 containerd[1595]: time="2025-12-16T13:02:13.189518600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 13:02:13.189976 containerd[1595]: time="2025-12-16T13:02:13.189714147Z" level=info msg="Start subscribing containerd event"
Dec 16 13:02:13.190164 containerd[1595]: time="2025-12-16T13:02:13.189758059Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 13:02:13.191510 containerd[1595]: time="2025-12-16T13:02:13.191463497Z" level=info msg="Start recovering state"
Dec 16 13:02:13.191638 containerd[1595]: time="2025-12-16T13:02:13.191607989Z" level=info msg="Start event monitor"
Dec 16 13:02:13.191638 containerd[1595]: time="2025-12-16T13:02:13.191629008Z" level=info msg="Start cni network conf syncer for default"
Dec 16 13:02:13.191638 containerd[1595]: time="2025-12-16T13:02:13.191640961Z" level=info msg="Start streaming server"
Dec 16 13:02:13.191718 containerd[1595]: time="2025-12-16T13:02:13.191656379Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 13:02:13.191718 containerd[1595]: time="2025-12-16T13:02:13.191663643Z" level=info msg="runtime interface starting up..."
Dec 16 13:02:13.191718 containerd[1595]: time="2025-12-16T13:02:13.191668813Z" level=info msg="starting plugins..."
Dec 16 13:02:13.191718 containerd[1595]: time="2025-12-16T13:02:13.191682298Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 13:02:13.192063 containerd[1595]: time="2025-12-16T13:02:13.191808214Z" level=info msg="containerd successfully booted in 0.126666s"
Dec 16 13:02:13.191897 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 13:02:13.265593 tar[1576]: linux-amd64/README.md
Dec 16 13:02:13.282907 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 13:02:13.782197 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 13:02:13.785912 systemd[1]: Started sshd@0-77.42.28.57:22-139.178.89.65:40046.service - OpenSSH per-connection server daemon (139.178.89.65:40046).
Dec 16 13:02:14.379879 systemd-networkd[1443]: eth1: Gained IPv6LL
Dec 16 13:02:14.380697 systemd-timesyncd[1488]: Network configuration changed, trying to establish connection.
Dec 16 13:02:14.383990 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 13:02:14.387279 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 13:02:14.395959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:02:14.405103 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 13:02:14.441701 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 13:02:14.571900 systemd-networkd[1443]: eth0: Gained IPv6LL
Dec 16 13:02:14.572874 systemd-timesyncd[1488]: Network configuration changed, trying to establish connection.
Dec 16 13:02:14.956743 sshd[1677]: Accepted publickey for core from 139.178.89.65 port 40046 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:02:14.960182 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:02:14.971125 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 13:02:14.977238 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 13:02:14.993982 systemd-logind[1560]: New session 1 of user core.
Dec 16 13:02:15.001521 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 13:02:15.009312 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 13:02:15.021715 (systemd)[1694]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 13:02:15.024875 systemd-logind[1560]: New session c1 of user core.
Dec 16 13:02:15.185917 systemd[1694]: Queued start job for default target default.target.
Dec 16 13:02:15.190720 systemd[1694]: Created slice app.slice - User Application Slice.
Dec 16 13:02:15.190744 systemd[1694]: Reached target paths.target - Paths.
Dec 16 13:02:15.190849 systemd[1694]: Reached target timers.target - Timers.
Dec 16 13:02:15.192028 systemd[1694]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 13:02:15.201367 systemd[1694]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 13:02:15.202055 systemd[1694]: Reached target sockets.target - Sockets.
Dec 16 13:02:15.202167 systemd[1694]: Reached target basic.target - Basic System.
Dec 16 13:02:15.202239 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 13:02:15.204215 systemd[1694]: Reached target default.target - Main User Target.
Dec 16 13:02:15.204252 systemd[1694]: Startup finished in 165ms.
Dec 16 13:02:15.207699 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 13:02:15.649690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:02:15.652411 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 13:02:15.657539 systemd[1]: Startup finished in 3.201s (kernel) + 5.867s (initrd) + 5.195s (userspace) = 14.263s.
Dec 16 13:02:15.664151 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:02:15.969960 systemd[1]: Started sshd@1-77.42.28.57:22-139.178.89.65:40060.service - OpenSSH per-connection server daemon (139.178.89.65:40060).
Dec 16 13:02:16.443368 kubelet[1709]: E1216 13:02:16.443232 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:02:16.445703 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:02:16.445837 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:02:16.446128 systemd[1]: kubelet.service: Consumed 1.272s CPU time, 268.6M memory peak.
Dec 16 13:02:17.023397 sshd[1719]: Accepted publickey for core from 139.178.89.65 port 40060 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:02:17.026283 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:02:17.035624 systemd-logind[1560]: New session 2 of user core.
Dec 16 13:02:17.042832 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 13:02:17.743303 sshd[1724]: Connection closed by 139.178.89.65 port 40060
Dec 16 13:02:17.744104 sshd-session[1719]: pam_unix(sshd:session): session closed for user core
Dec 16 13:02:17.749789 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit.
Dec 16 13:02:17.749917 systemd[1]: sshd@1-77.42.28.57:22-139.178.89.65:40060.service: Deactivated successfully.
Dec 16 13:02:17.752679 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 13:02:17.754707 systemd-logind[1560]: Removed session 2.
Dec 16 13:02:17.930789 systemd[1]: Started sshd@2-77.42.28.57:22-139.178.89.65:40062.service - OpenSSH per-connection server daemon (139.178.89.65:40062).
Dec 16 13:02:19.002147 sshd[1730]: Accepted publickey for core from 139.178.89.65 port 40062 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:02:19.003975 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:02:19.011913 systemd-logind[1560]: New session 3 of user core.
Dec 16 13:02:19.017780 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 13:02:19.723181 sshd[1733]: Connection closed by 139.178.89.65 port 40062
Dec 16 13:02:19.723936 sshd-session[1730]: pam_unix(sshd:session): session closed for user core
Dec 16 13:02:19.729735 systemd[1]: sshd@2-77.42.28.57:22-139.178.89.65:40062.service: Deactivated successfully.
Dec 16 13:02:19.733101 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 13:02:19.734992 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit.
Dec 16 13:02:19.737327 systemd-logind[1560]: Removed session 3.
Dec 16 13:02:19.910940 systemd[1]: Started sshd@3-77.42.28.57:22-139.178.89.65:40066.service - OpenSSH per-connection server daemon (139.178.89.65:40066).
Dec 16 13:02:20.968406 sshd[1739]: Accepted publickey for core from 139.178.89.65 port 40066 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:02:20.970453 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:02:20.978875 systemd-logind[1560]: New session 4 of user core.
Dec 16 13:02:20.984790 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 13:02:21.683539 sshd[1742]: Connection closed by 139.178.89.65 port 40066
Dec 16 13:02:21.684266 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
Dec 16 13:02:21.688667 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit.
Dec 16 13:02:21.689508 systemd[1]: sshd@3-77.42.28.57:22-139.178.89.65:40066.service: Deactivated successfully.
Dec 16 13:02:21.691276 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 13:02:21.692843 systemd-logind[1560]: Removed session 4.
Dec 16 13:02:21.870794 systemd[1]: Started sshd@4-77.42.28.57:22-139.178.89.65:60044.service - OpenSSH per-connection server daemon (139.178.89.65:60044).
Dec 16 13:02:22.920139 sshd[1748]: Accepted publickey for core from 139.178.89.65 port 60044 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:02:22.921538 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:02:22.926029 systemd-logind[1560]: New session 5 of user core.
Dec 16 13:02:22.932737 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 13:02:23.490401 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 13:02:23.490944 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:02:23.506021 sudo[1752]: pam_unix(sudo:session): session closed for user root
Dec 16 13:02:23.677004 sshd[1751]: Connection closed by 139.178.89.65 port 60044
Dec 16 13:02:23.678021 sshd-session[1748]: pam_unix(sshd:session): session closed for user core
Dec 16 13:02:23.683311 systemd[1]: sshd@4-77.42.28.57:22-139.178.89.65:60044.service: Deactivated successfully.
Dec 16 13:02:23.686222 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 13:02:23.688297 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit.
Dec 16 13:02:23.691382 systemd-logind[1560]: Removed session 5.
Dec 16 13:02:23.897294 systemd[1]: Started sshd@5-77.42.28.57:22-139.178.89.65:60052.service - OpenSSH per-connection server daemon (139.178.89.65:60052).
Dec 16 13:02:25.062282 sshd[1758]: Accepted publickey for core from 139.178.89.65 port 60052 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:02:25.064361 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:02:25.074665 systemd-logind[1560]: New session 6 of user core.
Dec 16 13:02:25.081812 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 13:02:25.674690 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 13:02:25.675182 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:02:25.682184 sudo[1763]: pam_unix(sudo:session): session closed for user root
Dec 16 13:02:25.689129 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 13:02:25.689541 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:02:25.702417 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:02:25.752209 augenrules[1785]: No rules
Dec 16 13:02:25.753108 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:02:25.753431 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:02:25.754528 sudo[1762]: pam_unix(sudo:session): session closed for user root
Dec 16 13:02:25.942682 sshd[1761]: Connection closed by 139.178.89.65 port 60052
Dec 16 13:02:25.943778 sshd-session[1758]: pam_unix(sshd:session): session closed for user core
Dec 16 13:02:25.948688 systemd[1]: sshd@5-77.42.28.57:22-139.178.89.65:60052.service: Deactivated successfully.
Dec 16 13:02:25.950660 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 13:02:25.953019 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit.
Dec 16 13:02:25.954940 systemd-logind[1560]: Removed session 6.
Dec 16 13:02:26.103664 systemd[1]: Started sshd@6-77.42.28.57:22-139.178.89.65:60064.service - OpenSSH per-connection server daemon (139.178.89.65:60064).
Dec 16 13:02:26.696435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:02:26.698087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:02:26.833169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:02:26.847184 (kubelet)[1804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:02:26.924445 kubelet[1804]: E1216 13:02:26.924327 1804 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:02:26.930089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:02:26.930328 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:02:26.931034 systemd[1]: kubelet.service: Consumed 184ms CPU time, 110.9M memory peak.
Dec 16 13:02:27.176408 sshd[1794]: Accepted publickey for core from 139.178.89.65 port 60064 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:02:27.178327 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:02:27.187496 systemd-logind[1560]: New session 7 of user core.
Dec 16 13:02:27.192818 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 13:02:27.724341 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 13:02:27.724613 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:02:28.033218 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 13:02:28.053884 (dockerd)[1831]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 13:02:28.261553 dockerd[1831]: time="2025-12-16T13:02:28.261302023Z" level=info msg="Starting up"
Dec 16 13:02:28.263276 dockerd[1831]: time="2025-12-16T13:02:28.263244686Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 13:02:28.270965 dockerd[1831]: time="2025-12-16T13:02:28.270934789Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 13:02:28.319850 dockerd[1831]: time="2025-12-16T13:02:28.319631560Z" level=info msg="Loading containers: start."
Dec 16 13:02:28.332600 kernel: Initializing XFRM netlink socket
Dec 16 13:02:28.518205 systemd-timesyncd[1488]: Network configuration changed, trying to establish connection.
Dec 16 13:02:28.993365 systemd-timesyncd[1488]: Contacted time server 195.201.20.16:123 (2.flatcar.pool.ntp.org).
Dec 16 13:02:28.993626 systemd-timesyncd[1488]: Initial clock synchronization to Tue 2025-12-16 13:02:28.993123 UTC.
Dec 16 13:02:28.994644 systemd-resolved[1445]: Clock change detected. Flushing caches.
Dec 16 13:02:29.005897 systemd-networkd[1443]: docker0: Link UP
Dec 16 13:02:29.011154 dockerd[1831]: time="2025-12-16T13:02:29.011104454Z" level=info msg="Loading containers: done."
Dec 16 13:02:29.026375 dockerd[1831]: time="2025-12-16T13:02:29.026284284Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 13:02:29.026375 dockerd[1831]: time="2025-12-16T13:02:29.026372068Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 13:02:29.026514 dockerd[1831]: time="2025-12-16T13:02:29.026460755Z" level=info msg="Initializing buildkit"
Dec 16 13:02:29.055790 dockerd[1831]: time="2025-12-16T13:02:29.055744417Z" level=info msg="Completed buildkit initialization"
Dec 16 13:02:29.062631 dockerd[1831]: time="2025-12-16T13:02:29.062579316Z" level=info msg="Daemon has completed initialization"
Dec 16 13:02:29.063143 dockerd[1831]: time="2025-12-16T13:02:29.062726371Z" level=info msg="API listen on /run/docker.sock"
Dec 16 13:02:29.062790 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 13:02:30.324319 containerd[1595]: time="2025-12-16T13:02:30.324262325Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Dec 16 13:02:30.935336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3478511960.mount: Deactivated successfully.
Dec 16 13:02:32.234741 containerd[1595]: time="2025-12-16T13:02:32.234683349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:32.235804 containerd[1595]: time="2025-12-16T13:02:32.235576014Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114812"
Dec 16 13:02:32.236643 containerd[1595]: time="2025-12-16T13:02:32.236615333Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:32.238794 containerd[1595]: time="2025-12-16T13:02:32.238765144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:32.239600 containerd[1595]: time="2025-12-16T13:02:32.239543254Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.915234052s"
Dec 16 13:02:32.239685 containerd[1595]: time="2025-12-16T13:02:32.239669761Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Dec 16 13:02:32.240306 containerd[1595]: time="2025-12-16T13:02:32.240256051Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Dec 16 13:02:33.458780 containerd[1595]: time="2025-12-16T13:02:33.458722146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:33.459821 containerd[1595]: time="2025-12-16T13:02:33.459693659Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016803"
Dec 16 13:02:33.460605 containerd[1595]: time="2025-12-16T13:02:33.460576013Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:33.462617 containerd[1595]: time="2025-12-16T13:02:33.462594660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:33.463377 containerd[1595]: time="2025-12-16T13:02:33.463352020Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.22307022s"
Dec 16 13:02:33.463417 containerd[1595]: time="2025-12-16T13:02:33.463378870Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Dec 16 13:02:33.464303 containerd[1595]: time="2025-12-16T13:02:33.464169703Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Dec 16 13:02:34.563547 containerd[1595]: time="2025-12-16T13:02:34.563451147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:34.564538 containerd[1595]: time="2025-12-16T13:02:34.564350884Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158124"
Dec 16 13:02:34.565318 containerd[1595]: time="2025-12-16T13:02:34.565283654Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:34.567378 containerd[1595]: time="2025-12-16T13:02:34.567353636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:34.568110 containerd[1595]: time="2025-12-16T13:02:34.568072223Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.103879327s"
Dec 16 13:02:34.568178 containerd[1595]: time="2025-12-16T13:02:34.568165839Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Dec 16 13:02:34.568804 containerd[1595]: time="2025-12-16T13:02:34.568776304Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Dec 16 13:02:35.840972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49864972.mount: Deactivated successfully.
Dec 16 13:02:36.169802 containerd[1595]: time="2025-12-16T13:02:36.169581701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:36.170893 containerd[1595]: time="2025-12-16T13:02:36.170794856Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930124"
Dec 16 13:02:36.171835 containerd[1595]: time="2025-12-16T13:02:36.171807666Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:36.173858 containerd[1595]: time="2025-12-16T13:02:36.173830450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:36.174336 containerd[1595]: time="2025-12-16T13:02:36.174313115Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.605507876s"
Dec 16 13:02:36.174405 containerd[1595]: time="2025-12-16T13:02:36.174391552Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Dec 16 13:02:36.174892 containerd[1595]: time="2025-12-16T13:02:36.174859800Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Dec 16 13:02:36.667244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2960900547.mount: Deactivated successfully.
Dec 16 13:02:37.482847 containerd[1595]: time="2025-12-16T13:02:37.481975619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:37.482847 containerd[1595]: time="2025-12-16T13:02:37.482818820Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332"
Dec 16 13:02:37.483665 containerd[1595]: time="2025-12-16T13:02:37.483638368Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:37.485857 containerd[1595]: time="2025-12-16T13:02:37.485826050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:37.486879 containerd[1595]: time="2025-12-16T13:02:37.486850933Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.311962839s"
Dec 16 13:02:37.487007 containerd[1595]: time="2025-12-16T13:02:37.486989984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Dec 16 13:02:37.487998 containerd[1595]: time="2025-12-16T13:02:37.487914758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 16 13:02:37.624786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 13:02:37.626214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:02:37.741926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:02:37.752020 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:02:37.803024 kubelet[2176]: E1216 13:02:37.802961 2176 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:02:37.805829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:02:37.805969 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:02:37.806210 systemd[1]: kubelet.service: Consumed 138ms CPU time, 110M memory peak.
Dec 16 13:02:37.939020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1185785880.mount: Deactivated successfully.
Dec 16 13:02:37.945037 containerd[1595]: time="2025-12-16T13:02:37.944970390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:02:37.945906 containerd[1595]: time="2025-12-16T13:02:37.945767846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Dec 16 13:02:37.946665 containerd[1595]: time="2025-12-16T13:02:37.946626517Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:02:37.949541 containerd[1595]: time="2025-12-16T13:02:37.948832123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:02:37.949541 containerd[1595]: time="2025-12-16T13:02:37.949425977Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 461.449574ms"
Dec 16 13:02:37.949541 containerd[1595]: time="2025-12-16T13:02:37.949457676Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 16 13:02:37.949966 containerd[1595]: time="2025-12-16T13:02:37.949924071Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Dec 16 13:02:38.426059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747675765.mount: Deactivated successfully.
Dec 16 13:02:42.665462 containerd[1595]: time="2025-12-16T13:02:42.665408139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:42.666965 containerd[1595]: time="2025-12-16T13:02:42.666710763Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926291"
Dec 16 13:02:42.668063 containerd[1595]: time="2025-12-16T13:02:42.668037060Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:42.671499 containerd[1595]: time="2025-12-16T13:02:42.671473394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:02:42.672610 containerd[1595]: time="2025-12-16T13:02:42.672583427Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.722603711s"
Dec 16 13:02:42.672719 containerd[1595]: time="2025-12-16T13:02:42.672702200Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Dec 16 13:02:47.970786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 16 13:02:47.975925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:02:48.005443 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 13:02:48.005510 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 13:02:48.005783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:02:48.009806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:02:48.037348 systemd[1]: Reload requested from client PID 2275 ('systemctl') (unit session-7.scope)...
Dec 16 13:02:48.037367 systemd[1]: Reloading...
Dec 16 13:02:48.132636 zram_generator::config[2319]: No configuration found.
Dec 16 13:02:48.306587 systemd[1]: Reloading finished in 268 ms.
Dec 16 13:02:48.360882 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 13:02:48.360974 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 13:02:48.361268 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:02:48.361323 systemd[1]: kubelet.service: Consumed 80ms CPU time, 97.7M memory peak.
Dec 16 13:02:48.362704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:02:48.447613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:02:48.452868 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 13:02:48.513910 kubelet[2373]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:02:48.513910 kubelet[2373]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 13:02:48.513910 kubelet[2373]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
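Annotation (not part of the log): the deprecation warnings above point at the KubeletConfiguration file named by `--config` (here /var/lib/kubelet/config.yaml). A hedged sketch of the equivalent config-file fields; `containerRuntimeEndpoint` and `volumePluginDir` are real KubeletConfiguration fields, but the values below are illustrative assumptions, not taken from this host (the volume plugin path reuses the one the kubelet logs later):

```yaml
# Illustrative sketch only: config-file equivalents for two of the
# deprecated flags warned about above. Values are assumptions.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```

`--pod-infra-container-image` has no config-file replacement; per the warning, the sandbox image is taken from the CRI runtime once the flag is removed.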
Dec 16 13:02:48.516553 kubelet[2373]: I1216 13:02:48.516487 2373 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 13:02:49.833594 kubelet[2373]: I1216 13:02:49.833528 2373 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 16 13:02:49.833931 kubelet[2373]: I1216 13:02:49.833624 2373 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 13:02:49.834024 kubelet[2373]: I1216 13:02:49.833987 2373 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 13:02:49.870597 kubelet[2373]: E1216 13:02:49.870505 2373 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://77.42.28.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 77.42.28.57:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 13:02:49.875076 kubelet[2373]: I1216 13:02:49.874981 2373 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:02:49.898270 kubelet[2373]: I1216 13:02:49.898179 2373 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 13:02:49.912433 kubelet[2373]: I1216 13:02:49.912404 2373 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 13:02:49.914604 kubelet[2373]: I1216 13:02:49.914516 2373 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 13:02:49.917546 kubelet[2373]: I1216 13:02:49.914568 2373 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-2-e3531eb256","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 13:02:49.917546 kubelet[2373]: I1216 13:02:49.917510 2373 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 13:02:49.917546 kubelet[2373]: I1216 13:02:49.917522 2373 container_manager_linux.go:303] "Creating device plugin manager"
Dec 16 13:02:49.918259 kubelet[2373]: I1216 13:02:49.918207 2373 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:02:49.922651 kubelet[2373]: I1216 13:02:49.921760 2373 kubelet.go:480] "Attempting to sync node with API server"
Dec 16 13:02:49.922651 kubelet[2373]: I1216 13:02:49.921789 2373 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 13:02:49.922651 kubelet[2373]: I1216 13:02:49.921817 2373 kubelet.go:386] "Adding apiserver pod source"
Dec 16 13:02:49.922651 kubelet[2373]: I1216 13:02:49.921832 2373 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 13:02:49.931554 kubelet[2373]: E1216 13:02:49.931514 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://77.42.28.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-2-e3531eb256&limit=500&resourceVersion=0\": dial tcp 77.42.28.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 13:02:49.931857 kubelet[2373]: I1216 13:02:49.931822 2373 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 13:02:49.932808 kubelet[2373]: I1216 13:02:49.932777 2373 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 13:02:49.933994 kubelet[2373]: W1216 13:02:49.933970 2373 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 13:02:49.942230 kubelet[2373]: E1216 13:02:49.942183 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://77.42.28.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 77.42.28.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 13:02:49.943542 kubelet[2373]: I1216 13:02:49.943451 2373 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 13:02:49.944165 kubelet[2373]: I1216 13:02:49.944134 2373 server.go:1289] "Started kubelet"
Dec 16 13:02:49.947619 kubelet[2373]: I1216 13:02:49.947502 2373 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 13:02:49.957353 kubelet[2373]: I1216 13:02:49.957033 2373 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 13:02:49.960528 kubelet[2373]: E1216 13:02:49.953979 2373 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://77.42.28.57:6443/api/v1/namespaces/default/events\": dial tcp 77.42.28.57:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-2-e3531eb256.1881b3c0e3d9dc86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-2-e3531eb256,UID:ci-4459-2-2-2-e3531eb256,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-2-e3531eb256,},FirstTimestamp:2025-12-16 13:02:49.943743622 +0000 UTC m=+1.486573915,LastTimestamp:2025-12-16 13:02:49.943743622 +0000 UTC m=+1.486573915,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-2-e3531eb256,}"
Dec 16 13:02:49.963180 kubelet[2373]: I1216 13:02:49.963066 2373 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 13:02:49.969100 kubelet[2373]: I1216 13:02:49.967782 2373 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 13:02:49.969526 kubelet[2373]: I1216 13:02:49.969468 2373 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 13:02:49.972659 kubelet[2373]: E1216 13:02:49.969846 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:49.972659 kubelet[2373]: I1216 13:02:49.961427 2373 server.go:317] "Adding debug handlers to kubelet server"
Dec 16 13:02:49.978868 kubelet[2373]: I1216 13:02:49.978830 2373 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 13:02:49.979007 kubelet[2373]: I1216 13:02:49.978897 2373 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 13:02:49.979305 kubelet[2373]: I1216 13:02:49.979261 2373 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 13:02:49.990637 kubelet[2373]: E1216 13:02:49.990513 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.28.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-2-e3531eb256?timeout=10s\": dial tcp 77.42.28.57:6443: connect: connection refused" interval="200ms"
Dec 16 13:02:49.992008 kubelet[2373]: E1216 13:02:49.991964 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://77.42.28.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 77.42.28.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 13:02:49.992682 kubelet[2373]: I1216 13:02:49.992553 2373 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Dec 16 13:02:49.992883 kubelet[2373]: I1216 13:02:49.992856 2373 factory.go:223] Registration of the systemd container factory successfully
Dec 16 13:02:49.993100 kubelet[2373]: I1216 13:02:49.992999 2373 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 13:02:49.993899 kubelet[2373]: I1216 13:02:49.993885 2373 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Dec 16 13:02:49.994195 kubelet[2373]: I1216 13:02:49.993967 2373 status_manager.go:230] "Starting to sync pod status with apiserver"
Dec 16 13:02:49.994195 kubelet[2373]: I1216 13:02:49.993988 2373 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 13:02:49.994195 kubelet[2373]: I1216 13:02:49.993993 2373 kubelet.go:2436] "Starting kubelet main sync loop"
Dec 16 13:02:49.994195 kubelet[2373]: E1216 13:02:49.994023 2373 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 13:02:49.997536 kubelet[2373]: E1216 13:02:49.997516 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://77.42.28.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 77.42.28.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 16 13:02:49.997714 kubelet[2373]: E1216 13:02:49.997700 2373 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 13:02:49.999613 kubelet[2373]: I1216 13:02:49.998276 2373 factory.go:223] Registration of the containerd container factory successfully
Dec 16 13:02:50.019689 kubelet[2373]: I1216 13:02:50.019638 2373 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 13:02:50.019689 kubelet[2373]: I1216 13:02:50.019663 2373 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 13:02:50.019689 kubelet[2373]: I1216 13:02:50.019683 2373 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:02:50.022011 kubelet[2373]: I1216 13:02:50.021982 2373 policy_none.go:49] "None policy: Start"
Dec 16 13:02:50.022011 kubelet[2373]: I1216 13:02:50.022009 2373 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 16 13:02:50.022109 kubelet[2373]: I1216 13:02:50.022022 2373 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 13:02:50.029885 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 16 13:02:50.039461 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 16 13:02:50.043723 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 16 13:02:50.063362 kubelet[2373]: E1216 13:02:50.063329 2373 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 13:02:50.063661 kubelet[2373]: I1216 13:02:50.063649 2373 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 13:02:50.063741 kubelet[2373]: I1216 13:02:50.063712 2373 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 13:02:50.064718 kubelet[2373]: I1216 13:02:50.064705 2373 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 13:02:50.066621 kubelet[2373]: E1216 13:02:50.066489 2373 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 13:02:50.066924 kubelet[2373]: E1216 13:02:50.066901 2373 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:50.110649 systemd[1]: Created slice kubepods-burstable-pod49ac469dd2c2da756bc4adea2f26403c.slice - libcontainer container kubepods-burstable-pod49ac469dd2c2da756bc4adea2f26403c.slice.
Dec 16 13:02:50.120776 kubelet[2373]: E1216 13:02:50.120716 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-2-e3531eb256\" not found" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.124801 systemd[1]: Created slice kubepods-burstable-podd061f4c845943bfa636a9cf2d2f6cc4a.slice - libcontainer container kubepods-burstable-podd061f4c845943bfa636a9cf2d2f6cc4a.slice.
Dec 16 13:02:50.142685 kubelet[2373]: E1216 13:02:50.142638 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-2-e3531eb256\" not found" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.146855 systemd[1]: Created slice kubepods-burstable-pod12c4acd8ff3261acde67fc24c5e9ac61.slice - libcontainer container kubepods-burstable-pod12c4acd8ff3261acde67fc24c5e9ac61.slice.
Dec 16 13:02:50.149760 kubelet[2373]: E1216 13:02:50.149709 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-2-e3531eb256\" not found" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.167547 kubelet[2373]: I1216 13:02:50.166966 2373 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.167547 kubelet[2373]: E1216 13:02:50.167502 2373 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.28.57:6443/api/v1/nodes\": dial tcp 77.42.28.57:6443: connect: connection refused" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.192528 kubelet[2373]: E1216 13:02:50.192482 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.28.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-2-e3531eb256?timeout=10s\": dial tcp 77.42.28.57:6443: connect: connection refused" interval="400ms"
Dec 16 13:02:50.280178 kubelet[2373]: I1216 13:02:50.280045 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12c4acd8ff3261acde67fc24c5e9ac61-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-2-e3531eb256\" (UID: \"12c4acd8ff3261acde67fc24c5e9ac61\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.280178 kubelet[2373]: I1216 13:02:50.280175 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/12c4acd8ff3261acde67fc24c5e9ac61-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-2-e3531eb256\" (UID: \"12c4acd8ff3261acde67fc24c5e9ac61\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.280462 kubelet[2373]: I1216 13:02:50.280206 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d061f4c845943bfa636a9cf2d2f6cc4a-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-2-e3531eb256\" (UID: \"d061f4c845943bfa636a9cf2d2f6cc4a\") " pod="kube-system/kube-apiserver-ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.280462 kubelet[2373]: I1216 13:02:50.280232 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12c4acd8ff3261acde67fc24c5e9ac61-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-2-e3531eb256\" (UID: \"12c4acd8ff3261acde67fc24c5e9ac61\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.280462 kubelet[2373]: I1216 13:02:50.280253 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/12c4acd8ff3261acde67fc24c5e9ac61-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-2-e3531eb256\" (UID: \"12c4acd8ff3261acde67fc24c5e9ac61\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.280462 kubelet[2373]: I1216 13:02:50.280276 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12c4acd8ff3261acde67fc24c5e9ac61-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-2-e3531eb256\" (UID: \"12c4acd8ff3261acde67fc24c5e9ac61\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.280462 kubelet[2373]: I1216 13:02:50.280303 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49ac469dd2c2da756bc4adea2f26403c-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-2-e3531eb256\" (UID: \"49ac469dd2c2da756bc4adea2f26403c\") " pod="kube-system/kube-scheduler-ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.280713 kubelet[2373]: I1216 13:02:50.280362 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d061f4c845943bfa636a9cf2d2f6cc4a-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-2-e3531eb256\" (UID: \"d061f4c845943bfa636a9cf2d2f6cc4a\") " pod="kube-system/kube-apiserver-ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.280713 kubelet[2373]: I1216 13:02:50.280386 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d061f4c845943bfa636a9cf2d2f6cc4a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-2-e3531eb256\" (UID: \"d061f4c845943bfa636a9cf2d2f6cc4a\") " pod="kube-system/kube-apiserver-ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.370335 kubelet[2373]: I1216 13:02:50.370191 2373 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.371058 kubelet[2373]: E1216 13:02:50.370971 2373 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.28.57:6443/api/v1/nodes\": dial tcp 77.42.28.57:6443: connect: connection refused" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.424049 containerd[1595]: time="2025-12-16T13:02:50.422969284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-2-e3531eb256,Uid:49ac469dd2c2da756bc4adea2f26403c,Namespace:kube-system,Attempt:0,}"
Dec 16 13:02:50.456067 containerd[1595]: time="2025-12-16T13:02:50.456004542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-2-e3531eb256,Uid:d061f4c845943bfa636a9cf2d2f6cc4a,Namespace:kube-system,Attempt:0,}"
Dec 16 13:02:50.459932 containerd[1595]: time="2025-12-16T13:02:50.459880471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-2-e3531eb256,Uid:12c4acd8ff3261acde67fc24c5e9ac61,Namespace:kube-system,Attempt:0,}"
Dec 16 13:02:50.590482 containerd[1595]: time="2025-12-16T13:02:50.590260350Z" level=info msg="connecting to shim 6a202f91305441f295b973ab999ab85c8559083ff26258fa587c3020440da2f5" address="unix:///run/containerd/s/0fc2bc1e27919f6f2e62dff5df2cc0c28866d548b437e416239017169291c9ea" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:02:50.591099 containerd[1595]: time="2025-12-16T13:02:50.590328428Z" level=info msg="connecting to shim 93ee944b6116171dc071dfad366f0d6d347fa22a2006b8338e826713fb122752" address="unix:///run/containerd/s/0fbd42f5f3bcb4ed765dd49f6d98f9d84ebe6c117c265fdc1e1350368ecf4751" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:02:50.594959 kubelet[2373]: E1216 13:02:50.594751 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.28.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-2-e3531eb256?timeout=10s\": dial tcp 77.42.28.57:6443: connect: connection refused" interval="800ms"
Dec 16 13:02:50.598110 containerd[1595]: time="2025-12-16T13:02:50.598064427Z" level=info msg="connecting to shim 6f3118b0b6895f33bf6a3582449105b4a5f2d9ba21a585dcb40a1afba9cfee0f" address="unix:///run/containerd/s/b1b26c5fc6f272c976a3d1974c5426c5b1d2bc79eb0469809d9a42aa176340cd" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:02:50.694809 systemd[1]: Started cri-containerd-6a202f91305441f295b973ab999ab85c8559083ff26258fa587c3020440da2f5.scope - libcontainer container 6a202f91305441f295b973ab999ab85c8559083ff26258fa587c3020440da2f5.
Dec 16 13:02:50.700307 systemd[1]: Started cri-containerd-6f3118b0b6895f33bf6a3582449105b4a5f2d9ba21a585dcb40a1afba9cfee0f.scope - libcontainer container 6f3118b0b6895f33bf6a3582449105b4a5f2d9ba21a585dcb40a1afba9cfee0f.
Dec 16 13:02:50.702212 systemd[1]: Started cri-containerd-93ee944b6116171dc071dfad366f0d6d347fa22a2006b8338e826713fb122752.scope - libcontainer container 93ee944b6116171dc071dfad366f0d6d347fa22a2006b8338e826713fb122752.
Dec 16 13:02:50.770778 containerd[1595]: time="2025-12-16T13:02:50.770743428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-2-e3531eb256,Uid:49ac469dd2c2da756bc4adea2f26403c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a202f91305441f295b973ab999ab85c8559083ff26258fa587c3020440da2f5\""
Dec 16 13:02:50.781579 kubelet[2373]: I1216 13:02:50.780728 2373 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.783586 kubelet[2373]: E1216 13:02:50.783450 2373 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.28.57:6443/api/v1/nodes\": dial tcp 77.42.28.57:6443: connect: connection refused" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:50.793086 containerd[1595]: time="2025-12-16T13:02:50.793050145Z" level=info msg="CreateContainer within sandbox \"6a202f91305441f295b973ab999ab85c8559083ff26258fa587c3020440da2f5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 16 13:02:50.805403 containerd[1595]: time="2025-12-16T13:02:50.805365953Z" level=info msg="Container c27f250ecbafe4bb780f572192197abde8c2e60b93a44df74414fa60c6320b9e: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:02:50.810341 containerd[1595]: time="2025-12-16T13:02:50.810300006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-2-e3531eb256,Uid:d061f4c845943bfa636a9cf2d2f6cc4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"93ee944b6116171dc071dfad366f0d6d347fa22a2006b8338e826713fb122752\""
Dec 16 13:02:50.810408 containerd[1595]: time="2025-12-16T13:02:50.810376170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-2-e3531eb256,Uid:12c4acd8ff3261acde67fc24c5e9ac61,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f3118b0b6895f33bf6a3582449105b4a5f2d9ba21a585dcb40a1afba9cfee0f\""
Dec 16 13:02:50.813816 containerd[1595]: time="2025-12-16T13:02:50.813785123Z" level=info msg="CreateContainer within sandbox \"6a202f91305441f295b973ab999ab85c8559083ff26258fa587c3020440da2f5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c27f250ecbafe4bb780f572192197abde8c2e60b93a44df74414fa60c6320b9e\""
Dec 16 13:02:50.814168 containerd[1595]: time="2025-12-16T13:02:50.814150779Z" level=info msg="StartContainer for \"c27f250ecbafe4bb780f572192197abde8c2e60b93a44df74414fa60c6320b9e\""
Dec 16 13:02:50.815172 containerd[1595]: time="2025-12-16T13:02:50.815129013Z" level=info msg="connecting to shim c27f250ecbafe4bb780f572192197abde8c2e60b93a44df74414fa60c6320b9e" address="unix:///run/containerd/s/0fc2bc1e27919f6f2e62dff5df2cc0c28866d548b437e416239017169291c9ea" protocol=ttrpc version=3
Dec 16 13:02:50.817011 containerd[1595]: time="2025-12-16T13:02:50.816994212Z" level=info msg="CreateContainer within sandbox \"93ee944b6116171dc071dfad366f0d6d347fa22a2006b8338e826713fb122752\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 16 13:02:50.824668 containerd[1595]: time="2025-12-16T13:02:50.824623842Z" level=info msg="Container 4b672b6d68e52e9ca9a2725e64baf8dd9ccd09299716d94234bbbe7b65eef36b: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:02:50.833578 containerd[1595]: time="2025-12-16T13:02:50.833208882Z" level=info msg="CreateContainer within sandbox \"6f3118b0b6895f33bf6a3582449105b4a5f2d9ba21a585dcb40a1afba9cfee0f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 16 13:02:50.835079 systemd[1]: Started cri-containerd-c27f250ecbafe4bb780f572192197abde8c2e60b93a44df74414fa60c6320b9e.scope - libcontainer container c27f250ecbafe4bb780f572192197abde8c2e60b93a44df74414fa60c6320b9e.
Dec 16 13:02:50.840485 containerd[1595]: time="2025-12-16T13:02:50.840427581Z" level=info msg="CreateContainer within sandbox \"93ee944b6116171dc071dfad366f0d6d347fa22a2006b8338e826713fb122752\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4b672b6d68e52e9ca9a2725e64baf8dd9ccd09299716d94234bbbe7b65eef36b\""
Dec 16 13:02:50.840920 containerd[1595]: time="2025-12-16T13:02:50.840902241Z" level=info msg="StartContainer for \"4b672b6d68e52e9ca9a2725e64baf8dd9ccd09299716d94234bbbe7b65eef36b\""
Dec 16 13:02:50.842108 containerd[1595]: time="2025-12-16T13:02:50.841744121Z" level=info msg="connecting to shim 4b672b6d68e52e9ca9a2725e64baf8dd9ccd09299716d94234bbbe7b65eef36b" address="unix:///run/containerd/s/0fbd42f5f3bcb4ed765dd49f6d98f9d84ebe6c117c265fdc1e1350368ecf4751" protocol=ttrpc version=3
Dec 16 13:02:50.843941 containerd[1595]: time="2025-12-16T13:02:50.843900044Z" level=info msg="Container 8895912bd42b24d0c1f2addac53944a197bfb769afe90d4f1b2c8c4130af6297: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:02:50.850680 containerd[1595]: time="2025-12-16T13:02:50.849390311Z" level=info msg="CreateContainer within sandbox \"6f3118b0b6895f33bf6a3582449105b4a5f2d9ba21a585dcb40a1afba9cfee0f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8895912bd42b24d0c1f2addac53944a197bfb769afe90d4f1b2c8c4130af6297\""
Dec 16 13:02:50.851628 containerd[1595]: time="2025-12-16T13:02:50.851611206Z" level=info msg="StartContainer for \"8895912bd42b24d0c1f2addac53944a197bfb769afe90d4f1b2c8c4130af6297\""
Dec 16 13:02:50.853802 containerd[1595]: time="2025-12-16T13:02:50.853770165Z" level=info msg="connecting to shim 8895912bd42b24d0c1f2addac53944a197bfb769afe90d4f1b2c8c4130af6297" address="unix:///run/containerd/s/b1b26c5fc6f272c976a3d1974c5426c5b1d2bc79eb0469809d9a42aa176340cd" protocol=ttrpc version=3
Dec 16 13:02:50.863697 systemd[1]: Started cri-containerd-4b672b6d68e52e9ca9a2725e64baf8dd9ccd09299716d94234bbbe7b65eef36b.scope - libcontainer container 4b672b6d68e52e9ca9a2725e64baf8dd9ccd09299716d94234bbbe7b65eef36b.
Dec 16 13:02:50.870702 systemd[1]: Started cri-containerd-8895912bd42b24d0c1f2addac53944a197bfb769afe90d4f1b2c8c4130af6297.scope - libcontainer container 8895912bd42b24d0c1f2addac53944a197bfb769afe90d4f1b2c8c4130af6297.
Dec 16 13:02:50.907719 containerd[1595]: time="2025-12-16T13:02:50.907631964Z" level=info msg="StartContainer for \"c27f250ecbafe4bb780f572192197abde8c2e60b93a44df74414fa60c6320b9e\" returns successfully"
Dec 16 13:02:50.946134 containerd[1595]: time="2025-12-16T13:02:50.945812021Z" level=info msg="StartContainer for \"8895912bd42b24d0c1f2addac53944a197bfb769afe90d4f1b2c8c4130af6297\" returns successfully"
Dec 16 13:02:50.953410 containerd[1595]: time="2025-12-16T13:02:50.953297640Z" level=info msg="StartContainer for \"4b672b6d68e52e9ca9a2725e64baf8dd9ccd09299716d94234bbbe7b65eef36b\" returns successfully"
Dec 16 13:02:51.019419 kubelet[2373]: E1216 13:02:51.019248 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-2-e3531eb256\" not found" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:51.020461 kubelet[2373]: E1216 13:02:51.019635 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-2-e3531eb256\" not found" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:51.025267 kubelet[2373]: E1216 13:02:51.025243 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-2-e3531eb256\" not found" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:51.050723 kubelet[2373]: E1216 13:02:51.050687 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://77.42.28.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 77.42.28.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 13:02:51.079010 kubelet[2373]: E1216 13:02:51.078971 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://77.42.28.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 77.42.28.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 13:02:51.165813 kubelet[2373]: E1216 13:02:51.165775 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://77.42.28.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-2-e3531eb256&limit=500&resourceVersion=0\": dial tcp 77.42.28.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 13:02:51.586185 kubelet[2373]: I1216 13:02:51.586142 2373 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:52.025946 kubelet[2373]: E1216 13:02:52.025910 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-2-e3531eb256\" not found" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:52.026331 kubelet[2373]: E1216 13:02:52.026211 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-2-e3531eb256\" not found" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:52.465108 kubelet[2373]: E1216 13:02:52.464986 2373 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-2-2-e3531eb256\" not found" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:52.566321 kubelet[2373]: I1216 13:02:52.566176 2373 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:52.566321 kubelet[2373]: E1216 13:02:52.566216 2373 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-2-2-e3531eb256\": node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:52.591598 kubelet[2373]: E1216 13:02:52.591520 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:52.692069 kubelet[2373]: E1216 13:02:52.692028 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:52.793165 kubelet[2373]: E1216 13:02:52.793102 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:52.893379 kubelet[2373]: E1216 13:02:52.893269 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:52.994257 kubelet[2373]: E1216 13:02:52.994198 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:53.029239 kubelet[2373]: E1216 13:02:53.029160 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-2-e3531eb256\" not found" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:53.060173 kubelet[2373]: E1216 13:02:53.060013 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-2-e3531eb256\" not found" node="ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:53.094444 kubelet[2373]: E1216 13:02:53.094357 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:53.194972 kubelet[2373]: E1216 13:02:53.194898 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:53.295700 kubelet[2373]: E1216 13:02:53.295644 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:53.396269 kubelet[2373]: E1216 13:02:53.396106 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:53.496337 kubelet[2373]: E1216 13:02:53.496285 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:53.597400 kubelet[2373]: E1216 13:02:53.597352 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-2-e3531eb256\" not found"
Dec 16 13:02:53.670838 kubelet[2373]: I1216 13:02:53.670725 2373 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:53.687585 kubelet[2373]: I1216 13:02:53.687274 2373 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:53.695811 kubelet[2373]: I1216 13:02:53.695778 2373 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-2-e3531eb256"
Dec 16 13:02:53.941664 kubelet[2373]: I1216 13:02:53.941541 2373 apiserver.go:52] "Watching apiserver"
Dec 16 13:02:53.979581 kubelet[2373]: I1216 13:02:53.979534 2373 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 16 13:02:54.921632 systemd[1]: Reload requested from client PID 2654 ('systemctl') (unit session-7.scope)...
Dec 16 13:02:54.921660 systemd[1]: Reloading...
Dec 16 13:02:55.086600 zram_generator::config[2722]: No configuration found.
Dec 16 13:02:55.290764 systemd[1]: Reloading finished in 368 ms.
Dec 16 13:02:55.328357 kubelet[2373]: I1216 13:02:55.328030 2373 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:02:55.328183 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:02:55.348193 systemd[1]: kubelet.service: Deactivated successfully.
Dec 16 13:02:55.348377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:02:55.348431 systemd[1]: kubelet.service: Consumed 1.843s CPU time, 126.3M memory peak.
Dec 16 13:02:55.350426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:02:55.485506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:02:55.495013 (kubelet)[2750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 13:02:55.548530 kubelet[2750]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:02:55.548530 kubelet[2750]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 13:02:55.548530 kubelet[2750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:02:55.548530 kubelet[2750]: I1216 13:02:55.547919 2750 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:02:55.555415 kubelet[2750]: I1216 13:02:55.555383 2750 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:02:55.555415 kubelet[2750]: I1216 13:02:55.555406 2750 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:02:55.555668 kubelet[2750]: I1216 13:02:55.555632 2750 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:02:55.556703 kubelet[2750]: I1216 13:02:55.556665 2750 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:02:55.559588 kubelet[2750]: I1216 13:02:55.559530 2750 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:02:55.566431 kubelet[2750]: I1216 13:02:55.566071 2750 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:02:55.570857 kubelet[2750]: I1216 13:02:55.570830 2750 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:02:55.571204 kubelet[2750]: I1216 13:02:55.571175 2750 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:02:55.571337 kubelet[2750]: I1216 13:02:55.571201 2750 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-2-e3531eb256","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:02:55.571424 kubelet[2750]: I1216 13:02:55.571342 2750 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
13:02:55.571424 kubelet[2750]: I1216 13:02:55.571352 2750 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:02:55.571424 kubelet[2750]: I1216 13:02:55.571384 2750 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:02:55.571514 kubelet[2750]: I1216 13:02:55.571495 2750 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:02:55.571514 kubelet[2750]: I1216 13:02:55.571514 2750 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:02:55.571593 kubelet[2750]: I1216 13:02:55.571530 2750 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:02:55.574551 kubelet[2750]: I1216 13:02:55.574520 2750 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:02:55.579667 kubelet[2750]: I1216 13:02:55.579626 2750 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:02:55.580274 kubelet[2750]: I1216 13:02:55.580258 2750 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:02:55.582909 kubelet[2750]: I1216 13:02:55.582897 2750 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:02:55.583073 kubelet[2750]: I1216 13:02:55.583017 2750 server.go:1289] "Started kubelet" Dec 16 13:02:55.584744 kubelet[2750]: I1216 13:02:55.584730 2750 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:02:55.592255 kubelet[2750]: I1216 13:02:55.592221 2750 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:02:55.593831 kubelet[2750]: I1216 13:02:55.593666 2750 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:02:55.594788 kubelet[2750]: I1216 13:02:55.594765 2750 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 13:02:55.596608 kubelet[2750]: I1216 13:02:55.596533 2750 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:02:55.596786 kubelet[2750]: I1216 13:02:55.596765 2750 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:02:55.596964 kubelet[2750]: I1216 13:02:55.596940 2750 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:02:55.597573 kubelet[2750]: I1216 13:02:55.597521 2750 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:02:55.598993 kubelet[2750]: I1216 13:02:55.598961 2750 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:02:55.601574 kubelet[2750]: I1216 13:02:55.601513 2750 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:02:55.602416 kubelet[2750]: I1216 13:02:55.602223 2750 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 13:02:55.602416 kubelet[2750]: I1216 13:02:55.602246 2750 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:02:55.602416 kubelet[2750]: I1216 13:02:55.602260 2750 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:02:55.602416 kubelet[2750]: I1216 13:02:55.602265 2750 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:02:55.602416 kubelet[2750]: E1216 13:02:55.602292 2750 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:02:55.603446 kubelet[2750]: I1216 13:02:55.602732 2750 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:02:55.603630 kubelet[2750]: I1216 13:02:55.603608 2750 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:02:55.604390 kubelet[2750]: E1216 13:02:55.604374 2750 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:02:55.606584 kubelet[2750]: I1216 13:02:55.605526 2750 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:02:55.654032 kubelet[2750]: I1216 13:02:55.654010 2750 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:02:55.654340 kubelet[2750]: I1216 13:02:55.654212 2750 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:02:55.654437 kubelet[2750]: I1216 13:02:55.654421 2750 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:02:55.654635 kubelet[2750]: I1216 13:02:55.654622 2750 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:02:55.654722 kubelet[2750]: I1216 13:02:55.654703 2750 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:02:55.654798 kubelet[2750]: I1216 13:02:55.654791 2750 policy_none.go:49] "None policy: Start" Dec 16 13:02:55.654848 kubelet[2750]: I1216 13:02:55.654842 2750 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:02:55.654889 kubelet[2750]: I1216 13:02:55.654884 
2750 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:02:55.655031 kubelet[2750]: I1216 13:02:55.655018 2750 state_mem.go:75] "Updated machine memory state" Dec 16 13:02:55.658922 kubelet[2750]: E1216 13:02:55.658893 2750 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:02:55.659421 kubelet[2750]: I1216 13:02:55.659050 2750 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:02:55.659421 kubelet[2750]: I1216 13:02:55.659064 2750 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:02:55.659421 kubelet[2750]: I1216 13:02:55.659248 2750 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:02:55.663831 kubelet[2750]: E1216 13:02:55.663806 2750 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:02:55.703901 kubelet[2750]: I1216 13:02:55.703867 2750 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.704541 kubelet[2750]: I1216 13:02:55.704253 2750 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.704988 kubelet[2750]: I1216 13:02:55.704384 2750 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.713331 kubelet[2750]: E1216 13:02:55.713279 2750 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-2-e3531eb256\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.713852 kubelet[2750]: E1216 13:02:55.713797 2750 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-2-e3531eb256\" already exists" 
pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.714217 kubelet[2750]: E1216 13:02:55.714177 2750 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-2-e3531eb256\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.767516 kubelet[2750]: I1216 13:02:55.767183 2750 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.779375 kubelet[2750]: I1216 13:02:55.779340 2750 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.779499 kubelet[2750]: I1216 13:02:55.779430 2750 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.803369 kubelet[2750]: I1216 13:02:55.803225 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d061f4c845943bfa636a9cf2d2f6cc4a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-2-e3531eb256\" (UID: \"d061f4c845943bfa636a9cf2d2f6cc4a\") " pod="kube-system/kube-apiserver-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.803369 kubelet[2750]: I1216 13:02:55.803265 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12c4acd8ff3261acde67fc24c5e9ac61-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-2-e3531eb256\" (UID: \"12c4acd8ff3261acde67fc24c5e9ac61\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.803369 kubelet[2750]: I1216 13:02:55.803282 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/12c4acd8ff3261acde67fc24c5e9ac61-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-2-e3531eb256\" (UID: 
\"12c4acd8ff3261acde67fc24c5e9ac61\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.803369 kubelet[2750]: I1216 13:02:55.803300 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12c4acd8ff3261acde67fc24c5e9ac61-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-2-e3531eb256\" (UID: \"12c4acd8ff3261acde67fc24c5e9ac61\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.803369 kubelet[2750]: I1216 13:02:55.803314 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/12c4acd8ff3261acde67fc24c5e9ac61-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-2-e3531eb256\" (UID: \"12c4acd8ff3261acde67fc24c5e9ac61\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.804009 kubelet[2750]: I1216 13:02:55.803329 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12c4acd8ff3261acde67fc24c5e9ac61-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-2-e3531eb256\" (UID: \"12c4acd8ff3261acde67fc24c5e9ac61\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.804009 kubelet[2750]: I1216 13:02:55.803344 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d061f4c845943bfa636a9cf2d2f6cc4a-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-2-e3531eb256\" (UID: \"d061f4c845943bfa636a9cf2d2f6cc4a\") " pod="kube-system/kube-apiserver-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.804009 kubelet[2750]: I1216 13:02:55.803892 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d061f4c845943bfa636a9cf2d2f6cc4a-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-2-e3531eb256\" (UID: \"d061f4c845943bfa636a9cf2d2f6cc4a\") " pod="kube-system/kube-apiserver-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.804009 kubelet[2750]: I1216 13:02:55.803915 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49ac469dd2c2da756bc4adea2f26403c-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-2-e3531eb256\" (UID: \"49ac469dd2c2da756bc4adea2f26403c\") " pod="kube-system/kube-scheduler-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:55.938250 sudo[2785]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 13:02:55.939214 sudo[2785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 13:02:56.333223 sudo[2785]: pam_unix(sudo:session): session closed for user root Dec 16 13:02:56.576711 kubelet[2750]: I1216 13:02:56.575458 2750 apiserver.go:52] "Watching apiserver" Dec 16 13:02:56.599515 kubelet[2750]: I1216 13:02:56.599154 2750 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:02:56.642581 kubelet[2750]: I1216 13:02:56.642074 2750 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:56.652479 kubelet[2750]: E1216 13:02:56.652354 2750 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-2-e3531eb256\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-2-e3531eb256" Dec 16 13:02:56.683584 kubelet[2750]: I1216 13:02:56.682391 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-2-2-e3531eb256" podStartSLOduration=3.682369623 podStartE2EDuration="3.682369623s" podCreationTimestamp="2025-12-16 13:02:53 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:02:56.669136865 +0000 UTC m=+1.167077217" watchObservedRunningTime="2025-12-16 13:02:56.682369623 +0000 UTC m=+1.180309975" Dec 16 13:02:56.683584 kubelet[2750]: I1216 13:02:56.683428 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-2-2-e3531eb256" podStartSLOduration=3.6834148129999997 podStartE2EDuration="3.683414813s" podCreationTimestamp="2025-12-16 13:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:02:56.682997851 +0000 UTC m=+1.180938243" watchObservedRunningTime="2025-12-16 13:02:56.683414813 +0000 UTC m=+1.181355185" Dec 16 13:02:58.055050 sudo[1813]: pam_unix(sudo:session): session closed for user root Dec 16 13:02:58.223712 sshd[1812]: Connection closed by 139.178.89.65 port 60064 Dec 16 13:02:58.226861 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Dec 16 13:02:58.234409 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:02:58.235746 systemd[1]: sshd@6-77.42.28.57:22-139.178.89.65:60064.service: Deactivated successfully. Dec 16 13:02:58.239237 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:02:58.239514 systemd[1]: session-7.scope: Consumed 6.977s CPU time, 215.9M memory peak. Dec 16 13:02:58.243138 systemd-logind[1560]: Removed session 7. Dec 16 13:02:58.837786 update_engine[1565]: I20251216 13:02:58.837688 1565 update_attempter.cc:509] Updating boot flags... 
Dec 16 13:02:59.884316 kubelet[2750]: I1216 13:02:59.884265 2750 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:02:59.884831 containerd[1595]: time="2025-12-16T13:02:59.884768013Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:02:59.885522 kubelet[2750]: I1216 13:02:59.885489 2750 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:03:00.398671 kubelet[2750]: I1216 13:03:00.398446 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-2-2-e3531eb256" podStartSLOduration=7.398408 podStartE2EDuration="7.398408s" podCreationTimestamp="2025-12-16 13:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:02:56.697858342 +0000 UTC m=+1.195798705" watchObservedRunningTime="2025-12-16 13:03:00.398408 +0000 UTC m=+4.896348352" Dec 16 13:03:00.416013 systemd[1]: Created slice kubepods-besteffort-pode5ba17e4_f948_403e_ab80_c3218def22c6.slice - libcontainer container kubepods-besteffort-pode5ba17e4_f948_403e_ab80_c3218def22c6.slice. Dec 16 13:03:00.436913 systemd[1]: Created slice kubepods-burstable-podda8d6600_f98f_438f_8268_388ae32c6ee2.slice - libcontainer container kubepods-burstable-podda8d6600_f98f_438f_8268_388ae32c6ee2.slice. 
Dec 16 13:03:00.439110 kubelet[2750]: I1216 13:03:00.438895 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e5ba17e4-f948-403e-ab80-c3218def22c6-kube-proxy\") pod \"kube-proxy-cg78q\" (UID: \"e5ba17e4-f948-403e-ab80-c3218def22c6\") " pod="kube-system/kube-proxy-cg78q" Dec 16 13:03:00.439343 kubelet[2750]: I1216 13:03:00.439238 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5ba17e4-f948-403e-ab80-c3218def22c6-xtables-lock\") pod \"kube-proxy-cg78q\" (UID: \"e5ba17e4-f948-403e-ab80-c3218def22c6\") " pod="kube-system/kube-proxy-cg78q" Dec 16 13:03:00.439475 kubelet[2750]: I1216 13:03:00.439439 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5ba17e4-f948-403e-ab80-c3218def22c6-lib-modules\") pod \"kube-proxy-cg78q\" (UID: \"e5ba17e4-f948-403e-ab80-c3218def22c6\") " pod="kube-system/kube-proxy-cg78q" Dec 16 13:03:00.439823 kubelet[2750]: I1216 13:03:00.439599 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brcft\" (UniqueName: \"kubernetes.io/projected/e5ba17e4-f948-403e-ab80-c3218def22c6-kube-api-access-brcft\") pod \"kube-proxy-cg78q\" (UID: \"e5ba17e4-f948-403e-ab80-c3218def22c6\") " pod="kube-system/kube-proxy-cg78q" Dec 16 13:03:00.540487 kubelet[2750]: I1216 13:03:00.540417 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-hostproc\") pod \"cilium-245k6\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.540487 kubelet[2750]: I1216 13:03:00.540481 2750 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-cilium-cgroup\") pod \"cilium-245k6\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.540818 kubelet[2750]: I1216 13:03:00.540511 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-lib-modules\") pod \"cilium-245k6\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.540818 kubelet[2750]: I1216 13:03:00.540551 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-etc-cni-netd\") pod \"cilium-245k6\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.540818 kubelet[2750]: I1216 13:03:00.540588 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-xtables-lock\") pod \"cilium-245k6\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.540818 kubelet[2750]: I1216 13:03:00.540603 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da8d6600-f98f-438f-8268-388ae32c6ee2-clustermesh-secrets\") pod \"cilium-245k6\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.540818 kubelet[2750]: I1216 13:03:00.540619 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-host-proc-sys-net\") pod \"cilium-245k6\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.540818 kubelet[2750]: I1216 13:03:00.540635 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-host-proc-sys-kernel\") pod \"cilium-245k6\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.541079 kubelet[2750]: I1216 13:03:00.540652 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da8d6600-f98f-438f-8268-388ae32c6ee2-cilium-config-path\") pod \"cilium-245k6\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.541079 kubelet[2750]: I1216 13:03:00.540680 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vltp\" (UniqueName: \"kubernetes.io/projected/da8d6600-f98f-438f-8268-388ae32c6ee2-kube-api-access-9vltp\") pod \"cilium-245k6\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.541079 kubelet[2750]: I1216 13:03:00.540738 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-cilium-run\") pod \"cilium-245k6\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.541079 kubelet[2750]: I1216 13:03:00.540773 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-cni-path\") pod \"cilium-245k6\" (UID: 
\"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.541079 kubelet[2750]: I1216 13:03:00.540805 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da8d6600-f98f-438f-8268-388ae32c6ee2-hubble-tls\") pod \"cilium-245k6\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.541079 kubelet[2750]: I1216 13:03:00.540842 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-bpf-maps\") pod \"cilium-245k6\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") " pod="kube-system/cilium-245k6" Dec 16 13:03:00.546658 kubelet[2750]: E1216 13:03:00.546631 2750 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 13:03:00.546658 kubelet[2750]: E1216 13:03:00.546655 2750 projected.go:194] Error preparing data for projected volume kube-api-access-brcft for pod kube-system/kube-proxy-cg78q: configmap "kube-root-ca.crt" not found Dec 16 13:03:00.546753 kubelet[2750]: E1216 13:03:00.546708 2750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e5ba17e4-f948-403e-ab80-c3218def22c6-kube-api-access-brcft podName:e5ba17e4-f948-403e-ab80-c3218def22c6 nodeName:}" failed. No retries permitted until 2025-12-16 13:03:01.046690065 +0000 UTC m=+5.544630417 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-brcft" (UniqueName: "kubernetes.io/projected/e5ba17e4-f948-403e-ab80-c3218def22c6-kube-api-access-brcft") pod "kube-proxy-cg78q" (UID: "e5ba17e4-f948-403e-ab80-c3218def22c6") : configmap "kube-root-ca.crt" not found Dec 16 13:03:00.652627 kubelet[2750]: E1216 13:03:00.651217 2750 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 13:03:00.652627 kubelet[2750]: E1216 13:03:00.651362 2750 projected.go:194] Error preparing data for projected volume kube-api-access-9vltp for pod kube-system/cilium-245k6: configmap "kube-root-ca.crt" not found Dec 16 13:03:00.653080 kubelet[2750]: E1216 13:03:00.652770 2750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/da8d6600-f98f-438f-8268-388ae32c6ee2-kube-api-access-9vltp podName:da8d6600-f98f-438f-8268-388ae32c6ee2 nodeName:}" failed. No retries permitted until 2025-12-16 13:03:01.152748569 +0000 UTC m=+5.650688941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9vltp" (UniqueName: "kubernetes.io/projected/da8d6600-f98f-438f-8268-388ae32c6ee2-kube-api-access-9vltp") pod "cilium-245k6" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2") : configmap "kube-root-ca.crt" not found Dec 16 13:03:01.074534 systemd[1]: Created slice kubepods-besteffort-podc1d2016c_3ccc_4917_afdb_d3fc7be17e33.slice - libcontainer container kubepods-besteffort-podc1d2016c_3ccc_4917_afdb_d3fc7be17e33.slice. 
Dec 16 13:03:01.147253 kubelet[2750]: I1216 13:03:01.147201 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4l84\" (UniqueName: \"kubernetes.io/projected/c1d2016c-3ccc-4917-afdb-d3fc7be17e33-kube-api-access-g4l84\") pod \"cilium-operator-6c4d7847fc-n44hr\" (UID: \"c1d2016c-3ccc-4917-afdb-d3fc7be17e33\") " pod="kube-system/cilium-operator-6c4d7847fc-n44hr" Dec 16 13:03:01.147253 kubelet[2750]: I1216 13:03:01.147283 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1d2016c-3ccc-4917-afdb-d3fc7be17e33-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-n44hr\" (UID: \"c1d2016c-3ccc-4917-afdb-d3fc7be17e33\") " pod="kube-system/cilium-operator-6c4d7847fc-n44hr" Dec 16 13:03:01.325509 containerd[1595]: time="2025-12-16T13:03:01.325344369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cg78q,Uid:e5ba17e4-f948-403e-ab80-c3218def22c6,Namespace:kube-system,Attempt:0,}" Dec 16 13:03:01.342790 containerd[1595]: time="2025-12-16T13:03:01.342455651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-245k6,Uid:da8d6600-f98f-438f-8268-388ae32c6ee2,Namespace:kube-system,Attempt:0,}" Dec 16 13:03:01.346227 containerd[1595]: time="2025-12-16T13:03:01.346197730Z" level=info msg="connecting to shim 32c35a4b41a3b3b6f58e5ae53631050a4885ac0156a19254a56de5930765adb4" address="unix:///run/containerd/s/d08de7f661ded103d5ffac3a0024cedcfef04529d4fe5c1b1a2e7cb34e436f96" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:03:01.360952 containerd[1595]: time="2025-12-16T13:03:01.360880137Z" level=info msg="connecting to shim a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502" address="unix:///run/containerd/s/4976139ee4c0ee643fc14c0bf282a5f100f8939d05bcc46c5f0973f4609db3f1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:03:01.376745 systemd[1]: Started 
cri-containerd-32c35a4b41a3b3b6f58e5ae53631050a4885ac0156a19254a56de5930765adb4.scope - libcontainer container 32c35a4b41a3b3b6f58e5ae53631050a4885ac0156a19254a56de5930765adb4. Dec 16 13:03:01.382353 containerd[1595]: time="2025-12-16T13:03:01.382274211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-n44hr,Uid:c1d2016c-3ccc-4917-afdb-d3fc7be17e33,Namespace:kube-system,Attempt:0,}" Dec 16 13:03:01.384282 systemd[1]: Started cri-containerd-a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502.scope - libcontainer container a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502. Dec 16 13:03:01.421373 containerd[1595]: time="2025-12-16T13:03:01.421324711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cg78q,Uid:e5ba17e4-f948-403e-ab80-c3218def22c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"32c35a4b41a3b3b6f58e5ae53631050a4885ac0156a19254a56de5930765adb4\"" Dec 16 13:03:01.423475 containerd[1595]: time="2025-12-16T13:03:01.423272925Z" level=info msg="connecting to shim 1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99" address="unix:///run/containerd/s/523d1f7e30d85d172bfee3501da6d38440039ee4d9d73bc82fc79a22db183afe" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:03:01.424264 containerd[1595]: time="2025-12-16T13:03:01.424243856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-245k6,Uid:da8d6600-f98f-438f-8268-388ae32c6ee2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\"" Dec 16 13:03:01.428476 containerd[1595]: time="2025-12-16T13:03:01.428426931Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 16 13:03:01.428939 containerd[1595]: time="2025-12-16T13:03:01.428895791Z" level=info msg="CreateContainer within sandbox 
\"32c35a4b41a3b3b6f58e5ae53631050a4885ac0156a19254a56de5930765adb4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:03:01.441905 containerd[1595]: time="2025-12-16T13:03:01.441825952Z" level=info msg="Container ec00685e505ba39017d7be80f09fe578c5511c01c2369ca1744a9c6ea4981ad1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:01.449737 systemd[1]: Started cri-containerd-1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99.scope - libcontainer container 1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99. Dec 16 13:03:01.453773 containerd[1595]: time="2025-12-16T13:03:01.453718997Z" level=info msg="CreateContainer within sandbox \"32c35a4b41a3b3b6f58e5ae53631050a4885ac0156a19254a56de5930765adb4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec00685e505ba39017d7be80f09fe578c5511c01c2369ca1744a9c6ea4981ad1\"" Dec 16 13:03:01.454974 containerd[1595]: time="2025-12-16T13:03:01.454762264Z" level=info msg="StartContainer for \"ec00685e505ba39017d7be80f09fe578c5511c01c2369ca1744a9c6ea4981ad1\"" Dec 16 13:03:01.458180 containerd[1595]: time="2025-12-16T13:03:01.458120862Z" level=info msg="connecting to shim ec00685e505ba39017d7be80f09fe578c5511c01c2369ca1744a9c6ea4981ad1" address="unix:///run/containerd/s/d08de7f661ded103d5ffac3a0024cedcfef04529d4fe5c1b1a2e7cb34e436f96" protocol=ttrpc version=3 Dec 16 13:03:01.484722 systemd[1]: Started cri-containerd-ec00685e505ba39017d7be80f09fe578c5511c01c2369ca1744a9c6ea4981ad1.scope - libcontainer container ec00685e505ba39017d7be80f09fe578c5511c01c2369ca1744a9c6ea4981ad1. 
Dec 16 13:03:01.524684 containerd[1595]: time="2025-12-16T13:03:01.524528641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-n44hr,Uid:c1d2016c-3ccc-4917-afdb-d3fc7be17e33,Namespace:kube-system,Attempt:0,} returns sandbox id \"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\"" Dec 16 13:03:01.569532 containerd[1595]: time="2025-12-16T13:03:01.569486861Z" level=info msg="StartContainer for \"ec00685e505ba39017d7be80f09fe578c5511c01c2369ca1744a9c6ea4981ad1\" returns successfully" Dec 16 13:03:01.686365 kubelet[2750]: I1216 13:03:01.686258 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cg78q" podStartSLOduration=1.6862422769999998 podStartE2EDuration="1.686242277s" podCreationTimestamp="2025-12-16 13:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:03:01.675712057 +0000 UTC m=+6.173652409" watchObservedRunningTime="2025-12-16 13:03:01.686242277 +0000 UTC m=+6.184182629" Dec 16 13:03:06.433498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3003677640.mount: Deactivated successfully. 
Dec 16 13:03:07.832815 containerd[1595]: time="2025-12-16T13:03:07.832758444Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:07.834084 containerd[1595]: time="2025-12-16T13:03:07.834053427Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Dec 16 13:03:07.834660 containerd[1595]: time="2025-12-16T13:03:07.834624105Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:07.836139 containerd[1595]: time="2025-12-16T13:03:07.835757354Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.407276299s" Dec 16 13:03:07.836139 containerd[1595]: time="2025-12-16T13:03:07.835786975Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 16 13:03:07.837320 containerd[1595]: time="2025-12-16T13:03:07.837281180Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 16 13:03:07.842237 containerd[1595]: time="2025-12-16T13:03:07.841836231Z" level=info msg="CreateContainer within sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:03:07.859478 containerd[1595]: time="2025-12-16T13:03:07.859231908Z" level=info msg="Container adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:07.862666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount486042424.mount: Deactivated successfully. Dec 16 13:03:07.869292 containerd[1595]: time="2025-12-16T13:03:07.869268655Z" level=info msg="CreateContainer within sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291\"" Dec 16 13:03:07.869653 containerd[1595]: time="2025-12-16T13:03:07.869628909Z" level=info msg="StartContainer for \"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291\"" Dec 16 13:03:07.871935 containerd[1595]: time="2025-12-16T13:03:07.871725075Z" level=info msg="connecting to shim adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291" address="unix:///run/containerd/s/4976139ee4c0ee643fc14c0bf282a5f100f8939d05bcc46c5f0973f4609db3f1" protocol=ttrpc version=3 Dec 16 13:03:07.901693 systemd[1]: Started cri-containerd-adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291.scope - libcontainer container adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291. Dec 16 13:03:07.932041 containerd[1595]: time="2025-12-16T13:03:07.932001202Z" level=info msg="StartContainer for \"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291\" returns successfully" Dec 16 13:03:07.944085 systemd[1]: cri-containerd-adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291.scope: Deactivated successfully. Dec 16 13:03:07.944654 systemd[1]: cri-containerd-adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291.scope: Consumed 18ms CPU time, 6.5M memory peak, 178K read from disk, 1.3M written to disk. 
Dec 16 13:03:07.964101 containerd[1595]: time="2025-12-16T13:03:07.964052178Z" level=info msg="received container exit event container_id:\"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291\" id:\"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291\" pid:3193 exited_at:{seconds:1765890187 nanos:947672459}" Dec 16 13:03:07.986653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291-rootfs.mount: Deactivated successfully. Dec 16 13:03:08.690524 containerd[1595]: time="2025-12-16T13:03:08.690435095Z" level=info msg="CreateContainer within sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:03:08.706363 containerd[1595]: time="2025-12-16T13:03:08.706318413Z" level=info msg="Container 52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:08.717974 containerd[1595]: time="2025-12-16T13:03:08.717781539Z" level=info msg="CreateContainer within sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa\"" Dec 16 13:03:08.721361 containerd[1595]: time="2025-12-16T13:03:08.721325021Z" level=info msg="StartContainer for \"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa\"" Dec 16 13:03:08.726333 containerd[1595]: time="2025-12-16T13:03:08.726266012Z" level=info msg="connecting to shim 52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa" address="unix:///run/containerd/s/4976139ee4c0ee643fc14c0bf282a5f100f8939d05bcc46c5f0973f4609db3f1" protocol=ttrpc version=3 Dec 16 13:03:08.779680 systemd[1]: Started cri-containerd-52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa.scope - libcontainer 
container 52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa. Dec 16 13:03:08.828063 containerd[1595]: time="2025-12-16T13:03:08.828005725Z" level=info msg="StartContainer for \"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa\" returns successfully" Dec 16 13:03:08.841286 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:03:08.841520 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:03:08.841700 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:03:08.844915 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:03:08.847085 systemd[1]: cri-containerd-52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa.scope: Deactivated successfully. Dec 16 13:03:08.848806 containerd[1595]: time="2025-12-16T13:03:08.848523112Z" level=info msg="received container exit event container_id:\"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa\" id:\"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa\" pid:3238 exited_at:{seconds:1765890188 nanos:846995578}" Dec 16 13:03:08.860216 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 13:03:08.871054 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:03:08.877244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa-rootfs.mount: Deactivated successfully. 
Dec 16 13:03:09.693037 containerd[1595]: time="2025-12-16T13:03:09.692592986Z" level=info msg="CreateContainer within sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 13:03:09.710736 containerd[1595]: time="2025-12-16T13:03:09.710700859Z" level=info msg="Container e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:09.723349 containerd[1595]: time="2025-12-16T13:03:09.723305578Z" level=info msg="CreateContainer within sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e\"" Dec 16 13:03:09.724573 containerd[1595]: time="2025-12-16T13:03:09.724177368Z" level=info msg="StartContainer for \"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e\"" Dec 16 13:03:09.726614 containerd[1595]: time="2025-12-16T13:03:09.726145416Z" level=info msg="connecting to shim e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e" address="unix:///run/containerd/s/4976139ee4c0ee643fc14c0bf282a5f100f8939d05bcc46c5f0973f4609db3f1" protocol=ttrpc version=3 Dec 16 13:03:09.750865 systemd[1]: Started cri-containerd-e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e.scope - libcontainer container e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e. Dec 16 13:03:09.839289 containerd[1595]: time="2025-12-16T13:03:09.839135549Z" level=info msg="StartContainer for \"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e\" returns successfully" Dec 16 13:03:09.839600 systemd[1]: cri-containerd-e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e.scope: Deactivated successfully. 
Dec 16 13:03:09.844404 containerd[1595]: time="2025-12-16T13:03:09.844194646Z" level=info msg="received container exit event container_id:\"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e\" id:\"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e\" pid:3285 exited_at:{seconds:1765890189 nanos:843951861}" Dec 16 13:03:09.880323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e-rootfs.mount: Deactivated successfully. Dec 16 13:03:09.894143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1364313508.mount: Deactivated successfully. Dec 16 13:03:10.288521 containerd[1595]: time="2025-12-16T13:03:10.288461846Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:10.289589 containerd[1595]: time="2025-12-16T13:03:10.289412276Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Dec 16 13:03:10.290339 containerd[1595]: time="2025-12-16T13:03:10.290290510Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:10.291818 containerd[1595]: time="2025-12-16T13:03:10.291404092Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.454095877s" Dec 16 13:03:10.291818 containerd[1595]: 
time="2025-12-16T13:03:10.291440875Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 16 13:03:10.294624 containerd[1595]: time="2025-12-16T13:03:10.294588498Z" level=info msg="CreateContainer within sandbox \"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 13:03:10.303431 containerd[1595]: time="2025-12-16T13:03:10.303377999Z" level=info msg="Container 5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:10.320066 containerd[1595]: time="2025-12-16T13:03:10.319996571Z" level=info msg="CreateContainer within sandbox \"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\"" Dec 16 13:03:10.320665 containerd[1595]: time="2025-12-16T13:03:10.320631018Z" level=info msg="StartContainer for \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\"" Dec 16 13:03:10.321403 containerd[1595]: time="2025-12-16T13:03:10.321225056Z" level=info msg="connecting to shim 5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764" address="unix:///run/containerd/s/523d1f7e30d85d172bfee3501da6d38440039ee4d9d73bc82fc79a22db183afe" protocol=ttrpc version=3 Dec 16 13:03:10.340328 systemd[1]: Started cri-containerd-5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764.scope - libcontainer container 5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764. 
Dec 16 13:03:10.381292 containerd[1595]: time="2025-12-16T13:03:10.381257276Z" level=info msg="StartContainer for \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\" returns successfully" Dec 16 13:03:10.708169 containerd[1595]: time="2025-12-16T13:03:10.707734353Z" level=info msg="CreateContainer within sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 13:03:10.719261 containerd[1595]: time="2025-12-16T13:03:10.719182474Z" level=info msg="Container dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:10.726819 containerd[1595]: time="2025-12-16T13:03:10.726717556Z" level=info msg="CreateContainer within sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b\"" Dec 16 13:03:10.727324 containerd[1595]: time="2025-12-16T13:03:10.727282324Z" level=info msg="StartContainer for \"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b\"" Dec 16 13:03:10.728832 containerd[1595]: time="2025-12-16T13:03:10.728783372Z" level=info msg="connecting to shim dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b" address="unix:///run/containerd/s/4976139ee4c0ee643fc14c0bf282a5f100f8939d05bcc46c5f0973f4609db3f1" protocol=ttrpc version=3 Dec 16 13:03:10.760721 systemd[1]: Started cri-containerd-dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b.scope - libcontainer container dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b. 
Dec 16 13:03:10.803594 kubelet[2750]: I1216 13:03:10.802648 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-n44hr" podStartSLOduration=1.037722035 podStartE2EDuration="9.802628885s" podCreationTimestamp="2025-12-16 13:03:01 +0000 UTC" firstStartedPulling="2025-12-16 13:03:01.52728462 +0000 UTC m=+6.025224972" lastFinishedPulling="2025-12-16 13:03:10.29219148 +0000 UTC m=+14.790131822" observedRunningTime="2025-12-16 13:03:10.716282143 +0000 UTC m=+15.214222495" watchObservedRunningTime="2025-12-16 13:03:10.802628885 +0000 UTC m=+15.300569247" Dec 16 13:03:10.849449 systemd[1]: cri-containerd-dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b.scope: Deactivated successfully. Dec 16 13:03:10.852585 containerd[1595]: time="2025-12-16T13:03:10.851962649Z" level=info msg="received container exit event container_id:\"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b\" id:\"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b\" pid:3374 exited_at:{seconds:1765890190 nanos:850049173}" Dec 16 13:03:10.862686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1217557040.mount: Deactivated successfully. Dec 16 13:03:10.868056 containerd[1595]: time="2025-12-16T13:03:10.868032565Z" level=info msg="StartContainer for \"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b\" returns successfully" Dec 16 13:03:10.892707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b-rootfs.mount: Deactivated successfully. 
Dec 16 13:03:11.709642 containerd[1595]: time="2025-12-16T13:03:11.709553087Z" level=info msg="CreateContainer within sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 13:03:11.723908 containerd[1595]: time="2025-12-16T13:03:11.721222544Z" level=info msg="Container c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:11.732796 containerd[1595]: time="2025-12-16T13:03:11.732730925Z" level=info msg="CreateContainer within sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\"" Dec 16 13:03:11.734098 containerd[1595]: time="2025-12-16T13:03:11.733942884Z" level=info msg="StartContainer for \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\"" Dec 16 13:03:11.736609 containerd[1595]: time="2025-12-16T13:03:11.736510093Z" level=info msg="connecting to shim c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3" address="unix:///run/containerd/s/4976139ee4c0ee643fc14c0bf282a5f100f8939d05bcc46c5f0973f4609db3f1" protocol=ttrpc version=3 Dec 16 13:03:11.768739 systemd[1]: Started cri-containerd-c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3.scope - libcontainer container c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3. 
Dec 16 13:03:11.813033 containerd[1595]: time="2025-12-16T13:03:11.812898731Z" level=info msg="StartContainer for \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\" returns successfully" Dec 16 13:03:11.941592 kubelet[2750]: I1216 13:03:11.941394 2750 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 13:03:11.978690 systemd[1]: Created slice kubepods-burstable-pod188e0486_1312_4a70_9e74_657cd0b042e8.slice - libcontainer container kubepods-burstable-pod188e0486_1312_4a70_9e74_657cd0b042e8.slice. Dec 16 13:03:11.986121 systemd[1]: Created slice kubepods-burstable-pod8c39a6f1_f57f_4760_8893_1a3b77023d70.slice - libcontainer container kubepods-burstable-pod8c39a6f1_f57f_4760_8893_1a3b77023d70.slice. Dec 16 13:03:12.032205 kubelet[2750]: I1216 13:03:12.032159 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk5rl\" (UniqueName: \"kubernetes.io/projected/8c39a6f1-f57f-4760-8893-1a3b77023d70-kube-api-access-gk5rl\") pod \"coredns-674b8bbfcf-5vq8c\" (UID: \"8c39a6f1-f57f-4760-8893-1a3b77023d70\") " pod="kube-system/coredns-674b8bbfcf-5vq8c" Dec 16 13:03:12.032451 kubelet[2750]: I1216 13:03:12.032227 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/188e0486-1312-4a70-9e74-657cd0b042e8-config-volume\") pod \"coredns-674b8bbfcf-d4rqk\" (UID: \"188e0486-1312-4a70-9e74-657cd0b042e8\") " pod="kube-system/coredns-674b8bbfcf-d4rqk" Dec 16 13:03:12.032451 kubelet[2750]: I1216 13:03:12.032249 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c39a6f1-f57f-4760-8893-1a3b77023d70-config-volume\") pod \"coredns-674b8bbfcf-5vq8c\" (UID: \"8c39a6f1-f57f-4760-8893-1a3b77023d70\") " pod="kube-system/coredns-674b8bbfcf-5vq8c" Dec 16 13:03:12.032451 
kubelet[2750]: I1216 13:03:12.032272 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lc67\" (UniqueName: \"kubernetes.io/projected/188e0486-1312-4a70-9e74-657cd0b042e8-kube-api-access-8lc67\") pod \"coredns-674b8bbfcf-d4rqk\" (UID: \"188e0486-1312-4a70-9e74-657cd0b042e8\") " pod="kube-system/coredns-674b8bbfcf-d4rqk" Dec 16 13:03:12.285485 containerd[1595]: time="2025-12-16T13:03:12.285100126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d4rqk,Uid:188e0486-1312-4a70-9e74-657cd0b042e8,Namespace:kube-system,Attempt:0,}" Dec 16 13:03:12.290296 containerd[1595]: time="2025-12-16T13:03:12.290044654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5vq8c,Uid:8c39a6f1-f57f-4760-8893-1a3b77023d70,Namespace:kube-system,Attempt:0,}" Dec 16 13:03:14.081315 systemd-networkd[1443]: cilium_host: Link UP Dec 16 13:03:14.081510 systemd-networkd[1443]: cilium_net: Link UP Dec 16 13:03:14.081778 systemd-networkd[1443]: cilium_net: Gained carrier Dec 16 13:03:14.081968 systemd-networkd[1443]: cilium_host: Gained carrier Dec 16 13:03:14.217916 systemd-networkd[1443]: cilium_vxlan: Link UP Dec 16 13:03:14.217929 systemd-networkd[1443]: cilium_vxlan: Gained carrier Dec 16 13:03:14.784208 kernel: NET: Registered PF_ALG protocol family Dec 16 13:03:14.791787 systemd-networkd[1443]: cilium_net: Gained IPv6LL Dec 16 13:03:15.048794 systemd-networkd[1443]: cilium_host: Gained IPv6LL Dec 16 13:03:15.442834 systemd-networkd[1443]: lxc_health: Link UP Dec 16 13:03:15.443101 systemd-networkd[1443]: lxc_health: Gained carrier Dec 16 13:03:15.843614 kernel: eth0: renamed from tmp6124f Dec 16 13:03:15.844780 systemd-networkd[1443]: lxcefffd9fff9c5: Link UP Dec 16 13:03:15.848320 systemd-networkd[1443]: lxcefffd9fff9c5: Gained carrier Dec 16 13:03:15.862877 systemd-networkd[1443]: lxcf80b716fc011: Link UP Dec 16 13:03:15.868618 kernel: eth0: renamed from tmp756f5 Dec 
16 13:03:15.869117 systemd-networkd[1443]: lxcf80b716fc011: Gained carrier Dec 16 13:03:16.072742 systemd-networkd[1443]: cilium_vxlan: Gained IPv6LL Dec 16 13:03:16.903805 systemd-networkd[1443]: lxc_health: Gained IPv6LL Dec 16 13:03:17.378272 kubelet[2750]: I1216 13:03:17.378199 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-245k6" podStartSLOduration=10.968586616 podStartE2EDuration="17.378168039s" podCreationTimestamp="2025-12-16 13:03:00 +0000 UTC" firstStartedPulling="2025-12-16 13:03:01.427387482 +0000 UTC m=+5.925327824" lastFinishedPulling="2025-12-16 13:03:07.836968905 +0000 UTC m=+12.334909247" observedRunningTime="2025-12-16 13:03:12.742955175 +0000 UTC m=+17.240895589" watchObservedRunningTime="2025-12-16 13:03:17.378168039 +0000 UTC m=+21.876108391" Dec 16 13:03:17.671764 systemd-networkd[1443]: lxcefffd9fff9c5: Gained IPv6LL Dec 16 13:03:17.673023 systemd-networkd[1443]: lxcf80b716fc011: Gained IPv6LL Dec 16 13:03:19.024096 containerd[1595]: time="2025-12-16T13:03:19.024050420Z" level=info msg="connecting to shim 6124f4da806ea2441006fa057277ab799d63b9a9a70a7c40fcc0bd2c3dddf298" address="unix:///run/containerd/s/6fb2923211df7fe8cfe1d92118846d7ffb94311bff96d261b6ee37abfc0e8f06" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:03:19.059877 systemd[1]: Started cri-containerd-6124f4da806ea2441006fa057277ab799d63b9a9a70a7c40fcc0bd2c3dddf298.scope - libcontainer container 6124f4da806ea2441006fa057277ab799d63b9a9a70a7c40fcc0bd2c3dddf298. 
Dec 16 13:03:19.080046 containerd[1595]: time="2025-12-16T13:03:19.079998556Z" level=info msg="connecting to shim 756f5bd8fb1e450b0cd25596445b5416929bf37c7147d468492db532dc6cb66e" address="unix:///run/containerd/s/55486668ab451c51ce84406694c27f10f31d60149b6d6e5c46ea4930536f3c8b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:03:19.116370 systemd[1]: Started cri-containerd-756f5bd8fb1e450b0cd25596445b5416929bf37c7147d468492db532dc6cb66e.scope - libcontainer container 756f5bd8fb1e450b0cd25596445b5416929bf37c7147d468492db532dc6cb66e. Dec 16 13:03:19.158615 containerd[1595]: time="2025-12-16T13:03:19.158468219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5vq8c,Uid:8c39a6f1-f57f-4760-8893-1a3b77023d70,Namespace:kube-system,Attempt:0,} returns sandbox id \"6124f4da806ea2441006fa057277ab799d63b9a9a70a7c40fcc0bd2c3dddf298\"" Dec 16 13:03:19.176048 containerd[1595]: time="2025-12-16T13:03:19.175745294Z" level=info msg="CreateContainer within sandbox \"6124f4da806ea2441006fa057277ab799d63b9a9a70a7c40fcc0bd2c3dddf298\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:03:19.203653 containerd[1595]: time="2025-12-16T13:03:19.202637418Z" level=info msg="Container 4671085d2cc3e6f251f7b340930c69e0697b18f97407560e351fe5bc2e5608f4: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:19.208588 containerd[1595]: time="2025-12-16T13:03:19.208506246Z" level=info msg="CreateContainer within sandbox \"6124f4da806ea2441006fa057277ab799d63b9a9a70a7c40fcc0bd2c3dddf298\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4671085d2cc3e6f251f7b340930c69e0697b18f97407560e351fe5bc2e5608f4\"" Dec 16 13:03:19.210181 containerd[1595]: time="2025-12-16T13:03:19.209592578Z" level=info msg="StartContainer for \"4671085d2cc3e6f251f7b340930c69e0697b18f97407560e351fe5bc2e5608f4\"" Dec 16 13:03:19.211598 containerd[1595]: time="2025-12-16T13:03:19.210700372Z" level=info msg="connecting to shim 
4671085d2cc3e6f251f7b340930c69e0697b18f97407560e351fe5bc2e5608f4" address="unix:///run/containerd/s/6fb2923211df7fe8cfe1d92118846d7ffb94311bff96d261b6ee37abfc0e8f06" protocol=ttrpc version=3 Dec 16 13:03:19.219372 containerd[1595]: time="2025-12-16T13:03:19.219332452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d4rqk,Uid:188e0486-1312-4a70-9e74-657cd0b042e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"756f5bd8fb1e450b0cd25596445b5416929bf37c7147d468492db532dc6cb66e\"" Dec 16 13:03:19.226470 containerd[1595]: time="2025-12-16T13:03:19.226312590Z" level=info msg="CreateContainer within sandbox \"756f5bd8fb1e450b0cd25596445b5416929bf37c7147d468492db532dc6cb66e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:03:19.239345 containerd[1595]: time="2025-12-16T13:03:19.239288324Z" level=info msg="Container bd4347cfbb06a3c34328c4f5e8f99e098557fa65572d40b2cb0bac31b2395847: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:19.240117 systemd[1]: Started cri-containerd-4671085d2cc3e6f251f7b340930c69e0697b18f97407560e351fe5bc2e5608f4.scope - libcontainer container 4671085d2cc3e6f251f7b340930c69e0697b18f97407560e351fe5bc2e5608f4. 
Dec 16 13:03:19.248124 containerd[1595]: time="2025-12-16T13:03:19.248091190Z" level=info msg="CreateContainer within sandbox \"756f5bd8fb1e450b0cd25596445b5416929bf37c7147d468492db532dc6cb66e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd4347cfbb06a3c34328c4f5e8f99e098557fa65572d40b2cb0bac31b2395847\"" Dec 16 13:03:19.249567 containerd[1595]: time="2025-12-16T13:03:19.249537809Z" level=info msg="StartContainer for \"bd4347cfbb06a3c34328c4f5e8f99e098557fa65572d40b2cb0bac31b2395847\"" Dec 16 13:03:19.250822 containerd[1595]: time="2025-12-16T13:03:19.250789214Z" level=info msg="connecting to shim bd4347cfbb06a3c34328c4f5e8f99e098557fa65572d40b2cb0bac31b2395847" address="unix:///run/containerd/s/55486668ab451c51ce84406694c27f10f31d60149b6d6e5c46ea4930536f3c8b" protocol=ttrpc version=3 Dec 16 13:03:19.270782 systemd[1]: Started cri-containerd-bd4347cfbb06a3c34328c4f5e8f99e098557fa65572d40b2cb0bac31b2395847.scope - libcontainer container bd4347cfbb06a3c34328c4f5e8f99e098557fa65572d40b2cb0bac31b2395847. 
Dec 16 13:03:19.289294 containerd[1595]: time="2025-12-16T13:03:19.288339435Z" level=info msg="StartContainer for \"4671085d2cc3e6f251f7b340930c69e0697b18f97407560e351fe5bc2e5608f4\" returns successfully" Dec 16 13:03:19.308670 containerd[1595]: time="2025-12-16T13:03:19.308616167Z" level=info msg="StartContainer for \"bd4347cfbb06a3c34328c4f5e8f99e098557fa65572d40b2cb0bac31b2395847\" returns successfully" Dec 16 13:03:19.794606 kubelet[2750]: I1216 13:03:19.793988 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d4rqk" podStartSLOduration=18.793972936 podStartE2EDuration="18.793972936s" podCreationTimestamp="2025-12-16 13:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:03:19.76899545 +0000 UTC m=+24.266935802" watchObservedRunningTime="2025-12-16 13:03:19.793972936 +0000 UTC m=+24.291913289" Dec 16 13:03:19.812436 kubelet[2750]: I1216 13:03:19.812376 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5vq8c" podStartSLOduration=18.812358828 podStartE2EDuration="18.812358828s" podCreationTimestamp="2025-12-16 13:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:03:19.810552273 +0000 UTC m=+24.308492626" watchObservedRunningTime="2025-12-16 13:03:19.812358828 +0000 UTC m=+24.310299180" Dec 16 13:03:20.014506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288990237.mount: Deactivated successfully. Dec 16 13:04:23.140632 systemd[1]: Started sshd@7-77.42.28.57:22-139.178.89.65:44644.service - OpenSSH per-connection server daemon (139.178.89.65:44644). 
Dec 16 13:04:24.275771 sshd[4092]: Accepted publickey for core from 139.178.89.65 port 44644 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:04:24.279870 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:24.292233 systemd-logind[1560]: New session 8 of user core.
Dec 16 13:04:24.300866 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 13:04:25.747853 sshd[4095]: Connection closed by 139.178.89.65 port 44644
Dec 16 13:04:25.748497 sshd-session[4092]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:25.753160 systemd[1]: sshd@7-77.42.28.57:22-139.178.89.65:44644.service: Deactivated successfully.
Dec 16 13:04:25.756951 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 13:04:25.758998 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit.
Dec 16 13:04:25.761873 systemd-logind[1560]: Removed session 8.
Dec 16 13:04:30.903301 systemd[1]: Started sshd@8-77.42.28.57:22-139.178.89.65:52014.service - OpenSSH per-connection server daemon (139.178.89.65:52014).
Dec 16 13:04:31.952467 sshd[4108]: Accepted publickey for core from 139.178.89.65 port 52014 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:04:31.956236 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:31.965900 systemd-logind[1560]: New session 9 of user core.
Dec 16 13:04:31.974834 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 13:04:32.753901 sshd[4111]: Connection closed by 139.178.89.65 port 52014
Dec 16 13:04:32.754667 sshd-session[4108]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:32.762661 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit.
Dec 16 13:04:32.763285 systemd[1]: sshd@8-77.42.28.57:22-139.178.89.65:52014.service: Deactivated successfully.
Dec 16 13:04:32.765337 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 13:04:32.767757 systemd-logind[1560]: Removed session 9.
Dec 16 13:04:37.957763 systemd[1]: Started sshd@9-77.42.28.57:22-139.178.89.65:52016.service - OpenSSH per-connection server daemon (139.178.89.65:52016).
Dec 16 13:04:39.069461 sshd[4125]: Accepted publickey for core from 139.178.89.65 port 52016 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:04:39.070814 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:39.077855 systemd-logind[1560]: New session 10 of user core.
Dec 16 13:04:39.082854 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 13:04:39.924897 sshd[4128]: Connection closed by 139.178.89.65 port 52016
Dec 16 13:04:39.925881 sshd-session[4125]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:39.931909 systemd[1]: sshd@9-77.42.28.57:22-139.178.89.65:52016.service: Deactivated successfully.
Dec 16 13:04:39.935125 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 13:04:39.939371 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit.
Dec 16 13:04:39.941773 systemd-logind[1560]: Removed session 10.
Dec 16 13:04:40.110835 systemd[1]: Started sshd@10-77.42.28.57:22-139.178.89.65:52032.service - OpenSSH per-connection server daemon (139.178.89.65:52032).
Dec 16 13:04:41.207495 sshd[4140]: Accepted publickey for core from 139.178.89.65 port 52032 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:04:41.208777 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:41.213474 systemd-logind[1560]: New session 11 of user core.
Dec 16 13:04:41.220791 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 13:04:42.077166 sshd[4143]: Connection closed by 139.178.89.65 port 52032
Dec 16 13:04:42.079093 sshd-session[4140]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:42.083200 systemd[1]: sshd@10-77.42.28.57:22-139.178.89.65:52032.service: Deactivated successfully.
Dec 16 13:04:42.085666 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 13:04:42.086732 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit.
Dec 16 13:04:42.088021 systemd-logind[1560]: Removed session 11.
Dec 16 13:04:42.230344 systemd[1]: Started sshd@11-77.42.28.57:22-139.178.89.65:48614.service - OpenSSH per-connection server daemon (139.178.89.65:48614).
Dec 16 13:04:43.229614 sshd[4153]: Accepted publickey for core from 139.178.89.65 port 48614 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:04:43.232144 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:43.239662 systemd-logind[1560]: New session 12 of user core.
Dec 16 13:04:43.247920 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 13:04:44.019065 sshd[4156]: Connection closed by 139.178.89.65 port 48614
Dec 16 13:04:44.019975 sshd-session[4153]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:44.026765 systemd[1]: sshd@11-77.42.28.57:22-139.178.89.65:48614.service: Deactivated successfully.
Dec 16 13:04:44.031490 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 13:04:44.034432 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit.
Dec 16 13:04:44.038542 systemd-logind[1560]: Removed session 12.
Dec 16 13:04:49.227785 systemd[1]: Started sshd@12-77.42.28.57:22-139.178.89.65:48616.service - OpenSSH per-connection server daemon (139.178.89.65:48616).
Dec 16 13:04:50.346352 sshd[4167]: Accepted publickey for core from 139.178.89.65 port 48616 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:04:50.348438 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:50.356485 systemd-logind[1560]: New session 13 of user core.
Dec 16 13:04:50.364836 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 13:04:51.209918 sshd[4170]: Connection closed by 139.178.89.65 port 48616
Dec 16 13:04:51.211845 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:51.217291 systemd[1]: sshd@12-77.42.28.57:22-139.178.89.65:48616.service: Deactivated successfully.
Dec 16 13:04:51.222308 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 13:04:51.227470 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit.
Dec 16 13:04:51.230934 systemd-logind[1560]: Removed session 13.
Dec 16 13:04:51.404513 systemd[1]: Started sshd@13-77.42.28.57:22-139.178.89.65:43676.service - OpenSSH per-connection server daemon (139.178.89.65:43676).
Dec 16 13:04:52.513455 sshd[4182]: Accepted publickey for core from 139.178.89.65 port 43676 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:04:52.515911 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:52.524401 systemd-logind[1560]: New session 14 of user core.
Dec 16 13:04:52.530812 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 13:04:53.559771 sshd[4185]: Connection closed by 139.178.89.65 port 43676
Dec 16 13:04:53.560919 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:53.567966 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit.
Dec 16 13:04:53.568537 systemd[1]: sshd@13-77.42.28.57:22-139.178.89.65:43676.service: Deactivated successfully.
Dec 16 13:04:53.570749 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 13:04:53.572732 systemd-logind[1560]: Removed session 14.
Dec 16 13:04:53.752600 systemd[1]: Started sshd@14-77.42.28.57:22-139.178.89.65:43682.service - OpenSSH per-connection server daemon (139.178.89.65:43682).
Dec 16 13:04:54.879978 sshd[4200]: Accepted publickey for core from 139.178.89.65 port 43682 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:04:54.882064 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:54.888552 systemd-logind[1560]: New session 15 of user core.
Dec 16 13:04:54.895850 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 13:04:56.313617 sshd[4203]: Connection closed by 139.178.89.65 port 43682
Dec 16 13:04:56.316688 sshd-session[4200]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:56.324171 systemd[1]: sshd@14-77.42.28.57:22-139.178.89.65:43682.service: Deactivated successfully.
Dec 16 13:04:56.329451 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 13:04:56.332768 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit.
Dec 16 13:04:56.335509 systemd-logind[1560]: Removed session 15.
Dec 16 13:04:56.470664 systemd[1]: Started sshd@15-77.42.28.57:22-139.178.89.65:43692.service - OpenSSH per-connection server daemon (139.178.89.65:43692).
Dec 16 13:04:57.476118 sshd[4222]: Accepted publickey for core from 139.178.89.65 port 43692 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:04:57.477767 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:57.483502 systemd-logind[1560]: New session 16 of user core.
Dec 16 13:04:57.488710 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 13:04:58.474418 sshd[4225]: Connection closed by 139.178.89.65 port 43692
Dec 16 13:04:58.474823 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:58.480338 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit.
Dec 16 13:04:58.480856 systemd[1]: sshd@15-77.42.28.57:22-139.178.89.65:43692.service: Deactivated successfully.
Dec 16 13:04:58.484233 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 13:04:58.487487 systemd-logind[1560]: Removed session 16.
Dec 16 13:04:58.684700 systemd[1]: Started sshd@16-77.42.28.57:22-139.178.89.65:43698.service - OpenSSH per-connection server daemon (139.178.89.65:43698).
Dec 16 13:04:59.796118 sshd[4235]: Accepted publickey for core from 139.178.89.65 port 43698 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:04:59.796840 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:59.804522 systemd-logind[1560]: New session 17 of user core.
Dec 16 13:04:59.810810 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 13:05:00.650029 sshd[4238]: Connection closed by 139.178.89.65 port 43698
Dec 16 13:05:00.650772 sshd-session[4235]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:00.654904 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit.
Dec 16 13:05:00.655756 systemd[1]: sshd@16-77.42.28.57:22-139.178.89.65:43698.service: Deactivated successfully.
Dec 16 13:05:00.658105 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 13:05:00.659822 systemd-logind[1560]: Removed session 17.
Dec 16 13:05:05.838741 systemd[1]: Started sshd@17-77.42.28.57:22-139.178.89.65:34538.service - OpenSSH per-connection server daemon (139.178.89.65:34538).
Dec 16 13:05:06.941060 sshd[4254]: Accepted publickey for core from 139.178.89.65 port 34538 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:05:06.944292 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:06.955538 systemd-logind[1560]: New session 18 of user core.
Dec 16 13:05:06.966861 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 13:05:07.762535 sshd[4257]: Connection closed by 139.178.89.65 port 34538
Dec 16 13:05:07.763448 sshd-session[4254]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:07.769440 systemd[1]: sshd@17-77.42.28.57:22-139.178.89.65:34538.service: Deactivated successfully.
Dec 16 13:05:07.773866 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 13:05:07.776017 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit.
Dec 16 13:05:07.778290 systemd-logind[1560]: Removed session 18.
Dec 16 13:05:07.980768 systemd[1]: Started sshd@18-77.42.28.57:22-139.178.89.65:34554.service - OpenSSH per-connection server daemon (139.178.89.65:34554).
Dec 16 13:05:09.080513 sshd[4269]: Accepted publickey for core from 139.178.89.65 port 34554 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:05:09.082081 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:09.087418 systemd-logind[1560]: New session 19 of user core.
Dec 16 13:05:09.098744 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 13:05:11.264649 containerd[1595]: time="2025-12-16T13:05:11.264053975Z" level=info msg="StopContainer for \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\" with timeout 30 (s)"
Dec 16 13:05:11.266598 containerd[1595]: time="2025-12-16T13:05:11.266541062Z" level=info msg="Stop container \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\" with signal terminated"
Dec 16 13:05:11.281078 systemd[1]: cri-containerd-5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764.scope: Deactivated successfully.
Dec 16 13:05:11.283834 containerd[1595]: time="2025-12-16T13:05:11.283779109Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 13:05:11.285718 containerd[1595]: time="2025-12-16T13:05:11.285646893Z" level=info msg="received container exit event container_id:\"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\" id:\"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\" pid:3340 exited_at:{seconds:1765890311 nanos:283957876}"
Dec 16 13:05:11.294184 containerd[1595]: time="2025-12-16T13:05:11.294157190Z" level=info msg="StopContainer for \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\" with timeout 2 (s)"
Dec 16 13:05:11.294972 containerd[1595]: time="2025-12-16T13:05:11.294903198Z" level=info msg="Stop container \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\" with signal terminated"
Dec 16 13:05:11.305208 systemd-networkd[1443]: lxc_health: Link DOWN
Dec 16 13:05:11.305215 systemd-networkd[1443]: lxc_health: Lost carrier
Dec 16 13:05:11.325530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764-rootfs.mount: Deactivated successfully.
Dec 16 13:05:11.330961 containerd[1595]: time="2025-12-16T13:05:11.329722837Z" level=info msg="received container exit event container_id:\"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\" id:\"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\" pid:3411 exited_at:{seconds:1765890311 nanos:329393528}"
Dec 16 13:05:11.330643 systemd[1]: cri-containerd-c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3.scope: Deactivated successfully.
Dec 16 13:05:11.331100 systemd[1]: cri-containerd-c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3.scope: Consumed 6.197s CPU time, 194.1M memory peak, 72.8M read from disk, 13.3M written to disk.
Dec 16 13:05:11.352423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3-rootfs.mount: Deactivated successfully.
Dec 16 13:05:11.362700 containerd[1595]: time="2025-12-16T13:05:11.362663429Z" level=info msg="StopContainer for \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\" returns successfully"
Dec 16 13:05:11.363998 containerd[1595]: time="2025-12-16T13:05:11.363964073Z" level=info msg="StopPodSandbox for \"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\""
Dec 16 13:05:11.364445 containerd[1595]: time="2025-12-16T13:05:11.364321877Z" level=info msg="StopContainer for \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\" returns successfully"
Dec 16 13:05:11.364804 containerd[1595]: time="2025-12-16T13:05:11.364753566Z" level=info msg="StopPodSandbox for \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\""
Dec 16 13:05:11.367541 containerd[1595]: time="2025-12-16T13:05:11.367516067Z" level=info msg="Container to stop \"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:05:11.368145 containerd[1595]: time="2025-12-16T13:05:11.367715784Z" level=info msg="Container to stop \"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:05:11.368145 containerd[1595]: time="2025-12-16T13:05:11.367735553Z" level=info msg="Container to stop \"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:05:11.368145 containerd[1595]: time="2025-12-16T13:05:11.367759549Z" level=info msg="Container to stop \"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:05:11.368145 containerd[1595]: time="2025-12-16T13:05:11.367768867Z" level=info msg="Container to stop \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:05:11.368145 containerd[1595]: time="2025-12-16T13:05:11.367523782Z" level=info msg="Container to stop \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:05:11.378124 systemd[1]: cri-containerd-1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99.scope: Deactivated successfully.
Dec 16 13:05:11.379417 systemd[1]: cri-containerd-a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502.scope: Deactivated successfully.
Dec 16 13:05:11.388616 containerd[1595]: time="2025-12-16T13:05:11.388263905Z" level=info msg="received sandbox exit event container_id:\"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\" id:\"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\" exit_status:137 exited_at:{seconds:1765890311 nanos:387820074}" monitor_name=podsandbox
Dec 16 13:05:11.392847 containerd[1595]: time="2025-12-16T13:05:11.392797844Z" level=info msg="received sandbox exit event container_id:\"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" id:\"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" exit_status:137 exited_at:{seconds:1765890311 nanos:390087183}" monitor_name=podsandbox
Dec 16 13:05:11.437751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502-rootfs.mount: Deactivated successfully.
Dec 16 13:05:11.446495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99-rootfs.mount: Deactivated successfully.
Dec 16 13:05:11.449008 containerd[1595]: time="2025-12-16T13:05:11.448895359Z" level=info msg="shim disconnected" id=a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502 namespace=k8s.io
Dec 16 13:05:11.449008 containerd[1595]: time="2025-12-16T13:05:11.448932863Z" level=warning msg="cleaning up after shim disconnected" id=a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502 namespace=k8s.io
Dec 16 13:05:11.460745 containerd[1595]: time="2025-12-16T13:05:11.448940417Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 13:05:11.460901 containerd[1595]: time="2025-12-16T13:05:11.456415916Z" level=info msg="shim disconnected" id=1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99 namespace=k8s.io
Dec 16 13:05:11.461062 containerd[1595]: time="2025-12-16T13:05:11.460948903Z" level=warning msg="cleaning up after shim disconnected" id=1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99 namespace=k8s.io
Dec 16 13:05:11.461062 containerd[1595]: time="2025-12-16T13:05:11.460959915Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 13:05:11.491791 containerd[1595]: time="2025-12-16T13:05:11.491744813Z" level=info msg="received sandbox container exit event sandbox_id:\"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\" exit_status:137 exited_at:{seconds:1765890311 nanos:387820074}" monitor_name=criService
Dec 16 13:05:11.492491 containerd[1595]: time="2025-12-16T13:05:11.492430804Z" level=info msg="received sandbox container exit event sandbox_id:\"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" exit_status:137 exited_at:{seconds:1765890311 nanos:390087183}" monitor_name=criService
Dec 16 13:05:11.494318 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99-shm.mount: Deactivated successfully.
Dec 16 13:05:11.494424 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502-shm.mount: Deactivated successfully.
Dec 16 13:05:11.494924 containerd[1595]: time="2025-12-16T13:05:11.494902762Z" level=info msg="TearDown network for sandbox \"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\" successfully"
Dec 16 13:05:11.494924 containerd[1595]: time="2025-12-16T13:05:11.494923512Z" level=info msg="StopPodSandbox for \"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\" returns successfully"
Dec 16 13:05:11.495046 containerd[1595]: time="2025-12-16T13:05:11.495026823Z" level=info msg="TearDown network for sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" successfully"
Dec 16 13:05:11.495046 containerd[1595]: time="2025-12-16T13:05:11.495043355Z" level=info msg="StopPodSandbox for \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" returns successfully"
Dec 16 13:05:11.657998 kubelet[2750]: I1216 13:05:11.657830 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-xtables-lock\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.658514 kubelet[2750]: I1216 13:05:11.658057 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:05:11.659409 kubelet[2750]: I1216 13:05:11.658687 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1d2016c-3ccc-4917-afdb-d3fc7be17e33-cilium-config-path\") pod \"c1d2016c-3ccc-4917-afdb-d3fc7be17e33\" (UID: \"c1d2016c-3ccc-4917-afdb-d3fc7be17e33\") "
Dec 16 13:05:11.659409 kubelet[2750]: I1216 13:05:11.658731 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da8d6600-f98f-438f-8268-388ae32c6ee2-hubble-tls\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.659409 kubelet[2750]: I1216 13:05:11.658757 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-etc-cni-netd\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.659409 kubelet[2750]: I1216 13:05:11.658777 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-cilium-run\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.659409 kubelet[2750]: I1216 13:05:11.658811 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vltp\" (UniqueName: \"kubernetes.io/projected/da8d6600-f98f-438f-8268-388ae32c6ee2-kube-api-access-9vltp\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.659409 kubelet[2750]: I1216 13:05:11.658833 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-hostproc\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.660032 kubelet[2750]: I1216 13:05:11.658852 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-cilium-cgroup\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.660032 kubelet[2750]: I1216 13:05:11.658875 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da8d6600-f98f-438f-8268-388ae32c6ee2-clustermesh-secrets\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.660032 kubelet[2750]: I1216 13:05:11.658897 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-host-proc-sys-kernel\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.660032 kubelet[2750]: I1216 13:05:11.658921 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-lib-modules\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.660032 kubelet[2750]: I1216 13:05:11.658943 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4l84\" (UniqueName: \"kubernetes.io/projected/c1d2016c-3ccc-4917-afdb-d3fc7be17e33-kube-api-access-g4l84\") pod \"c1d2016c-3ccc-4917-afdb-d3fc7be17e33\" (UID: \"c1d2016c-3ccc-4917-afdb-d3fc7be17e33\") "
Dec 16 13:05:11.660032 kubelet[2750]: I1216 13:05:11.658968 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-bpf-maps\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.660261 kubelet[2750]: I1216 13:05:11.658989 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-host-proc-sys-net\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.660261 kubelet[2750]: I1216 13:05:11.659017 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da8d6600-f98f-438f-8268-388ae32c6ee2-cilium-config-path\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.660261 kubelet[2750]: I1216 13:05:11.659036 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-cni-path\") pod \"da8d6600-f98f-438f-8268-388ae32c6ee2\" (UID: \"da8d6600-f98f-438f-8268-388ae32c6ee2\") "
Dec 16 13:05:11.660261 kubelet[2750]: I1216 13:05:11.659090 2750 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-xtables-lock\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\""
Dec 16 13:05:11.660261 kubelet[2750]: I1216 13:05:11.659152 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-cni-path" (OuterVolumeSpecName: "cni-path") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:05:11.660261 kubelet[2750]: I1216 13:05:11.660045 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:05:11.660598 kubelet[2750]: I1216 13:05:11.660123 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:05:11.661759 kubelet[2750]: I1216 13:05:11.661722 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:05:11.661759 kubelet[2750]: I1216 13:05:11.661760 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:05:11.662308 kubelet[2750]: I1216 13:05:11.662284 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:05:11.662420 kubelet[2750]: I1216 13:05:11.662402 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:05:11.663605 kubelet[2750]: I1216 13:05:11.663538 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-hostproc" (OuterVolumeSpecName: "hostproc") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:05:11.665135 kubelet[2750]: I1216 13:05:11.665001 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:05:11.669040 kubelet[2750]: I1216 13:05:11.667353 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da8d6600-f98f-438f-8268-388ae32c6ee2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:05:11.669040 kubelet[2750]: I1216 13:05:11.668681 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d2016c-3ccc-4917-afdb-d3fc7be17e33-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c1d2016c-3ccc-4917-afdb-d3fc7be17e33" (UID: "c1d2016c-3ccc-4917-afdb-d3fc7be17e33"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:05:11.672312 kubelet[2750]: I1216 13:05:11.672237 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8d6600-f98f-438f-8268-388ae32c6ee2-kube-api-access-9vltp" (OuterVolumeSpecName: "kube-api-access-9vltp") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "kube-api-access-9vltp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:05:11.672476 kubelet[2750]: I1216 13:05:11.672422 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8d6600-f98f-438f-8268-388ae32c6ee2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 16 13:05:11.672854 kubelet[2750]: I1216 13:05:11.672831 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8d6600-f98f-438f-8268-388ae32c6ee2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "da8d6600-f98f-438f-8268-388ae32c6ee2" (UID: "da8d6600-f98f-438f-8268-388ae32c6ee2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:05:11.672960 kubelet[2750]: I1216 13:05:11.672833 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d2016c-3ccc-4917-afdb-d3fc7be17e33-kube-api-access-g4l84" (OuterVolumeSpecName: "kube-api-access-g4l84") pod "c1d2016c-3ccc-4917-afdb-d3fc7be17e33" (UID: "c1d2016c-3ccc-4917-afdb-d3fc7be17e33"). InnerVolumeSpecName "kube-api-access-g4l84". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:05:11.759367 kubelet[2750]: I1216 13:05:11.759311 2750 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-etc-cni-netd\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\""
Dec 16 13:05:11.759367 kubelet[2750]: I1216 13:05:11.759353 2750 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-cilium-run\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\""
Dec 16 13:05:11.759367 kubelet[2750]: I1216 13:05:11.759372 2750 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vltp\" (UniqueName: \"kubernetes.io/projected/da8d6600-f98f-438f-8268-388ae32c6ee2-kube-api-access-9vltp\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\""
Dec 16 13:05:11.759668 kubelet[2750]: I1216 13:05:11.759391 2750 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName:
\"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-hostproc\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\"" Dec 16 13:05:11.759668 kubelet[2750]: I1216 13:05:11.759409 2750 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-cilium-cgroup\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\"" Dec 16 13:05:11.759668 kubelet[2750]: I1216 13:05:11.759428 2750 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da8d6600-f98f-438f-8268-388ae32c6ee2-clustermesh-secrets\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\"" Dec 16 13:05:11.759668 kubelet[2750]: I1216 13:05:11.759440 2750 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-host-proc-sys-kernel\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\"" Dec 16 13:05:11.759668 kubelet[2750]: I1216 13:05:11.759456 2750 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-lib-modules\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\"" Dec 16 13:05:11.759668 kubelet[2750]: I1216 13:05:11.759475 2750 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g4l84\" (UniqueName: \"kubernetes.io/projected/c1d2016c-3ccc-4917-afdb-d3fc7be17e33-kube-api-access-g4l84\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\"" Dec 16 13:05:11.759668 kubelet[2750]: I1216 13:05:11.759492 2750 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-bpf-maps\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\"" Dec 16 13:05:11.759668 kubelet[2750]: I1216 13:05:11.759509 2750 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-host-proc-sys-net\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\"" Dec 16 13:05:11.759820 kubelet[2750]: I1216 13:05:11.759519 2750 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da8d6600-f98f-438f-8268-388ae32c6ee2-cilium-config-path\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\"" Dec 16 13:05:11.759820 kubelet[2750]: I1216 13:05:11.759528 2750 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da8d6600-f98f-438f-8268-388ae32c6ee2-cni-path\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\"" Dec 16 13:05:11.759820 kubelet[2750]: I1216 13:05:11.759538 2750 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1d2016c-3ccc-4917-afdb-d3fc7be17e33-cilium-config-path\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\"" Dec 16 13:05:11.759820 kubelet[2750]: I1216 13:05:11.759546 2750 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da8d6600-f98f-438f-8268-388ae32c6ee2-hubble-tls\") on node \"ci-4459-2-2-2-e3531eb256\" DevicePath \"\"" Dec 16 13:05:12.051271 kubelet[2750]: I1216 13:05:12.050805 2750 scope.go:117] "RemoveContainer" containerID="5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764" Dec 16 13:05:12.051136 systemd[1]: Removed slice kubepods-besteffort-podc1d2016c_3ccc_4917_afdb_d3fc7be17e33.slice - libcontainer container kubepods-besteffort-podc1d2016c_3ccc_4917_afdb_d3fc7be17e33.slice. 
Dec 16 13:05:12.057054 containerd[1595]: time="2025-12-16T13:05:12.056992476Z" level=info msg="RemoveContainer for \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\"" Dec 16 13:05:12.067764 containerd[1595]: time="2025-12-16T13:05:12.067687994Z" level=info msg="RemoveContainer for \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\" returns successfully" Dec 16 13:05:12.069743 kubelet[2750]: I1216 13:05:12.069621 2750 scope.go:117] "RemoveContainer" containerID="5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764" Dec 16 13:05:12.070139 containerd[1595]: time="2025-12-16T13:05:12.070012172Z" level=error msg="ContainerStatus for \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\": not found" Dec 16 13:05:12.072653 kubelet[2750]: E1216 13:05:12.072609 2750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\": not found" containerID="5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764" Dec 16 13:05:12.072764 kubelet[2750]: I1216 13:05:12.072649 2750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764"} err="failed to get container status \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\": rpc error: code = NotFound desc = an error occurred when try to find container \"5879d066c39248f338a6b3a2954b4fb313ae25059fe6ca60f652f60894071764\": not found" Dec 16 13:05:12.072764 kubelet[2750]: I1216 13:05:12.072695 2750 scope.go:117] "RemoveContainer" containerID="c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3" Dec 16 13:05:12.080951 
containerd[1595]: time="2025-12-16T13:05:12.080808605Z" level=info msg="RemoveContainer for \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\"" Dec 16 13:05:12.086185 systemd[1]: Removed slice kubepods-burstable-podda8d6600_f98f_438f_8268_388ae32c6ee2.slice - libcontainer container kubepods-burstable-podda8d6600_f98f_438f_8268_388ae32c6ee2.slice. Dec 16 13:05:12.086453 systemd[1]: kubepods-burstable-podda8d6600_f98f_438f_8268_388ae32c6ee2.slice: Consumed 6.287s CPU time, 194.5M memory peak, 73M read from disk, 14.7M written to disk. Dec 16 13:05:12.090589 containerd[1595]: time="2025-12-16T13:05:12.090533800Z" level=info msg="RemoveContainer for \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\" returns successfully" Dec 16 13:05:12.091353 kubelet[2750]: I1216 13:05:12.091204 2750 scope.go:117] "RemoveContainer" containerID="dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b" Dec 16 13:05:12.095630 containerd[1595]: time="2025-12-16T13:05:12.095165094Z" level=info msg="RemoveContainer for \"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b\"" Dec 16 13:05:12.101938 containerd[1595]: time="2025-12-16T13:05:12.101899758Z" level=info msg="RemoveContainer for \"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b\" returns successfully" Dec 16 13:05:12.102392 kubelet[2750]: I1216 13:05:12.102342 2750 scope.go:117] "RemoveContainer" containerID="e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e" Dec 16 13:05:12.106247 containerd[1595]: time="2025-12-16T13:05:12.106204949Z" level=info msg="RemoveContainer for \"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e\"" Dec 16 13:05:12.115121 containerd[1595]: time="2025-12-16T13:05:12.115029255Z" level=info msg="RemoveContainer for \"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e\" returns successfully" Dec 16 13:05:12.118243 kubelet[2750]: I1216 13:05:12.118089 2750 scope.go:117] "RemoveContainer" 
containerID="52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa" Dec 16 13:05:12.125643 containerd[1595]: time="2025-12-16T13:05:12.125337972Z" level=info msg="RemoveContainer for \"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa\"" Dec 16 13:05:12.130150 containerd[1595]: time="2025-12-16T13:05:12.130095612Z" level=info msg="RemoveContainer for \"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa\" returns successfully" Dec 16 13:05:12.131374 kubelet[2750]: I1216 13:05:12.130354 2750 scope.go:117] "RemoveContainer" containerID="adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291" Dec 16 13:05:12.132200 containerd[1595]: time="2025-12-16T13:05:12.132171248Z" level=info msg="RemoveContainer for \"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291\"" Dec 16 13:05:12.135807 containerd[1595]: time="2025-12-16T13:05:12.135778585Z" level=info msg="RemoveContainer for \"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291\" returns successfully" Dec 16 13:05:12.136077 kubelet[2750]: I1216 13:05:12.136046 2750 scope.go:117] "RemoveContainer" containerID="c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3" Dec 16 13:05:12.136649 containerd[1595]: time="2025-12-16T13:05:12.136514022Z" level=error msg="ContainerStatus for \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\": not found" Dec 16 13:05:12.136810 kubelet[2750]: E1216 13:05:12.136778 2750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\": not found" containerID="c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3" Dec 16 13:05:12.136950 kubelet[2750]: I1216 13:05:12.136919 
2750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3"} err="failed to get container status \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c360f21c62d79dac1dc9ef2c8ef1491460ecf0f428c857a0ac2ab9593ca59eb3\": not found" Dec 16 13:05:12.137173 kubelet[2750]: I1216 13:05:12.137046 2750 scope.go:117] "RemoveContainer" containerID="dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b" Dec 16 13:05:12.137532 containerd[1595]: time="2025-12-16T13:05:12.137486298Z" level=error msg="ContainerStatus for \"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b\": not found" Dec 16 13:05:12.138157 kubelet[2750]: E1216 13:05:12.137968 2750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b\": not found" containerID="dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b" Dec 16 13:05:12.138157 kubelet[2750]: I1216 13:05:12.138082 2750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b"} err="failed to get container status \"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc0e0721965f32615bcbcde1c09e81c091dfee12d1af12b82258178634ec5d8b\": not found" Dec 16 13:05:12.138157 kubelet[2750]: I1216 13:05:12.138106 2750 scope.go:117] "RemoveContainer" 
containerID="e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e" Dec 16 13:05:12.138710 containerd[1595]: time="2025-12-16T13:05:12.138667670Z" level=error msg="ContainerStatus for \"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e\": not found" Dec 16 13:05:12.139050 kubelet[2750]: E1216 13:05:12.139020 2750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e\": not found" containerID="e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e" Dec 16 13:05:12.139490 kubelet[2750]: I1216 13:05:12.139173 2750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e"} err="failed to get container status \"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e92596cfdc5dd67d5bf55f7a97e144d2f0ace1129065e0803074653d4864c73e\": not found" Dec 16 13:05:12.139490 kubelet[2750]: I1216 13:05:12.139200 2750 scope.go:117] "RemoveContainer" containerID="52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa" Dec 16 13:05:12.139647 containerd[1595]: time="2025-12-16T13:05:12.139405651Z" level=error msg="ContainerStatus for \"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa\": not found" Dec 16 13:05:12.139703 kubelet[2750]: E1216 13:05:12.139624 2750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa\": not found" containerID="52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa" Dec 16 13:05:12.139703 kubelet[2750]: I1216 13:05:12.139661 2750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa"} err="failed to get container status \"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"52e87d71b380402b5c0dd645dd8dc278b139c824d83ab71bf6981b94ac1094fa\": not found" Dec 16 13:05:12.139703 kubelet[2750]: I1216 13:05:12.139695 2750 scope.go:117] "RemoveContainer" containerID="adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291" Dec 16 13:05:12.140343 containerd[1595]: time="2025-12-16T13:05:12.140129726Z" level=error msg="ContainerStatus for \"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291\": not found" Dec 16 13:05:12.140419 kubelet[2750]: E1216 13:05:12.140288 2750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291\": not found" containerID="adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291" Dec 16 13:05:12.140419 kubelet[2750]: I1216 13:05:12.140315 2750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291"} err="failed to get container status \"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"adb1c165ea312cf3a87cf133e7723d8fa722281c2a70867fb53d59630bee7291\": not found" Dec 16 13:05:12.325837 systemd[1]: var-lib-kubelet-pods-c1d2016c\x2d3ccc\x2d4917\x2dafdb\x2dd3fc7be17e33-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg4l84.mount: Deactivated successfully. Dec 16 13:05:12.325974 systemd[1]: var-lib-kubelet-pods-da8d6600\x2df98f\x2d438f\x2d8268\x2d388ae32c6ee2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9vltp.mount: Deactivated successfully. Dec 16 13:05:12.326048 systemd[1]: var-lib-kubelet-pods-da8d6600\x2df98f\x2d438f\x2d8268\x2d388ae32c6ee2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 16 13:05:12.326128 systemd[1]: var-lib-kubelet-pods-da8d6600\x2df98f\x2d438f\x2d8268\x2d388ae32c6ee2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 16 13:05:13.358838 sshd[4272]: Connection closed by 139.178.89.65 port 34554 Dec 16 13:05:13.359138 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:13.364050 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:05:13.364926 systemd[1]: sshd@18-77.42.28.57:22-139.178.89.65:34554.service: Deactivated successfully. Dec 16 13:05:13.367645 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:05:13.369614 systemd-logind[1560]: Removed session 19. Dec 16 13:05:13.507962 systemd[1]: Started sshd@19-77.42.28.57:22-139.178.89.65:59696.service - OpenSSH per-connection server daemon (139.178.89.65:59696). 
Dec 16 13:05:13.605706 kubelet[2750]: I1216 13:05:13.605654 2750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1d2016c-3ccc-4917-afdb-d3fc7be17e33" path="/var/lib/kubelet/pods/c1d2016c-3ccc-4917-afdb-d3fc7be17e33/volumes" Dec 16 13:05:13.606222 kubelet[2750]: I1216 13:05:13.606195 2750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8d6600-f98f-438f-8268-388ae32c6ee2" path="/var/lib/kubelet/pods/da8d6600-f98f-438f-8268-388ae32c6ee2/volumes" Dec 16 13:05:14.497457 sshd[4421]: Accepted publickey for core from 139.178.89.65 port 59696 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc Dec 16 13:05:14.498776 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:14.504000 systemd-logind[1560]: New session 20 of user core. Dec 16 13:05:14.509780 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:05:15.498255 systemd[1]: Created slice kubepods-burstable-pod33f7e711_2b31_4bc2_9c98_f6c672dbd0ca.slice - libcontainer container kubepods-burstable-pod33f7e711_2b31_4bc2_9c98_f6c672dbd0ca.slice. 
Dec 16 13:05:15.582494 kubelet[2750]: I1216 13:05:15.582442 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-host-proc-sys-net\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.582494 kubelet[2750]: I1216 13:05:15.582487 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-xtables-lock\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.582494 kubelet[2750]: I1216 13:05:15.582504 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-cilium-run\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.582494 kubelet[2750]: I1216 13:05:15.582516 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-cilium-cgroup\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.583506 kubelet[2750]: I1216 13:05:15.582545 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-hostproc\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.583506 kubelet[2750]: I1216 13:05:15.582582 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-cni-path\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.583506 kubelet[2750]: I1216 13:05:15.582597 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-cilium-config-path\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.583506 kubelet[2750]: I1216 13:05:15.582657 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvmcg\" (UniqueName: \"kubernetes.io/projected/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-kube-api-access-fvmcg\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.583506 kubelet[2750]: I1216 13:05:15.582694 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-etc-cni-netd\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.583506 kubelet[2750]: I1216 13:05:15.582714 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-lib-modules\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.583764 kubelet[2750]: I1216 13:05:15.582730 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-clustermesh-secrets\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.583764 kubelet[2750]: I1216 13:05:15.582762 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-cilium-ipsec-secrets\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.583764 kubelet[2750]: I1216 13:05:15.582804 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-host-proc-sys-kernel\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.583764 kubelet[2750]: I1216 13:05:15.582832 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-bpf-maps\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.583764 kubelet[2750]: I1216 13:05:15.582862 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/33f7e711-2b31-4bc2-9c98-f6c672dbd0ca-hubble-tls\") pod \"cilium-vdfqv\" (UID: \"33f7e711-2b31-4bc2-9c98-f6c672dbd0ca\") " pod="kube-system/cilium-vdfqv" Dec 16 13:05:15.683582 sshd[4424]: Connection closed by 139.178.89.65 port 59696 Dec 16 13:05:15.684711 sshd-session[4421]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:15.708610 kubelet[2750]: E1216 13:05:15.707757 2750 kubelet.go:3117] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 13:05:15.708715 systemd[1]: sshd@19-77.42.28.57:22-139.178.89.65:59696.service: Deactivated successfully. Dec 16 13:05:15.712233 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:05:15.716677 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:05:15.725932 systemd-logind[1560]: Removed session 20. Dec 16 13:05:15.802369 containerd[1595]: time="2025-12-16T13:05:15.802271450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vdfqv,Uid:33f7e711-2b31-4bc2-9c98-f6c672dbd0ca,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:15.819123 containerd[1595]: time="2025-12-16T13:05:15.819076869Z" level=info msg="connecting to shim a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a" address="unix:///run/containerd/s/d8572225efb5a2c86163c3c2e141830e5fd3ec9f866f2f3524edc76b354c710d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:15.843798 systemd[1]: Started cri-containerd-a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a.scope - libcontainer container a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a. Dec 16 13:05:15.852897 systemd[1]: Started sshd@20-77.42.28.57:22-139.178.89.65:59700.service - OpenSSH per-connection server daemon (139.178.89.65:59700). 
Dec 16 13:05:15.884496 containerd[1595]: time="2025-12-16T13:05:15.884406806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vdfqv,Uid:33f7e711-2b31-4bc2-9c98-f6c672dbd0ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a\"" Dec 16 13:05:15.890778 containerd[1595]: time="2025-12-16T13:05:15.890726808Z" level=info msg="CreateContainer within sandbox \"a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:05:15.898469 containerd[1595]: time="2025-12-16T13:05:15.898422864Z" level=info msg="Container d5c5857cc5a3ec46aa722e13ff35d99a211b540ee204f297c2d1f7641aeb8f2f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:15.902584 containerd[1595]: time="2025-12-16T13:05:15.902496134Z" level=info msg="CreateContainer within sandbox \"a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d5c5857cc5a3ec46aa722e13ff35d99a211b540ee204f297c2d1f7641aeb8f2f\"" Dec 16 13:05:15.904167 containerd[1595]: time="2025-12-16T13:05:15.904046316Z" level=info msg="StartContainer for \"d5c5857cc5a3ec46aa722e13ff35d99a211b540ee204f297c2d1f7641aeb8f2f\"" Dec 16 13:05:15.905718 containerd[1595]: time="2025-12-16T13:05:15.905638430Z" level=info msg="connecting to shim d5c5857cc5a3ec46aa722e13ff35d99a211b540ee204f297c2d1f7641aeb8f2f" address="unix:///run/containerd/s/d8572225efb5a2c86163c3c2e141830e5fd3ec9f866f2f3524edc76b354c710d" protocol=ttrpc version=3 Dec 16 13:05:15.921724 systemd[1]: Started cri-containerd-d5c5857cc5a3ec46aa722e13ff35d99a211b540ee204f297c2d1f7641aeb8f2f.scope - libcontainer container d5c5857cc5a3ec46aa722e13ff35d99a211b540ee204f297c2d1f7641aeb8f2f. 
Dec 16 13:05:15.954990 containerd[1595]: time="2025-12-16T13:05:15.954947186Z" level=info msg="StartContainer for \"d5c5857cc5a3ec46aa722e13ff35d99a211b540ee204f297c2d1f7641aeb8f2f\" returns successfully"
Dec 16 13:05:15.967504 systemd[1]: cri-containerd-d5c5857cc5a3ec46aa722e13ff35d99a211b540ee204f297c2d1f7641aeb8f2f.scope: Deactivated successfully.
Dec 16 13:05:15.967804 systemd[1]: cri-containerd-d5c5857cc5a3ec46aa722e13ff35d99a211b540ee204f297c2d1f7641aeb8f2f.scope: Consumed 19ms CPU time, 9.6M memory peak, 3.2M read from disk.
Dec 16 13:05:15.972416 containerd[1595]: time="2025-12-16T13:05:15.972248999Z" level=info msg="received container exit event container_id:\"d5c5857cc5a3ec46aa722e13ff35d99a211b540ee204f297c2d1f7641aeb8f2f\" id:\"d5c5857cc5a3ec46aa722e13ff35d99a211b540ee204f297c2d1f7641aeb8f2f\" pid:4501 exited_at:{seconds:1765890315 nanos:969946268}"
Dec 16 13:05:16.083749 containerd[1595]: time="2025-12-16T13:05:16.083429961Z" level=info msg="CreateContainer within sandbox \"a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 13:05:16.090604 containerd[1595]: time="2025-12-16T13:05:16.090536749Z" level=info msg="Container 846794f201e53a18ba6f6ba8ccb3fc6bc3b87524c59d38c859ecd06109af62e2: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:05:16.108538 containerd[1595]: time="2025-12-16T13:05:16.108490407Z" level=info msg="CreateContainer within sandbox \"a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"846794f201e53a18ba6f6ba8ccb3fc6bc3b87524c59d38c859ecd06109af62e2\""
Dec 16 13:05:16.110440 containerd[1595]: time="2025-12-16T13:05:16.110414823Z" level=info msg="StartContainer for \"846794f201e53a18ba6f6ba8ccb3fc6bc3b87524c59d38c859ecd06109af62e2\""
Dec 16 13:05:16.111595 containerd[1595]: time="2025-12-16T13:05:16.111527667Z" level=info msg="connecting to shim 846794f201e53a18ba6f6ba8ccb3fc6bc3b87524c59d38c859ecd06109af62e2" address="unix:///run/containerd/s/d8572225efb5a2c86163c3c2e141830e5fd3ec9f866f2f3524edc76b354c710d" protocol=ttrpc version=3
Dec 16 13:05:16.133712 systemd[1]: Started cri-containerd-846794f201e53a18ba6f6ba8ccb3fc6bc3b87524c59d38c859ecd06109af62e2.scope - libcontainer container 846794f201e53a18ba6f6ba8ccb3fc6bc3b87524c59d38c859ecd06109af62e2.
Dec 16 13:05:16.161017 containerd[1595]: time="2025-12-16T13:05:16.160935410Z" level=info msg="StartContainer for \"846794f201e53a18ba6f6ba8ccb3fc6bc3b87524c59d38c859ecd06109af62e2\" returns successfully"
Dec 16 13:05:16.168718 systemd[1]: cri-containerd-846794f201e53a18ba6f6ba8ccb3fc6bc3b87524c59d38c859ecd06109af62e2.scope: Deactivated successfully.
Dec 16 13:05:16.169453 systemd[1]: cri-containerd-846794f201e53a18ba6f6ba8ccb3fc6bc3b87524c59d38c859ecd06109af62e2.scope: Consumed 17ms CPU time, 7.4M memory peak, 2.2M read from disk.
Dec 16 13:05:16.169764 containerd[1595]: time="2025-12-16T13:05:16.169735707Z" level=info msg="received container exit event container_id:\"846794f201e53a18ba6f6ba8ccb3fc6bc3b87524c59d38c859ecd06109af62e2\" id:\"846794f201e53a18ba6f6ba8ccb3fc6bc3b87524c59d38c859ecd06109af62e2\" pid:4546 exited_at:{seconds:1765890316 nanos:169112261}"
Dec 16 13:05:16.841418 sshd[4479]: Accepted publickey for core from 139.178.89.65 port 59700 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:05:16.842851 sshd-session[4479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:16.848734 systemd-logind[1560]: New session 21 of user core.
Dec 16 13:05:16.860794 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 13:05:17.092428 containerd[1595]: time="2025-12-16T13:05:17.092264997Z" level=info msg="CreateContainer within sandbox \"a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 13:05:17.113619 containerd[1595]: time="2025-12-16T13:05:17.112088318Z" level=info msg="Container 0f14d80c0c3ea0361d4aba96e8b5a4ff63e57f9addcbd167f87685559b9b84b1: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:05:17.134460 containerd[1595]: time="2025-12-16T13:05:17.134350989Z" level=info msg="CreateContainer within sandbox \"a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0f14d80c0c3ea0361d4aba96e8b5a4ff63e57f9addcbd167f87685559b9b84b1\""
Dec 16 13:05:17.135987 containerd[1595]: time="2025-12-16T13:05:17.135909626Z" level=info msg="StartContainer for \"0f14d80c0c3ea0361d4aba96e8b5a4ff63e57f9addcbd167f87685559b9b84b1\""
Dec 16 13:05:17.140918 containerd[1595]: time="2025-12-16T13:05:17.140848022Z" level=info msg="connecting to shim 0f14d80c0c3ea0361d4aba96e8b5a4ff63e57f9addcbd167f87685559b9b84b1" address="unix:///run/containerd/s/d8572225efb5a2c86163c3c2e141830e5fd3ec9f866f2f3524edc76b354c710d" protocol=ttrpc version=3
Dec 16 13:05:17.186876 systemd[1]: Started cri-containerd-0f14d80c0c3ea0361d4aba96e8b5a4ff63e57f9addcbd167f87685559b9b84b1.scope - libcontainer container 0f14d80c0c3ea0361d4aba96e8b5a4ff63e57f9addcbd167f87685559b9b84b1.
Dec 16 13:05:17.295055 containerd[1595]: time="2025-12-16T13:05:17.294970847Z" level=info msg="StartContainer for \"0f14d80c0c3ea0361d4aba96e8b5a4ff63e57f9addcbd167f87685559b9b84b1\" returns successfully"
Dec 16 13:05:17.298841 systemd[1]: cri-containerd-0f14d80c0c3ea0361d4aba96e8b5a4ff63e57f9addcbd167f87685559b9b84b1.scope: Deactivated successfully.
Dec 16 13:05:17.302126 containerd[1595]: time="2025-12-16T13:05:17.302083691Z" level=info msg="received container exit event container_id:\"0f14d80c0c3ea0361d4aba96e8b5a4ff63e57f9addcbd167f87685559b9b84b1\" id:\"0f14d80c0c3ea0361d4aba96e8b5a4ff63e57f9addcbd167f87685559b9b84b1\" pid:4590 exited_at:{seconds:1765890317 nanos:301801946}"
Dec 16 13:05:17.327851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f14d80c0c3ea0361d4aba96e8b5a4ff63e57f9addcbd167f87685559b9b84b1-rootfs.mount: Deactivated successfully.
Dec 16 13:05:17.517950 sshd[4577]: Connection closed by 139.178.89.65 port 59700
Dec 16 13:05:17.519263 sshd-session[4479]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:17.525409 systemd[1]: sshd@20-77.42.28.57:22-139.178.89.65:59700.service: Deactivated successfully.
Dec 16 13:05:17.527809 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 13:05:17.531638 systemd-logind[1560]: Session 21 logged out. Waiting for processes to exit.
Dec 16 13:05:17.534645 systemd-logind[1560]: Removed session 21.
Dec 16 13:05:17.690756 systemd[1]: Started sshd@21-77.42.28.57:22-139.178.89.65:59710.service - OpenSSH per-connection server daemon (139.178.89.65:59710).
Dec 16 13:05:18.103550 containerd[1595]: time="2025-12-16T13:05:18.103386627Z" level=info msg="CreateContainer within sandbox \"a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 13:05:18.124902 containerd[1595]: time="2025-12-16T13:05:18.124746784Z" level=info msg="Container c0a774676d303e74154ea33b9eff4f9483a26950fe8f8e6687968222cda7dd6e: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:05:18.137415 containerd[1595]: time="2025-12-16T13:05:18.137351495Z" level=info msg="CreateContainer within sandbox \"a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c0a774676d303e74154ea33b9eff4f9483a26950fe8f8e6687968222cda7dd6e\""
Dec 16 13:05:18.138365 containerd[1595]: time="2025-12-16T13:05:18.138331270Z" level=info msg="StartContainer for \"c0a774676d303e74154ea33b9eff4f9483a26950fe8f8e6687968222cda7dd6e\""
Dec 16 13:05:18.141743 containerd[1595]: time="2025-12-16T13:05:18.141554845Z" level=info msg="connecting to shim c0a774676d303e74154ea33b9eff4f9483a26950fe8f8e6687968222cda7dd6e" address="unix:///run/containerd/s/d8572225efb5a2c86163c3c2e141830e5fd3ec9f866f2f3524edc76b354c710d" protocol=ttrpc version=3
Dec 16 13:05:18.177933 systemd[1]: Started cri-containerd-c0a774676d303e74154ea33b9eff4f9483a26950fe8f8e6687968222cda7dd6e.scope - libcontainer container c0a774676d303e74154ea33b9eff4f9483a26950fe8f8e6687968222cda7dd6e.
Dec 16 13:05:18.227288 systemd[1]: cri-containerd-c0a774676d303e74154ea33b9eff4f9483a26950fe8f8e6687968222cda7dd6e.scope: Deactivated successfully.
Dec 16 13:05:18.234864 containerd[1595]: time="2025-12-16T13:05:18.234793624Z" level=info msg="received container exit event container_id:\"c0a774676d303e74154ea33b9eff4f9483a26950fe8f8e6687968222cda7dd6e\" id:\"c0a774676d303e74154ea33b9eff4f9483a26950fe8f8e6687968222cda7dd6e\" pid:4639 exited_at:{seconds:1765890318 nanos:233694477}"
Dec 16 13:05:18.250012 containerd[1595]: time="2025-12-16T13:05:18.249941163Z" level=info msg="StartContainer for \"c0a774676d303e74154ea33b9eff4f9483a26950fe8f8e6687968222cda7dd6e\" returns successfully"
Dec 16 13:05:18.274547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0a774676d303e74154ea33b9eff4f9483a26950fe8f8e6687968222cda7dd6e-rootfs.mount: Deactivated successfully.
Dec 16 13:05:18.685484 sshd[4624]: Accepted publickey for core from 139.178.89.65 port 59710 ssh2: RSA SHA256:ZUC5+jwMPGmdjOY75CPCzVYpIXnBtNPXtAIGEYlroCc
Dec 16 13:05:18.687882 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:18.695734 systemd-logind[1560]: New session 22 of user core.
Dec 16 13:05:18.707809 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 13:05:19.109552 containerd[1595]: time="2025-12-16T13:05:19.109500869Z" level=info msg="CreateContainer within sandbox \"a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 13:05:19.126606 containerd[1595]: time="2025-12-16T13:05:19.125216193Z" level=info msg="Container ca471ca424f00e3834ce4ddfc75d9c75092c43c7965f5a75ea8ccb3b0dbcb9a6: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:05:19.141829 containerd[1595]: time="2025-12-16T13:05:19.141782564Z" level=info msg="CreateContainer within sandbox \"a85343991212927919e8fff9f541c14075af5d804d07f7c7117a84b06fffc64a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ca471ca424f00e3834ce4ddfc75d9c75092c43c7965f5a75ea8ccb3b0dbcb9a6\""
Dec 16 13:05:19.142587 containerd[1595]: time="2025-12-16T13:05:19.142519107Z" level=info msg="StartContainer for \"ca471ca424f00e3834ce4ddfc75d9c75092c43c7965f5a75ea8ccb3b0dbcb9a6\""
Dec 16 13:05:19.143654 containerd[1595]: time="2025-12-16T13:05:19.143581111Z" level=info msg="connecting to shim ca471ca424f00e3834ce4ddfc75d9c75092c43c7965f5a75ea8ccb3b0dbcb9a6" address="unix:///run/containerd/s/d8572225efb5a2c86163c3c2e141830e5fd3ec9f866f2f3524edc76b354c710d" protocol=ttrpc version=3
Dec 16 13:05:19.173737 systemd[1]: Started cri-containerd-ca471ca424f00e3834ce4ddfc75d9c75092c43c7965f5a75ea8ccb3b0dbcb9a6.scope - libcontainer container ca471ca424f00e3834ce4ddfc75d9c75092c43c7965f5a75ea8ccb3b0dbcb9a6.
Dec 16 13:05:19.238995 kubelet[2750]: I1216 13:05:19.238334 2750 setters.go:618] "Node became not ready" node="ci-4459-2-2-2-e3531eb256" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T13:05:19Z","lastTransitionTime":"2025-12-16T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 16 13:05:19.257007 containerd[1595]: time="2025-12-16T13:05:19.255420045Z" level=info msg="StartContainer for \"ca471ca424f00e3834ce4ddfc75d9c75092c43c7965f5a75ea8ccb3b0dbcb9a6\" returns successfully"
Dec 16 13:05:19.793603 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Dec 16 13:05:20.140460 kubelet[2750]: I1216 13:05:20.137819 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vdfqv" podStartSLOduration=5.137801518 podStartE2EDuration="5.137801518s" podCreationTimestamp="2025-12-16 13:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:20.135912778 +0000 UTC m=+144.633853120" watchObservedRunningTime="2025-12-16 13:05:20.137801518 +0000 UTC m=+144.635741870"
Dec 16 13:05:22.819958 systemd-networkd[1443]: lxc_health: Link UP
Dec 16 13:05:22.820215 systemd-networkd[1443]: lxc_health: Gained carrier
Dec 16 13:05:24.647942 systemd-networkd[1443]: lxc_health: Gained IPv6LL
Dec 16 13:05:28.408290 sshd[4664]: Connection closed by 139.178.89.65 port 59710
Dec 16 13:05:28.409480 sshd-session[4624]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:28.421198 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit.
Dec 16 13:05:28.421872 systemd[1]: sshd@21-77.42.28.57:22-139.178.89.65:59710.service: Deactivated successfully.
Dec 16 13:05:28.424043 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 13:05:28.426281 systemd-logind[1560]: Removed session 22.
Dec 16 13:05:47.606713 systemd[1]: cri-containerd-8895912bd42b24d0c1f2addac53944a197bfb769afe90d4f1b2c8c4130af6297.scope: Deactivated successfully.
Dec 16 13:05:47.608001 systemd[1]: cri-containerd-8895912bd42b24d0c1f2addac53944a197bfb769afe90d4f1b2c8c4130af6297.scope: Consumed 3.192s CPU time, 71M memory peak, 19.2M read from disk.
Dec 16 13:05:47.612990 containerd[1595]: time="2025-12-16T13:05:47.612918481Z" level=info msg="received container exit event container_id:\"8895912bd42b24d0c1f2addac53944a197bfb769afe90d4f1b2c8c4130af6297\" id:\"8895912bd42b24d0c1f2addac53944a197bfb769afe90d4f1b2c8c4130af6297\" pid:2598 exit_status:1 exited_at:{seconds:1765890347 nanos:612093129}"
Dec 16 13:05:47.649909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8895912bd42b24d0c1f2addac53944a197bfb769afe90d4f1b2c8c4130af6297-rootfs.mount: Deactivated successfully.
Dec 16 13:05:47.830699 kubelet[2750]: E1216 13:05:47.830461 2750 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39968->10.0.0.2:2379: read: connection timed out"
Dec 16 13:05:47.838713 systemd[1]: cri-containerd-c27f250ecbafe4bb780f572192197abde8c2e60b93a44df74414fa60c6320b9e.scope: Deactivated successfully.
Dec 16 13:05:47.839224 systemd[1]: cri-containerd-c27f250ecbafe4bb780f572192197abde8c2e60b93a44df74414fa60c6320b9e.scope: Consumed 1.884s CPU time, 30.8M memory peak, 10.4M read from disk.
Dec 16 13:05:47.847154 containerd[1595]: time="2025-12-16T13:05:47.846961942Z" level=info msg="received container exit event container_id:\"c27f250ecbafe4bb780f572192197abde8c2e60b93a44df74414fa60c6320b9e\" id:\"c27f250ecbafe4bb780f572192197abde8c2e60b93a44df74414fa60c6320b9e\" pid:2560 exit_status:1 exited_at:{seconds:1765890347 nanos:843985783}"
Dec 16 13:05:47.882026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c27f250ecbafe4bb780f572192197abde8c2e60b93a44df74414fa60c6320b9e-rootfs.mount: Deactivated successfully.
Dec 16 13:05:48.193519 kubelet[2750]: I1216 13:05:48.193379 2750 scope.go:117] "RemoveContainer" containerID="c27f250ecbafe4bb780f572192197abde8c2e60b93a44df74414fa60c6320b9e"
Dec 16 13:05:48.196701 kubelet[2750]: I1216 13:05:48.196646 2750 scope.go:117] "RemoveContainer" containerID="8895912bd42b24d0c1f2addac53944a197bfb769afe90d4f1b2c8c4130af6297"
Dec 16 13:05:48.199208 containerd[1595]: time="2025-12-16T13:05:48.199143830Z" level=info msg="CreateContainer within sandbox \"6a202f91305441f295b973ab999ab85c8559083ff26258fa587c3020440da2f5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 16 13:05:48.199770 containerd[1595]: time="2025-12-16T13:05:48.199642275Z" level=info msg="CreateContainer within sandbox \"6f3118b0b6895f33bf6a3582449105b4a5f2d9ba21a585dcb40a1afba9cfee0f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 16 13:05:48.228712 containerd[1595]: time="2025-12-16T13:05:48.226715834Z" level=info msg="Container 4c24ac146b9b33a8db2c57793c00c7caedbab07d0c8f3d9a563cdbf1042218f2: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:05:48.227303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704346589.mount: Deactivated successfully.
Dec 16 13:05:48.231676 containerd[1595]: time="2025-12-16T13:05:48.231638436Z" level=info msg="Container 74adf2c5cf2000345b654989cf3d7845556718c11d49a35805e07929dfd51f4c: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:05:48.243962 containerd[1595]: time="2025-12-16T13:05:48.243892050Z" level=info msg="CreateContainer within sandbox \"6f3118b0b6895f33bf6a3582449105b4a5f2d9ba21a585dcb40a1afba9cfee0f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4c24ac146b9b33a8db2c57793c00c7caedbab07d0c8f3d9a563cdbf1042218f2\""
Dec 16 13:05:48.244516 containerd[1595]: time="2025-12-16T13:05:48.244480759Z" level=info msg="StartContainer for \"4c24ac146b9b33a8db2c57793c00c7caedbab07d0c8f3d9a563cdbf1042218f2\""
Dec 16 13:05:48.246595 containerd[1595]: time="2025-12-16T13:05:48.245660371Z" level=info msg="connecting to shim 4c24ac146b9b33a8db2c57793c00c7caedbab07d0c8f3d9a563cdbf1042218f2" address="unix:///run/containerd/s/b1b26c5fc6f272c976a3d1974c5426c5b1d2bc79eb0469809d9a42aa176340cd" protocol=ttrpc version=3
Dec 16 13:05:48.246758 containerd[1595]: time="2025-12-16T13:05:48.246720105Z" level=info msg="CreateContainer within sandbox \"6a202f91305441f295b973ab999ab85c8559083ff26258fa587c3020440da2f5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"74adf2c5cf2000345b654989cf3d7845556718c11d49a35805e07929dfd51f4c\""
Dec 16 13:05:48.247391 containerd[1595]: time="2025-12-16T13:05:48.247342058Z" level=info msg="StartContainer for \"74adf2c5cf2000345b654989cf3d7845556718c11d49a35805e07929dfd51f4c\""
Dec 16 13:05:48.249533 containerd[1595]: time="2025-12-16T13:05:48.249495378Z" level=info msg="connecting to shim 74adf2c5cf2000345b654989cf3d7845556718c11d49a35805e07929dfd51f4c" address="unix:///run/containerd/s/0fc2bc1e27919f6f2e62dff5df2cc0c28866d548b437e416239017169291c9ea" protocol=ttrpc version=3
Dec 16 13:05:48.276807 systemd[1]: Started cri-containerd-4c24ac146b9b33a8db2c57793c00c7caedbab07d0c8f3d9a563cdbf1042218f2.scope - libcontainer container 4c24ac146b9b33a8db2c57793c00c7caedbab07d0c8f3d9a563cdbf1042218f2.
Dec 16 13:05:48.291751 systemd[1]: Started cri-containerd-74adf2c5cf2000345b654989cf3d7845556718c11d49a35805e07929dfd51f4c.scope - libcontainer container 74adf2c5cf2000345b654989cf3d7845556718c11d49a35805e07929dfd51f4c.
Dec 16 13:05:48.365240 containerd[1595]: time="2025-12-16T13:05:48.365200038Z" level=info msg="StartContainer for \"4c24ac146b9b33a8db2c57793c00c7caedbab07d0c8f3d9a563cdbf1042218f2\" returns successfully"
Dec 16 13:05:48.366801 containerd[1595]: time="2025-12-16T13:05:48.366768496Z" level=info msg="StartContainer for \"74adf2c5cf2000345b654989cf3d7845556718c11d49a35805e07929dfd51f4c\" returns successfully"
Dec 16 13:05:50.699521 kubelet[2750]: E1216 13:05:50.697815 2750 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39740->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-2-2-e3531eb256.1881b3e889df0534 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-2-2-e3531eb256,UID:d061f4c845943bfa636a9cf2d2f6cc4a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-2-e3531eb256,},FirstTimestamp:2025-12-16 13:05:40.232824116 +0000 UTC m=+164.730764468,LastTimestamp:2025-12-16 13:05:40.232824116 +0000 UTC m=+164.730764468,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-2-e3531eb256,}"
Dec 16 13:05:55.623432 containerd[1595]: time="2025-12-16T13:05:55.623387503Z" level=info msg="StopPodSandbox for \"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\""
Dec 16 13:05:55.623824 containerd[1595]: time="2025-12-16T13:05:55.623518814Z" level=info msg="TearDown network for sandbox \"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\" successfully"
Dec 16 13:05:55.623824 containerd[1595]: time="2025-12-16T13:05:55.623530135Z" level=info msg="StopPodSandbox for \"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\" returns successfully"
Dec 16 13:05:55.623905 containerd[1595]: time="2025-12-16T13:05:55.623880237Z" level=info msg="RemovePodSandbox for \"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\""
Dec 16 13:05:55.623949 containerd[1595]: time="2025-12-16T13:05:55.623906186Z" level=info msg="Forcibly stopping sandbox \"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\""
Dec 16 13:05:55.624019 containerd[1595]: time="2025-12-16T13:05:55.623966712Z" level=info msg="TearDown network for sandbox \"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\" successfully"
Dec 16 13:05:55.625117 containerd[1595]: time="2025-12-16T13:05:55.625076098Z" level=info msg="Ensure that sandbox 1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99 in task-service has been cleanup successfully"
Dec 16 13:05:55.629292 containerd[1595]: time="2025-12-16T13:05:55.629236559Z" level=info msg="RemovePodSandbox \"1770999403e483c4689d540a3a39c89606faf5562cb3108a8bb6d27ce5b14b99\" returns successfully"
Dec 16 13:05:55.629608 containerd[1595]: time="2025-12-16T13:05:55.629582452Z" level=info msg="StopPodSandbox for \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\""
Dec 16 13:05:55.629741 containerd[1595]: time="2025-12-16T13:05:55.629707191Z" level=info msg="TearDown network for sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" successfully"
Dec 16 13:05:55.629842 containerd[1595]: time="2025-12-16T13:05:55.629735226Z" level=info msg="StopPodSandbox for \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" returns successfully"
Dec 16 13:05:55.630139 containerd[1595]: time="2025-12-16T13:05:55.630087720Z" level=info msg="RemovePodSandbox for \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\""
Dec 16 13:05:55.630139 containerd[1595]: time="2025-12-16T13:05:55.630114873Z" level=info msg="Forcibly stopping sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\""
Dec 16 13:05:55.630218 containerd[1595]: time="2025-12-16T13:05:55.630172793Z" level=info msg="TearDown network for sandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" successfully"
Dec 16 13:05:55.631108 containerd[1595]: time="2025-12-16T13:05:55.631066466Z" level=info msg="Ensure that sandbox a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502 in task-service has been cleanup successfully"
Dec 16 13:05:55.634356 containerd[1595]: time="2025-12-16T13:05:55.634287989Z" level=info msg="RemovePodSandbox \"a0413a00d2546d04e0791acbc6ee7225211efa5bd2af3843e2e81f9bd60fc502\" returns successfully"
Dec 16 13:05:56.713467 kubelet[2750]: I1216 13:05:56.713419 2750 status_manager.go:895] "Failed to get status for pod" podUID="d061f4c845943bfa636a9cf2d2f6cc4a" pod="kube-system/kube-apiserver-ci-4459-2-2-2-e3531eb256" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39852->10.0.0.2:2379: read: connection timed out"
Dec 16 13:05:57.833232 kubelet[2750]: E1216 13:05:57.833182 2750 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4459-2-2-2-e3531eb256)"