Sep 16 04:53:15.802454 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025 Sep 16 04:53:15.802472 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 04:53:15.802478 kernel: BIOS-provided physical RAM map: Sep 16 04:53:15.802482 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 16 04:53:15.802486 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 16 04:53:15.802490 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 16 04:53:15.802496 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Sep 16 04:53:15.802500 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Sep 16 04:53:15.802504 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 16 04:53:15.802507 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 16 04:53:15.802511 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 16 04:53:15.802515 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 16 04:53:15.802519 kernel: NX (Execute Disable) protection: active Sep 16 04:53:15.802523 kernel: APIC: Static calls initialized Sep 16 04:53:15.802529 kernel: SMBIOS 2.8 present. Sep 16 04:53:15.802533 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Sep 16 04:53:15.802537 kernel: DMI: Memory slots populated: 1/1 Sep 16 04:53:15.802541 kernel: Hypervisor detected: KVM Sep 16 04:53:15.802545 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 16 04:53:15.802549 kernel: kvm-clock: using sched offset of 3950748641 cycles Sep 16 04:53:15.802554 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 16 04:53:15.802558 kernel: tsc: Detected 2399.998 MHz processor Sep 16 04:53:15.802564 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 16 04:53:15.802568 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 16 04:53:15.802573 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Sep 16 04:53:15.802577 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 16 04:53:15.802581 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 16 04:53:15.802585 kernel: Using GB pages for direct mapping Sep 16 04:53:15.802589 kernel: ACPI: Early table checksum verification disabled Sep 16 04:53:15.802594 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Sep 16 04:53:15.802598 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:53:15.802603 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:53:15.802607 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:53:15.802612 kernel: ACPI: FACS 0x000000007CFE0000 000040 Sep 16 04:53:15.802616 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:53:15.802620 kernel: ACPI: HPET 0x000000007CFE25F7 
000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:53:15.802624 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:53:15.802628 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:53:15.802632 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] Sep 16 04:53:15.802637 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] Sep 16 04:53:15.802644 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Sep 16 04:53:15.802648 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] Sep 16 04:53:15.802653 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] Sep 16 04:53:15.802668 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] Sep 16 04:53:15.802672 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] Sep 16 04:53:15.802677 kernel: No NUMA configuration found Sep 16 04:53:15.802682 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Sep 16 04:53:15.802686 kernel: NODE_DATA(0) allocated [mem 0x7cfd4dc0-0x7cfdbfff] Sep 16 04:53:15.802691 kernel: Zone ranges: Sep 16 04:53:15.802695 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 16 04:53:15.802699 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Sep 16 04:53:15.802704 kernel: Normal empty Sep 16 04:53:15.802708 kernel: Device empty Sep 16 04:53:15.802712 kernel: Movable zone start for each node Sep 16 04:53:15.802717 kernel: Early memory node ranges Sep 16 04:53:15.802722 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 16 04:53:15.802727 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Sep 16 04:53:15.802732 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Sep 16 04:53:15.802736 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 16 04:53:15.802741 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 16 04:53:15.802745 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 16 04:53:15.802750 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 16 04:53:15.802754 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 16 04:53:15.802759 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 16 04:53:15.802766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 16 04:53:15.802776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 16 04:53:15.802788 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 16 04:53:15.802798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 16 04:53:15.802806 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 16 04:53:15.802814 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 16 04:53:15.802819 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 16 04:53:15.802824 kernel: CPU topo: Max. logical packages: 1 Sep 16 04:53:15.802828 kernel: CPU topo: Max. logical dies: 1 Sep 16 04:53:15.802835 kernel: CPU topo: Max. dies per package: 1 Sep 16 04:53:15.802840 kernel: CPU topo: Max. threads per core: 1 Sep 16 04:53:15.802844 kernel: CPU topo: Num. cores per package: 2 Sep 16 04:53:15.802848 kernel: CPU topo: Num. 
threads per package: 2 Sep 16 04:53:15.802853 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 16 04:53:15.802857 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 16 04:53:15.802862 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 16 04:53:15.802866 kernel: Booting paravirtualized kernel on KVM Sep 16 04:53:15.802871 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 16 04:53:15.802875 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 16 04:53:15.802881 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 16 04:53:15.802886 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 16 04:53:15.802890 kernel: pcpu-alloc: [0] 0 1 Sep 16 04:53:15.802894 kernel: kvm-guest: PV spinlocks disabled, no host support Sep 16 04:53:15.802900 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 04:53:15.802905 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 16 04:53:15.802909 kernel: random: crng init done Sep 16 04:53:15.802914 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 16 04:53:15.802919 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 16 04:53:15.802924 kernel: Fallback order for Node 0: 0 Sep 16 04:53:15.802928 kernel: Built 1 zonelists, mobility grouping on. Total pages: 511866 Sep 16 04:53:15.802933 kernel: Policy zone: DMA32 Sep 16 04:53:15.802937 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 16 04:53:15.802941 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 16 04:53:15.802946 kernel: ftrace: allocating 40125 entries in 157 pages Sep 16 04:53:15.802950 kernel: ftrace: allocated 157 pages with 5 groups Sep 16 04:53:15.802954 kernel: Dynamic Preempt: voluntary Sep 16 04:53:15.802960 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 16 04:53:15.802965 kernel: rcu: RCU event tracing is enabled. Sep 16 04:53:15.802970 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 16 04:53:15.802975 kernel: Trampoline variant of Tasks RCU enabled. Sep 16 04:53:15.802979 kernel: Rude variant of Tasks RCU enabled. Sep 16 04:53:15.802984 kernel: Tracing variant of Tasks RCU enabled. Sep 16 04:53:15.802988 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 16 04:53:15.802993 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 16 04:53:15.802997 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:53:15.803003 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:53:15.803007 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:53:15.803012 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 16 04:53:15.803016 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Sep 16 04:53:15.803021 kernel: Console: colour VGA+ 80x25 Sep 16 04:53:15.803025 kernel: printk: legacy console [tty0] enabled Sep 16 04:53:15.803029 kernel: printk: legacy console [ttyS0] enabled Sep 16 04:53:15.803034 kernel: ACPI: Core revision 20240827 Sep 16 04:53:15.803039 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 16 04:53:15.803047 kernel: APIC: Switch to symmetric I/O mode setup Sep 16 04:53:15.803052 kernel: x2apic enabled Sep 16 04:53:15.803057 kernel: APIC: Switched APIC routing to: physical x2apic Sep 16 04:53:15.803062 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 16 04:53:15.803067 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x229835b7123, max_idle_ns: 440795242976 ns Sep 16 04:53:15.803072 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998) Sep 16 04:53:15.803077 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 16 04:53:15.803081 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 16 04:53:15.803086 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 16 04:53:15.803091 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 16 04:53:15.803096 kernel: Spectre V2 : Mitigation: Retpolines Sep 16 04:53:15.803101 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 16 04:53:15.803105 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 16 04:53:15.803110 kernel: active return thunk: retbleed_return_thunk Sep 16 04:53:15.803115 kernel: RETBleed: Mitigation: untrained return thunk Sep 16 04:53:15.803119 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 16 04:53:15.803124 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 16 04:53:15.803130 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 16 04:53:15.803135 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 16 04:53:15.803139 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 16 04:53:15.803144 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 16 04:53:15.803149 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 16 04:53:15.803153 kernel: Freeing SMP alternatives memory: 32K Sep 16 04:53:15.803158 kernel: pid_max: default: 32768 minimum: 301 Sep 16 04:53:15.803163 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 16 04:53:15.803167 kernel: landlock: Up and running. Sep 16 04:53:15.803173 kernel: SELinux: Initializing. Sep 16 04:53:15.803193 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 16 04:53:15.803198 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 16 04:53:15.803206 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 16 04:53:15.803219 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 16 04:53:15.803229 kernel: ... version: 0 Sep 16 04:53:15.803239 kernel: ... bit width: 48 Sep 16 04:53:15.803246 kernel: ... generic registers: 6 Sep 16 04:53:15.803252 kernel: ... value mask: 0000ffffffffffff Sep 16 04:53:15.803260 kernel: ... max period: 00007fffffffffff Sep 16 04:53:15.803265 kernel: ... fixed-purpose events: 0 Sep 16 04:53:15.803270 kernel: ... 
event mask: 000000000000003f Sep 16 04:53:15.803281 kernel: signal: max sigframe size: 1776 Sep 16 04:53:15.803287 kernel: rcu: Hierarchical SRCU implementation. Sep 16 04:53:15.803295 kernel: rcu: Max phase no-delay instances is 400. Sep 16 04:53:15.803303 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 16 04:53:15.803312 kernel: smp: Bringing up secondary CPUs ... Sep 16 04:53:15.803320 kernel: smpboot: x86: Booting SMP configuration: Sep 16 04:53:15.803328 kernel: .... node #0, CPUs: #1 Sep 16 04:53:15.803333 kernel: smp: Brought up 1 node, 2 CPUs Sep 16 04:53:15.803337 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS) Sep 16 04:53:15.803343 kernel: Memory: 1917788K/2047464K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 125140K reserved, 0K cma-reserved) Sep 16 04:53:15.803347 kernel: devtmpfs: initialized Sep 16 04:53:15.803352 kernel: x86/mm: Memory block size: 128MB Sep 16 04:53:15.803357 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 16 04:53:15.803362 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 16 04:53:15.803366 kernel: pinctrl core: initialized pinctrl subsystem Sep 16 04:53:15.803372 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 16 04:53:15.803377 kernel: audit: initializing netlink subsys (disabled) Sep 16 04:53:15.803382 kernel: audit: type=2000 audit(1757998393.268:1): state=initialized audit_enabled=0 res=1 Sep 16 04:53:15.803387 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 16 04:53:15.803391 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 16 04:53:15.803396 kernel: cpuidle: using governor menu Sep 16 04:53:15.803401 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 16 04:53:15.803406 kernel: dca service started, version 1.12.1 Sep 16 04:53:15.803410 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Sep 16 04:53:15.803416 kernel: PCI: Using configuration type 1 for base access Sep 16 04:53:15.803421 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 16 04:53:15.803426 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 16 04:53:15.803430 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 16 04:53:15.803435 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 16 04:53:15.803440 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 16 04:53:15.803444 kernel: ACPI: Added _OSI(Module Device) Sep 16 04:53:15.803449 kernel: ACPI: Added _OSI(Processor Device) Sep 16 04:53:15.803454 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 16 04:53:15.803460 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 16 04:53:15.803464 kernel: ACPI: Interpreter enabled Sep 16 04:53:15.803469 kernel: ACPI: PM: (supports S0 S5) Sep 16 04:53:15.803474 kernel: ACPI: Using IOAPIC for interrupt routing Sep 16 04:53:15.803479 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 16 04:53:15.803484 kernel: PCI: Using E820 reservations for host bridge windows Sep 16 04:53:15.803488 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 16 04:53:15.803493 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 16 04:53:15.803590 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 16 04:53:15.803640 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 16 04:53:15.803682 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 16 04:53:15.803689 kernel: PCI host bridge to bus 0000:00 Sep 16 04:53:15.803739 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 16 04:53:15.803798 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 16 04:53:15.803857 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 16 04:53:15.803899 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Sep 16 04:53:15.803935 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 16 04:53:15.803971 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 16 04:53:15.804008 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 16 04:53:15.804062 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 16 04:53:15.804115 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Sep 16 04:53:15.804160 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfb800000-0xfbffffff pref] Sep 16 04:53:15.804220 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfd200000-0xfd203fff 64bit pref] Sep 16 04:53:15.804262 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff] Sep 16 04:53:15.804313 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref] Sep 16 04:53:15.804356 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 16 04:53:15.804443 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:53:15.804513 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff] Sep 16 04:53:15.804561 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 16 04:53:15.804603 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Sep 16 04:53:15.804645 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Sep 16 04:53:15.804694 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:53:15.804736 kernel: pci 0000:00:02.1: BAR 0 [mem 
0xfea12000-0xfea12fff] Sep 16 04:53:15.804815 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 16 04:53:15.804864 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Sep 16 04:53:15.804910 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 16 04:53:15.804957 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:53:15.805001 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff] Sep 16 04:53:15.805042 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 16 04:53:15.805084 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Sep 16 04:53:15.805125 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 16 04:53:15.805171 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:53:15.805234 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff] Sep 16 04:53:15.805285 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 16 04:53:15.805327 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Sep 16 04:53:15.805367 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 16 04:53:15.805413 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:53:15.805454 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff] Sep 16 04:53:15.805494 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 16 04:53:15.805541 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Sep 16 04:53:15.805624 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 16 04:53:15.805696 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:53:15.805742 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff] Sep 16 04:53:15.805822 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 16 04:53:15.805869 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Sep 16 04:53:15.805911 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 16 04:53:15.805960 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:53:15.806015 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff] Sep 16 04:53:15.806056 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 16 04:53:15.806097 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Sep 16 04:53:15.806137 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 16 04:53:15.806206 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:53:15.806255 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff] Sep 16 04:53:15.806310 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Sep 16 04:53:15.806368 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Sep 16 04:53:15.806410 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 16 04:53:15.806457 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:53:15.806500 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff] Sep 16 04:53:15.806544 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 16 04:53:15.806585 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 16 04:53:15.806628 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 16 04:53:15.806676 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 16 04:53:15.806736 
kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 16 04:53:15.806815 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 16 04:53:15.806870 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc040-0xc05f] Sep 16 04:53:15.806912 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea1a000-0xfea1afff] Sep 16 04:53:15.806965 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 16 04:53:15.807007 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Sep 16 04:53:15.807057 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Sep 16 04:53:15.807101 kernel: pci 0000:01:00.0: BAR 1 [mem 0xfe880000-0xfe880fff] Sep 16 04:53:15.807145 kernel: pci 0000:01:00.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Sep 16 04:53:15.807216 kernel: pci 0000:01:00.0: ROM [mem 0xfe800000-0xfe87ffff pref] Sep 16 04:53:15.807268 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 16 04:53:15.807335 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Sep 16 04:53:15.807378 kernel: pci 0000:02:00.0: BAR 0 [mem 0xfe600000-0xfe603fff 64bit] Sep 16 04:53:15.807421 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 16 04:53:15.807472 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Sep 16 04:53:15.807516 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe400000-0xfe400fff] Sep 16 04:53:15.807559 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfcc00000-0xfcc03fff 64bit pref] Sep 16 04:53:15.807601 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 16 04:53:15.807651 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Sep 16 04:53:15.807694 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Sep 16 04:53:15.807736 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 16 04:53:15.807788 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Sep 16 04:53:15.807831 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfc800000-0xfc803fff 64bit pref] Sep 16 04:53:15.807875 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 16 04:53:15.807949 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Sep 16 04:53:15.808012 kernel: pci 0000:06:00.0: BAR 1 [mem 0xfde00000-0xfde00fff] Sep 16 04:53:15.808056 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfc600000-0xfc603fff 64bit pref] Sep 16 04:53:15.808100 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 16 04:53:15.808106 kernel: acpiphp: Slot [0] registered Sep 16 04:53:15.808157 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Sep 16 04:53:15.808231 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfdc80000-0xfdc80fff] Sep 16 04:53:15.808290 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfc400000-0xfc403fff 64bit pref] Sep 16 04:53:15.808334 kernel: pci 0000:07:00.0: ROM [mem 0xfdc00000-0xfdc7ffff pref] Sep 16 04:53:15.808377 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 16 04:53:15.808384 kernel: acpiphp: Slot [0-2] registered Sep 16 04:53:15.808425 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Sep 16 04:53:15.808431 kernel: acpiphp: Slot [0-3] registered Sep 16 04:53:15.808472 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 16 04:53:15.808481 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 16 04:53:15.808485 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 16 04:53:15.808490 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 16 04:53:15.808495 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 16 
04:53:15.808500 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 16 04:53:15.808505 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 16 04:53:15.808509 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 16 04:53:15.808514 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 16 04:53:15.808519 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 16 04:53:15.808525 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 16 04:53:15.808529 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 16 04:53:15.808534 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 16 04:53:15.808539 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 16 04:53:15.808544 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 16 04:53:15.808548 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 16 04:53:15.808553 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 16 04:53:15.808558 kernel: iommu: Default domain type: Translated Sep 16 04:53:15.808563 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 16 04:53:15.808568 kernel: PCI: Using ACPI for IRQ routing Sep 16 04:53:15.808573 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 16 04:53:15.808578 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 16 04:53:15.808582 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Sep 16 04:53:15.808625 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 16 04:53:15.808666 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 16 04:53:15.808708 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 16 04:53:15.808714 kernel: vgaarb: loaded Sep 16 04:53:15.808719 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 16 04:53:15.808724 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 16 04:53:15.808729 kernel: clocksource: Switched to clocksource kvm-clock Sep 16 04:53:15.808734 kernel: VFS: Disk quotas dquot_6.6.0 Sep 16 04:53:15.808739 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 16 04:53:15.808744 kernel: pnp: PnP ACPI init Sep 16 04:53:15.808792 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 16 04:53:15.808799 kernel: pnp: PnP ACPI: found 5 devices Sep 16 04:53:15.808804 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 16 04:53:15.808810 kernel: NET: Registered PF_INET protocol family Sep 16 04:53:15.808814 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 16 04:53:15.808819 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 16 04:53:15.808824 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 16 04:53:15.808829 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 16 04:53:15.808834 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 16 04:53:15.808839 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 16 04:53:15.808844 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 16 04:53:15.808848 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 16 04:53:15.808854 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 16 04:53:15.808859 kernel: NET: Registered PF_XDP protocol family Sep 16 04:53:15.808902 kernel: pci 
0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 16 04:53:15.808945 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 16 04:53:15.808987 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 16 04:53:15.809029 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned Sep 16 04:53:15.809091 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned Sep 16 04:53:15.809148 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned Sep 16 04:53:15.809218 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 16 04:53:15.809269 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Sep 16 04:53:15.809321 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Sep 16 04:53:15.809363 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 16 04:53:15.809403 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Sep 16 04:53:15.809444 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 16 04:53:15.809486 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 16 04:53:15.809527 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Sep 16 04:53:15.809567 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 16 04:53:15.809609 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 16 04:53:15.809650 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Sep 16 04:53:15.809691 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 16 04:53:15.809732 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 16 04:53:15.809817 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Sep 16 04:53:15.809873 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 16 04:53:15.809918 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 16 04:53:15.809962 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Sep 16 04:53:15.810004 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 16 04:53:15.810047 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 16 04:53:15.810089 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Sep 16 04:53:15.810131 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Sep 16 04:53:15.810187 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 16 04:53:15.810271 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Sep 16 04:53:15.810349 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Sep 16 04:53:15.810394 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Sep 16 04:53:15.810436 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 16 04:53:15.810477 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 16 04:53:15.810518 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Sep 16 04:53:15.810559 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 16 04:53:15.810601 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 16 04:53:15.810645 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 16 04:53:15.810686 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 16 04:53:15.810726 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 16 04:53:15.810798 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] 
Sep 16 04:53:15.810844 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 16 04:53:15.810883 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 16 04:53:15.810928 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Sep 16 04:53:15.810967 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Sep 16 04:53:15.811015 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Sep 16 04:53:15.811054 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Sep 16 04:53:15.811097 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Sep 16 04:53:15.811136 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 16 04:53:15.811194 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Sep 16 04:53:15.811235 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 16 04:53:15.811290 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Sep 16 04:53:15.811329 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 16 04:53:15.811374 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Sep 16 04:53:15.811449 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 16 04:53:15.811519 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Sep 16 04:53:15.811579 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Sep 16 04:53:15.811623 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 16 04:53:15.811666 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Sep 16 04:53:15.811704 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Sep 16 04:53:15.811741 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 16 04:53:15.811825 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Sep 16 04:53:15.811878 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Sep 16 04:53:15.811917 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 16 04:53:15.811928 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 16 04:53:15.811934 kernel: PCI: CLS 0 bytes, default 64 Sep 16 04:53:15.811939 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x229835b7123, max_idle_ns: 440795242976 ns Sep 16 04:53:15.811944 kernel: Initialise system trusted keyrings Sep 16 04:53:15.811949 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 16 04:53:15.811954 kernel: Key type asymmetric registered Sep 16 04:53:15.811959 kernel: Asymmetric key parser 'x509' registered Sep 16 04:53:15.811964 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 16 04:53:15.811969 kernel: io scheduler mq-deadline registered Sep 16 04:53:15.811975 kernel: io scheduler kyber registered Sep 16 04:53:15.811980 kernel: io scheduler bfq registered Sep 16 04:53:15.812032 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 16 04:53:15.812085 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 16 04:53:15.812129 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 16 04:53:15.812170 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 16 04:53:15.812237 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 16 04:53:15.812290 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 16 04:53:15.812334 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 16 04:53:15.812380 kernel: pcieport 0000:00:02.3: AER: enabled with 
IRQ 27 Sep 16 04:53:15.812421 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 16 04:53:15.812462 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 16 04:53:15.812506 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 16 04:53:15.812581 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 16 04:53:15.812645 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 16 04:53:15.812698 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 16 04:53:15.812745 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 16 04:53:15.812836 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 16 04:53:15.812848 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 16 04:53:15.812902 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Sep 16 04:53:15.812950 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Sep 16 04:53:15.812956 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 16 04:53:15.812965 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Sep 16 04:53:15.812970 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 16 04:53:15.812975 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 16 04:53:15.812981 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 16 04:53:15.812986 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 16 04:53:15.812991 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 16 04:53:15.813046 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 16 04:53:15.813054 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 16 04:53:15.813092 kernel: rtc_cmos 00:03: registered as rtc0 Sep 16 04:53:15.813132 kernel: rtc_cmos 00:03: setting system clock to 2025-09-16T04:53:15 UTC (1757998395) Sep 16 04:53:15.813169 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 16 04:53:15.813190 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 16 04:53:15.813195 kernel: NET: Registered PF_INET6 protocol family Sep 16 04:53:15.813201 kernel: Segment Routing with IPv6 Sep 16 04:53:15.813206 kernel: In-situ OAM (IOAM) with IPv6 Sep 16 04:53:15.813211 kernel: NET: Registered PF_PACKET protocol family Sep 16 04:53:15.813216 kernel: Key type dns_resolver registered Sep 16 04:53:15.813223 kernel: IPI shorthand broadcast: enabled Sep 16 04:53:15.813228 kernel: sched_clock: Marking stable (2521010551, 135272308)->(2672681503, -16398644) Sep 16 04:53:15.813233 kernel: registered taskstats version 1 Sep 16 04:53:15.813238 kernel: Loading compiled-in X.509 certificates Sep 16 04:53:15.813243 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf' Sep 16 04:53:15.813248 kernel: Demotion targets for Node 0: null Sep 16 04:53:15.813253 kernel: Key type .fscrypt registered Sep 16 04:53:15.813258 kernel: Key type fscrypt-provisioning registered Sep 16 04:53:15.813263 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 16 04:53:15.813269 kernel: ima: Allocated hash algorithm: sha1 Sep 16 04:53:15.813283 kernel: ima: No architecture policies found Sep 16 04:53:15.813289 kernel: clk: Disabling unused clocks Sep 16 04:53:15.813294 kernel: Warning: unable to open an initial console. 
Sep 16 04:53:15.813299 kernel: Freeing unused kernel image (initmem) memory: 54096K Sep 16 04:53:15.813304 kernel: Write protecting the kernel read-only data: 24576k Sep 16 04:53:15.813313 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 16 04:53:15.813321 kernel: Run /init as init process Sep 16 04:53:15.813326 kernel: with arguments: Sep 16 04:53:15.813333 kernel: /init Sep 16 04:53:15.813337 kernel: with environment: Sep 16 04:53:15.813342 kernel: HOME=/ Sep 16 04:53:15.813347 kernel: TERM=linux Sep 16 04:53:15.813352 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 04:53:15.813359 systemd[1]: Successfully made /usr/ read-only. Sep 16 04:53:15.813366 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:53:15.813373 systemd[1]: Detected virtualization kvm. Sep 16 04:53:15.813378 systemd[1]: Detected architecture x86-64. Sep 16 04:53:15.813383 systemd[1]: Running in initrd. Sep 16 04:53:15.813390 systemd[1]: No hostname configured, using default hostname. Sep 16 04:53:15.813399 systemd[1]: Hostname set to . Sep 16 04:53:15.813407 systemd[1]: Initializing machine ID from VM UUID. Sep 16 04:53:15.813416 systemd[1]: Queued start job for default target initrd.target. Sep 16 04:53:15.813421 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:53:15.813427 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:53:15.813434 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 16 04:53:15.813439 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:53:15.813444 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 16 04:53:15.813450 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 16 04:53:15.813456 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 16 04:53:15.813461 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 16 04:53:15.813468 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:53:15.813473 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:53:15.813478 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:53:15.813483 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:53:15.813488 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:53:15.813494 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:53:15.813499 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:53:15.813504 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:53:15.813509 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 16 04:53:15.813516 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 16 04:53:15.813521 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 16 04:53:15.813526 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:53:15.813532 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:53:15.813537 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:53:15.813542 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 16 04:53:15.813547 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:53:15.813553 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 16 04:53:15.813558 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 16 04:53:15.813564 systemd[1]: Starting systemd-fsck-usr.service... Sep 16 04:53:15.813570 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:53:15.813575 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:53:15.813595 systemd-journald[215]: Collecting audit messages is disabled. Sep 16 04:53:15.813612 systemd-journald[215]: Journal started Sep 16 04:53:15.813626 systemd-journald[215]: Runtime Journal (/run/log/journal/c49292a38cf74f5f88ac54fcab1701ff) is 4.8M, max 38.6M, 33.7M free. Sep 16 04:53:15.830225 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:53:15.838542 systemd-modules-load[217]: Inserted module 'overlay' Sep 16 04:53:15.841322 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:53:15.844709 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 16 04:53:15.848041 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:53:15.850702 systemd[1]: Finished systemd-fsck-usr.service. Sep 16 04:53:15.855668 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:53:15.863373 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:53:15.867762 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 16 04:53:15.871391 kernel: Bridge firewalling registered Sep 16 04:53:15.870757 systemd-modules-load[217]: Inserted module 'br_netfilter' Sep 16 04:53:15.873076 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:53:15.913771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:53:15.920405 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 16 04:53:15.922025 systemd-tmpfiles[228]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 16 04:53:15.925354 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:53:15.931070 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:53:15.932979 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:53:15.935714 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:53:15.942286 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:53:15.945556 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 16 04:53:15.949890 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:53:15.957252 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:53:15.960271 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 16 04:53:15.978549 systemd-resolved[245]: Positive Trust Anchors: Sep 16 04:53:15.978560 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:53:15.978579 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:53:15.981313 systemd-resolved[245]: Defaulting to hostname 'linux'. Sep 16 04:53:15.983709 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:53:15.984422 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:53:15.986721 dracut-cmdline[256]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 04:53:16.078243 kernel: SCSI subsystem initialized Sep 16 04:53:16.085202 kernel: Loading iSCSI transport class v2.0-870. Sep 16 04:53:16.106264 kernel: iscsi: registered transport (tcp) Sep 16 04:53:16.140586 kernel: iscsi: registered transport (qla4xxx) Sep 16 04:53:16.140664 kernel: QLogic iSCSI HBA Driver Sep 16 04:53:16.166671 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:53:16.197343 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:53:16.202391 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:53:16.266784 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 16 04:53:16.269755 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 16 04:53:16.341267 kernel: raid6: avx2x4 gen() 18338 MB/s Sep 16 04:53:16.359324 kernel: raid6: avx2x2 gen() 19718 MB/s Sep 16 04:53:16.378507 kernel: raid6: avx2x1 gen() 21190 MB/s Sep 16 04:53:16.378566 kernel: raid6: using algorithm avx2x1 gen() 21190 MB/s Sep 16 04:53:16.396364 kernel: raid6: .... xor() 30538 MB/s, rmw enabled Sep 16 04:53:16.396419 kernel: raid6: using avx2x2 recovery algorithm Sep 16 04:53:16.411220 kernel: xor: automatically using best checksumming function avx Sep 16 04:53:16.513227 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 16 04:53:16.518167 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:53:16.521809 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 16 04:53:16.543226 systemd-udevd[464]: Using default interface naming scheme 'v255'. Sep 16 04:53:16.546167 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:53:16.551535 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 16 04:53:16.563081 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation Sep 16 04:53:16.582566 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:53:16.585411 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:53:16.617454 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:53:16.622153 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 16 04:53:16.704224 kernel: cryptd: max_cpu_qlen set to 1000 Sep 16 04:53:16.709396 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:53:16.709480 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:53:16.713413 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:53:16.718306 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:53:16.752737 kernel: ACPI: bus type USB registered Sep 16 04:53:16.752800 kernel: usbcore: registered new interface driver usbfs Sep 16 04:53:16.752819 kernel: usbcore: registered new interface driver hub Sep 16 04:53:16.753872 kernel: usbcore: registered new device driver usb Sep 16 04:53:16.772349 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 16 04:53:16.772657 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 16 04:53:16.775261 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 16 04:53:16.776353 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 16 04:53:16.779382 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 16 04:53:16.779644 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 16 04:53:16.787223 kernel: hub 1-0:1.0: USB hub found Sep 16 04:53:16.789219 kernel: hub 1-0:1.0: 4 ports detected Sep 16 04:53:16.798224 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 16 04:53:16.798291 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 16 04:53:16.799293 kernel: AES CTR mode by8 optimization enabled Sep 16 04:53:16.800780 kernel: hub 2-0:1.0: USB hub found Sep 16 04:53:16.801024 kernel: hub 2-0:1.0: 4 ports detected Sep 16 04:53:16.813317 kernel: libata version 3.00 loaded. 
Sep 16 04:53:16.814234 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Sep 16 04:53:16.819201 kernel: scsi host0: Virtio SCSI HBA Sep 16 04:53:16.829195 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 16 04:53:16.851246 kernel: ahci 0000:00:1f.2: version 3.0 Sep 16 04:53:16.851399 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 16 04:53:16.851409 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 16 04:53:16.851467 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 16 04:53:16.851518 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 16 04:53:16.853194 kernel: scsi host1: ahci Sep 16 04:53:16.855192 kernel: scsi host2: ahci Sep 16 04:53:16.855306 kernel: sd 0:0:0:0: Power-on or device reset occurred Sep 16 04:53:16.855375 kernel: scsi host3: ahci Sep 16 04:53:16.855428 kernel: scsi host4: ahci Sep 16 04:53:16.855491 kernel: scsi host5: ahci Sep 16 04:53:16.855573 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 16 04:53:16.855647 kernel: scsi host6: ahci Sep 16 04:53:16.855698 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 16 04:53:16.855750 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Sep 16 04:53:16.855801 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 49 lpm-pol 1 Sep 16 04:53:16.855807 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 49 lpm-pol 1 Sep 16 04:53:16.855813 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 49 lpm-pol 1 Sep 16 04:53:16.855819 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 49 lpm-pol 1 Sep 16 04:53:16.855825 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 49 lpm-pol 1 Sep 16 04:53:16.855832 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 49 lpm-pol 1 Sep 16 04:53:16.855838 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 16 04:53:16.862196 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 16 04:53:16.862218 kernel: GPT:17805311 != 80003071 Sep 16 04:53:16.862225 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 16 04:53:16.862232 kernel: GPT:17805311 != 80003071 Sep 16 04:53:16.862237 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 16 04:53:16.862243 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:53:16.863194 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 16 04:53:16.893758 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 16 04:53:17.039313 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 16 04:53:17.167902 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 16 04:53:17.168834 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 16 04:53:17.168876 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 16 04:53:17.178202 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 16 04:53:17.178267 kernel: ata1.00: LPM support broken, forcing max_power Sep 16 04:53:17.178300 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 16 04:53:17.180781 kernel: ata1.00: applying bridge limits Sep 16 04:53:17.186216 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 16 04:53:17.186267 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 16 04:53:17.193817 kernel: ata1.00: LPM support broken, forcing max_power Sep 16 04:53:17.193866 kernel: ata1.00: configured for UDMA/100 Sep 16 04:53:17.199218 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 16 04:53:17.202249 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 16 04:53:17.224708 kernel: usbcore: registered new interface driver usbhid Sep 16 04:53:17.224771 kernel: usbhid: USB HID core driver Sep 16 04:53:17.243724 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Sep 16 04:53:17.243778 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 16 04:53:17.271728 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 16 04:53:17.272113 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 16 04:53:17.292214 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Sep 16 04:53:17.299542 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 16 04:53:17.346236 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 16 04:53:17.359561 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 16 04:53:17.369428 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 16 04:53:17.370404 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 16 04:53:17.372907 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 16 04:53:17.385286 disk-uuid[636]: Primary Header is updated. Sep 16 04:53:17.385286 disk-uuid[636]: Secondary Entries is updated. Sep 16 04:53:17.385286 disk-uuid[636]: Secondary Header is updated. Sep 16 04:53:17.392227 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:53:17.562235 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 16 04:53:17.565300 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:53:17.567921 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:53:17.569099 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:53:17.572608 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 04:53:17.598610 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:53:18.411516 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:53:18.412790 disk-uuid[637]: The operation has completed successfully. 
Sep 16 04:53:18.485756 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 04:53:18.485909 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 04:53:18.540919 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 04:53:18.566439 sh[666]: Success Sep 16 04:53:18.596240 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 16 04:53:18.596331 kernel: device-mapper: uevent: version 1.0.3 Sep 16 04:53:18.599022 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 04:53:18.617372 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 16 04:53:18.673091 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 16 04:53:18.679318 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 04:53:18.693741 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 16 04:53:18.708214 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (678) Sep 16 04:53:18.712317 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e Sep 16 04:53:18.712370 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:53:18.720686 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 16 04:53:18.720754 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 04:53:18.720765 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 04:53:18.723417 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 04:53:18.724870 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:53:18.726044 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 16 04:53:18.726700 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 04:53:18.729342 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 16 04:53:18.762225 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (715) Sep 16 04:53:18.764224 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:53:18.766227 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:53:18.771216 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 16 04:53:18.771260 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:53:18.771271 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:53:18.777263 kernel: BTRFS info (device sda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:53:18.778300 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 16 04:53:18.781301 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 16 04:53:18.805984 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:53:18.808482 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
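verity-setup.service builds /dev/mapper/usr from the verity.usrhash= root hash passed on the kernel command line; that hash is what device-mapper verifies every /usr block against at read time. A small Python sketch that pulls the same parameters back out of /proc/cmdline, assuming the Flatcar-style parameter names used in this boot:

    # Sketch: recover the dm-verity root hash and /usr device that
    # verity-setup.service consumed, from the live kernel command line.
    params = {}
    for token in open("/proc/cmdline").read().split():
        key, sep, value = token.partition("=")
        if sep:
            params[key] = value

    print("usr device  :", params.get("mount.usr"))
    print("usr roothash:", params.get("verity.usrhash"))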
Sep 16 04:53:18.838533 systemd-networkd[847]: lo: Link UP Sep 16 04:53:18.838541 systemd-networkd[847]: lo: Gained carrier Sep 16 04:53:18.839687 systemd-networkd[847]: Enumeration completed Sep 16 04:53:18.839763 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:53:18.840027 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:18.840030 systemd-networkd[847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:53:18.840482 systemd-networkd[847]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:18.840484 systemd-networkd[847]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:53:18.840796 systemd-networkd[847]: eth0: Link UP Sep 16 04:53:18.840993 systemd-networkd[847]: eth1: Link UP Sep 16 04:53:18.841094 systemd-networkd[847]: eth0: Gained carrier Sep 16 04:53:18.841101 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:18.842883 systemd[1]: Reached target network.target - Network. Sep 16 04:53:18.847999 systemd-networkd[847]: eth1: Gained carrier Sep 16 04:53:18.848015 systemd-networkd[847]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:18.876265 systemd-networkd[847]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 16 04:53:18.882481 ignition[794]: Ignition 2.22.0 Sep 16 04:53:18.882490 ignition[794]: Stage: fetch-offline Sep 16 04:53:18.882517 ignition[794]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:53:18.882521 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:53:18.882576 ignition[794]: parsed url from cmdline: "" Sep 16 04:53:18.882578 ignition[794]: no config URL provided Sep 16 04:53:18.882581 ignition[794]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:53:18.885101 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:53:18.882584 ignition[794]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:53:18.882587 ignition[794]: failed to fetch config: resource requires networking Sep 16 04:53:18.882730 ignition[794]: Ignition finished successfully Sep 16 04:53:18.886847 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
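systemd-networkd records, per interface, which .network file it matched (here /usr/lib/systemd/network/zz-default.network) in its runtime state directory. A short Python sketch reading that state; the /run/systemd/netif/links location and the NETWORK_FILE / OPER_STATE key names are assumptions about the usual layout rather than anything shown in this log.

    from pathlib import Path

    for state in sorted(Path("/run/systemd/netif/links").glob("*")):
        entries = dict(
            line.split("=", 1)
            for line in state.read_text().splitlines()
            if "=" in line
        )
        print(state.name,
              entries.get("NETWORK_FILE", "?"),
              entries.get("OPER_STATE", "?"))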
Sep 16 04:53:18.901227 systemd-networkd[847]: eth0: DHCPv4 address 37.27.203.193/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 16 04:53:18.912985 ignition[856]: Ignition 2.22.0 Sep 16 04:53:18.912994 ignition[856]: Stage: fetch Sep 16 04:53:18.913087 ignition[856]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:53:18.913092 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:53:18.913162 ignition[856]: parsed url from cmdline: "" Sep 16 04:53:18.913165 ignition[856]: no config URL provided Sep 16 04:53:18.913169 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:53:18.913227 ignition[856]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:53:18.913257 ignition[856]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 16 04:53:18.921081 ignition[856]: GET result: OK Sep 16 04:53:18.921140 ignition[856]: parsing config with SHA512: e33df0ef1b252683e04563d6cacea946f0454f9f2c6713185822ddd816c9a8f6f5e398464118ffb91394f49477e0e03f27146758a2db1caf4fc09958aecb322d Sep 16 04:53:18.924479 unknown[856]: fetched base config from "system" Sep 16 04:53:18.924487 unknown[856]: fetched base config from "system" Sep 16 04:53:18.924859 ignition[856]: fetch: fetch complete Sep 16 04:53:18.924492 unknown[856]: fetched user config from "hetzner" Sep 16 04:53:18.924863 ignition[856]: fetch: fetch passed Sep 16 04:53:18.927595 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 16 04:53:18.924891 ignition[856]: Ignition finished successfully Sep 16 04:53:18.930285 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 16 04:53:18.957774 ignition[862]: Ignition 2.22.0 Sep 16 04:53:18.957786 ignition[862]: Stage: kargs Sep 16 04:53:18.957909 ignition[862]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:53:18.957917 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:53:18.959827 ignition[862]: kargs: kargs passed Sep 16 04:53:18.961925 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 16 04:53:18.959868 ignition[862]: Ignition finished successfully Sep 16 04:53:18.964026 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 16 04:53:18.992972 ignition[869]: Ignition 2.22.0 Sep 16 04:53:18.992984 ignition[869]: Stage: disks Sep 16 04:53:18.993318 ignition[869]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:53:18.993324 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:53:18.993733 ignition[869]: disks: disks passed Sep 16 04:53:18.993757 ignition[869]: Ignition finished successfully Sep 16 04:53:18.996142 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 16 04:53:18.997019 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 16 04:53:18.997910 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 16 04:53:18.999341 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:53:18.999699 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:53:19.000077 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:53:19.001161 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 16 04:53:19.022723 systemd-fsck[877]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 16 04:53:19.025099 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
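In the fetch stage above, Ignition pulls the user config from the Hetzner metadata service and logs a SHA512 fingerprint of it. The same request and digest as a short Python sketch; the endpoint comes from the log, it is only reachable from inside such a VM, and everything else here is illustrative.

    import hashlib
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        body = resp.read()

    print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())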
Sep 16 04:53:19.026926 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 16 04:53:19.124301 kernel: EXT4-fs (sda9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none. Sep 16 04:53:19.124106 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 16 04:53:19.124955 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 16 04:53:19.128239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:53:19.131245 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 16 04:53:19.133568 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 16 04:53:19.136137 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 16 04:53:19.138208 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:53:19.144603 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 16 04:53:19.148319 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 16 04:53:19.160209 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (885) Sep 16 04:53:19.166260 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:53:19.166352 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:53:19.176308 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 16 04:53:19.176371 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:53:19.176386 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:53:19.181866 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 04:53:19.223291 coreos-metadata[887]: Sep 16 04:53:19.223 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 16 04:53:19.226015 coreos-metadata[887]: Sep 16 04:53:19.225 INFO Fetch successful Sep 16 04:53:19.228256 coreos-metadata[887]: Sep 16 04:53:19.227 INFO wrote hostname ci-4459-0-0-n-26104e5955 to /sysroot/etc/hostname Sep 16 04:53:19.231036 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 16 04:53:19.235761 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory Sep 16 04:53:19.242746 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory Sep 16 04:53:19.248409 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory Sep 16 04:53:19.253447 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory Sep 16 04:53:19.363483 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 16 04:53:19.366359 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 16 04:53:19.369382 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 16 04:53:19.386203 kernel: BTRFS info (device sda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:53:19.398952 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
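flatcar-metadata-hostname.service, finished above, essentially fetches the instance hostname from the metadata service and writes it into the sysroot's /etc/hostname. A hedged Python equivalent; the endpoint is taken from the log, while the output path is a stand-in because the real unit runs inside the initramfs and writes /sysroot/etc/hostname.

    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"
    TARGET = "/tmp/hostname-demo"   # stand-in for /sysroot/etc/hostname

    with urllib.request.urlopen(URL, timeout=5) as resp:
        hostname = resp.read().decode().strip()

    with open(TARGET, "w") as f:
        f.write(hostname + "\n")
    print("wrote hostname", hostname, "to", TARGET)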
Sep 16 04:53:19.407723 ignition[1002]: INFO : Ignition 2.22.0 Sep 16 04:53:19.407723 ignition[1002]: INFO : Stage: mount Sep 16 04:53:19.409637 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:53:19.409637 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:53:19.409637 ignition[1002]: INFO : mount: mount passed Sep 16 04:53:19.409637 ignition[1002]: INFO : Ignition finished successfully Sep 16 04:53:19.409444 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 16 04:53:19.411244 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 16 04:53:19.706626 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 16 04:53:19.708428 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:53:19.740212 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1014) Sep 16 04:53:19.740312 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:53:19.743095 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:53:19.753475 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 16 04:53:19.753535 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:53:19.757666 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:53:19.761226 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 04:53:19.796926 ignition[1030]: INFO : Ignition 2.22.0 Sep 16 04:53:19.796926 ignition[1030]: INFO : Stage: files Sep 16 04:53:19.799363 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:53:19.799363 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:53:19.799363 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping Sep 16 04:53:19.803602 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 16 04:53:19.803602 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 16 04:53:19.807096 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 16 04:53:19.808616 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 16 04:53:19.810390 unknown[1030]: wrote ssh authorized keys file for user: core Sep 16 04:53:19.811771 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 16 04:53:19.813752 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 16 04:53:19.815770 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 16 04:53:19.971642 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 16 04:53:20.019442 systemd-networkd[847]: eth0: Gained IPv6LL Sep 16 04:53:20.339468 systemd-networkd[847]: eth1: Gained IPv6LL Sep 16 04:53:20.901089 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 16 04:53:20.901089 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 04:53:20.905774 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 16 04:53:21.167687 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 16 04:53:21.232701 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 04:53:21.232701 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 16 04:53:21.237202 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 16 04:53:21.237202 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:53:21.237202 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:53:21.237202 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:53:21.237202 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:53:21.237202 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:53:21.237202 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:53:21.237202 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:53:21.250239 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:53:21.250239 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 16 04:53:21.250239 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 16 04:53:21.250239 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 16 04:53:21.250239 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 16 04:53:21.592140 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 16 04:53:21.817244 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 16 04:53:21.819432 ignition[1030]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 16 04:53:21.819432 ignition[1030]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:53:21.822818 ignition[1030]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:53:21.822818 ignition[1030]: INFO : files: op(c): [finished] processing 
unit "prepare-helm.service" Sep 16 04:53:21.822818 ignition[1030]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 16 04:53:21.822818 ignition[1030]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 16 04:53:21.822818 ignition[1030]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 16 04:53:21.822818 ignition[1030]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 16 04:53:21.822818 ignition[1030]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 16 04:53:21.822818 ignition[1030]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 16 04:53:21.822818 ignition[1030]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:53:21.822818 ignition[1030]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:53:21.822818 ignition[1030]: INFO : files: files passed Sep 16 04:53:21.822818 ignition[1030]: INFO : Ignition finished successfully Sep 16 04:53:21.827858 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 16 04:53:21.834359 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 16 04:53:21.849895 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 16 04:53:21.855325 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 16 04:53:21.861520 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 16 04:53:21.874958 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:53:21.874958 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:53:21.880355 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:53:21.877519 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:53:21.879051 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 16 04:53:21.883313 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 16 04:53:21.953518 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 16 04:53:21.953650 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 16 04:53:21.956696 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 16 04:53:21.959042 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 16 04:53:21.961687 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 16 04:53:21.963381 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 16 04:53:22.001466 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:53:22.005920 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 16 04:53:22.034650 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Sep 16 04:53:22.037437 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:53:22.038769 systemd[1]: Stopped target timers.target - Timer Units. Sep 16 04:53:22.040980 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 16 04:53:22.041233 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:53:22.043876 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 16 04:53:22.045420 systemd[1]: Stopped target basic.target - Basic System. Sep 16 04:53:22.047733 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 16 04:53:22.049937 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:53:22.051975 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 16 04:53:22.054405 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:53:22.056815 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 16 04:53:22.058947 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:53:22.061055 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 16 04:53:22.063069 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 16 04:53:22.065453 systemd[1]: Stopped target swap.target - Swaps. Sep 16 04:53:22.067293 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 16 04:53:22.067439 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:53:22.069867 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:53:22.071321 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:53:22.073122 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 16 04:53:22.074267 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:53:22.076531 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 16 04:53:22.076711 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 16 04:53:22.087698 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 16 04:53:22.087918 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:53:22.091021 systemd[1]: ignition-files.service: Deactivated successfully. Sep 16 04:53:22.091391 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 16 04:53:22.093498 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 16 04:53:22.093828 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 16 04:53:22.099415 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 16 04:53:22.103769 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 16 04:53:22.105423 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:53:22.110573 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 16 04:53:22.111861 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 16 04:53:22.112433 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:53:22.117244 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 16 04:53:22.117558 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 16 04:53:22.130345 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 16 04:53:22.130501 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 16 04:53:22.148216 ignition[1085]: INFO : Ignition 2.22.0 Sep 16 04:53:22.148216 ignition[1085]: INFO : Stage: umount Sep 16 04:53:22.148216 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:53:22.148216 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:53:22.157043 ignition[1085]: INFO : umount: umount passed Sep 16 04:53:22.157043 ignition[1085]: INFO : Ignition finished successfully Sep 16 04:53:22.153745 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 16 04:53:22.156174 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 16 04:53:22.156410 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 16 04:53:22.157829 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 16 04:53:22.157926 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 16 04:53:22.161344 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 16 04:53:22.161414 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 16 04:53:22.162966 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 16 04:53:22.163019 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 16 04:53:22.164666 systemd[1]: Stopped target network.target - Network. Sep 16 04:53:22.166316 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 16 04:53:22.166386 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:53:22.168093 systemd[1]: Stopped target paths.target - Path Units. Sep 16 04:53:22.169783 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 16 04:53:22.170235 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:53:22.171609 systemd[1]: Stopped target slices.target - Slice Units. Sep 16 04:53:22.173299 systemd[1]: Stopped target sockets.target - Socket Units. Sep 16 04:53:22.175039 systemd[1]: iscsid.socket: Deactivated successfully. Sep 16 04:53:22.175086 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:53:22.176934 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 16 04:53:22.176975 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:53:22.178897 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 16 04:53:22.178965 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 16 04:53:22.180823 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 16 04:53:22.180877 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 16 04:53:22.183146 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 16 04:53:22.185433 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 16 04:53:22.187850 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 16 04:53:22.187971 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 16 04:53:22.189269 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 16 04:53:22.189413 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 16 04:53:22.194717 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Sep 16 04:53:22.195628 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 16 04:53:22.195725 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 16 04:53:22.197781 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 16 04:53:22.197853 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:53:22.202626 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:53:22.202936 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 16 04:53:22.203063 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 16 04:53:22.206490 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 16 04:53:22.207037 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 16 04:53:22.209263 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 16 04:53:22.209328 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:53:22.212503 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 16 04:53:22.214684 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 16 04:53:22.214759 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:53:22.219322 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:53:22.219393 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:53:22.223529 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 16 04:53:22.223586 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 16 04:53:22.224780 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:53:22.233436 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 04:53:22.242104 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 16 04:53:22.247595 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:53:22.249226 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 16 04:53:22.249297 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 16 04:53:22.251240 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 16 04:53:22.251301 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:53:22.253372 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 16 04:53:22.253435 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:53:22.256749 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 16 04:53:22.256808 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 16 04:53:22.259216 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 16 04:53:22.259292 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:53:22.262656 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 16 04:53:22.264607 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 16 04:53:22.264687 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:53:22.269645 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Sep 16 04:53:22.269707 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:53:22.273330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:53:22.273392 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:53:22.278253 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 16 04:53:22.278375 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 16 04:53:22.284776 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 16 04:53:22.284882 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 16 04:53:22.292703 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 16 04:53:22.296341 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 16 04:53:22.314905 systemd[1]: Switching root. Sep 16 04:53:22.352322 systemd-journald[215]: Journal stopped Sep 16 04:53:23.312268 systemd-journald[215]: Received SIGTERM from PID 1 (systemd). Sep 16 04:53:23.312339 kernel: SELinux: policy capability network_peer_controls=1 Sep 16 04:53:23.312350 kernel: SELinux: policy capability open_perms=1 Sep 16 04:53:23.312358 kernel: SELinux: policy capability extended_socket_class=1 Sep 16 04:53:23.312369 kernel: SELinux: policy capability always_check_network=0 Sep 16 04:53:23.312379 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 16 04:53:23.312386 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 16 04:53:23.312394 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 16 04:53:23.312404 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 16 04:53:23.312415 kernel: SELinux: policy capability userspace_initial_context=0 Sep 16 04:53:23.312426 kernel: audit: type=1403 audit(1757998402.524:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 16 04:53:23.312436 systemd[1]: Successfully loaded SELinux policy in 52.874ms. Sep 16 04:53:23.312447 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.510ms. Sep 16 04:53:23.312456 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:53:23.312465 systemd[1]: Detected virtualization kvm. Sep 16 04:53:23.312473 systemd[1]: Detected architecture x86-64. Sep 16 04:53:23.312482 systemd[1]: Detected first boot. Sep 16 04:53:23.312492 systemd[1]: Hostname set to . Sep 16 04:53:23.312502 systemd[1]: Initializing machine ID from VM UUID. Sep 16 04:53:23.312510 zram_generator::config[1128]: No configuration found. Sep 16 04:53:23.312522 kernel: Guest personality initialized and is inactive Sep 16 04:53:23.312530 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 16 04:53:23.312537 kernel: Initialized host personality Sep 16 04:53:23.312545 kernel: NET: Registered PF_VSOCK protocol family Sep 16 04:53:23.312553 systemd[1]: Populated /etc with preset unit settings. Sep 16 04:53:23.312564 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 16 04:53:23.312573 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 16 04:53:23.312581 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Sep 16 04:53:23.312589 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 16 04:53:23.312597 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 16 04:53:23.312606 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 16 04:53:23.312614 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 16 04:53:23.312647 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 16 04:53:23.312663 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 16 04:53:23.312676 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 16 04:53:23.312686 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 16 04:53:23.312701 systemd[1]: Created slice user.slice - User and Session Slice. Sep 16 04:53:23.312712 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:53:23.312723 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:53:23.312734 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 16 04:53:23.312743 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 16 04:53:23.312752 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 16 04:53:23.312761 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:53:23.312770 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 16 04:53:23.312778 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:53:23.312788 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:53:23.312796 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 16 04:53:23.312805 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 16 04:53:23.312814 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 16 04:53:23.312822 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 16 04:53:23.312831 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:53:23.312839 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:53:23.312848 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:53:23.312857 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:53:23.312869 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 16 04:53:23.312881 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 16 04:53:23.316199 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 16 04:53:23.316238 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:53:23.316249 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:53:23.316258 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:53:23.316268 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 16 04:53:23.316291 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Sep 16 04:53:23.316310 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 16 04:53:23.316324 systemd[1]: Mounting media.mount - External Media Directory... Sep 16 04:53:23.316342 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:53:23.316356 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 16 04:53:23.316364 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 16 04:53:23.316373 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 16 04:53:23.316384 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 16 04:53:23.316398 systemd[1]: Reached target machines.target - Containers. Sep 16 04:53:23.316410 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 16 04:53:23.316420 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:53:23.316430 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:53:23.316439 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 16 04:53:23.316448 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:53:23.316456 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:53:23.316466 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:53:23.316475 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 16 04:53:23.316483 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:53:23.316492 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 16 04:53:23.316502 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 16 04:53:23.316512 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 16 04:53:23.316521 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 16 04:53:23.316530 systemd[1]: Stopped systemd-fsck-usr.service. Sep 16 04:53:23.316539 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:53:23.316548 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:53:23.316557 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:53:23.316566 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:53:23.316574 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 16 04:53:23.316584 kernel: fuse: init (API version 7.41) Sep 16 04:53:23.316594 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 16 04:53:23.316603 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:53:23.316612 systemd[1]: verity-setup.service: Deactivated successfully. Sep 16 04:53:23.316620 systemd[1]: Stopped verity-setup.service. 
Sep 16 04:53:23.316629 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:53:23.316640 kernel: ACPI: bus type drm_connector registered Sep 16 04:53:23.316649 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 16 04:53:23.316659 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 16 04:53:23.316671 systemd[1]: Mounted media.mount - External Media Directory. Sep 16 04:53:23.316680 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 16 04:53:23.316688 kernel: loop: module loaded Sep 16 04:53:23.316696 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 16 04:53:23.316705 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 16 04:53:23.316735 systemd-journald[1205]: Collecting audit messages is disabled. Sep 16 04:53:23.316752 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:53:23.316763 systemd-journald[1205]: Journal started Sep 16 04:53:23.316779 systemd-journald[1205]: Runtime Journal (/run/log/journal/c49292a38cf74f5f88ac54fcab1701ff) is 4.8M, max 38.6M, 33.7M free. Sep 16 04:53:23.042064 systemd[1]: Queued start job for default target multi-user.target. Sep 16 04:53:23.055331 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 16 04:53:23.055784 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 16 04:53:23.319651 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:53:23.322884 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 16 04:53:23.323034 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 16 04:53:23.323967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:53:23.324126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:53:23.324898 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:53:23.325011 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:53:23.326506 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:53:23.326641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:53:23.327465 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 16 04:53:23.327773 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 16 04:53:23.329494 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:53:23.329608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:53:23.330756 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:53:23.332620 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 16 04:53:23.333455 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 16 04:53:23.342660 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:53:23.349784 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:53:23.353352 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 16 04:53:23.357267 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Sep 16 04:53:23.357854 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 16 04:53:23.357881 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:53:23.360125 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 16 04:53:23.368333 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 16 04:53:23.368908 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:53:23.370706 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 16 04:53:23.372401 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 16 04:53:23.374270 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:53:23.376340 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 16 04:53:23.378578 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:53:23.379338 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:53:23.389484 systemd-journald[1205]: Time spent on flushing to /var/log/journal/c49292a38cf74f5f88ac54fcab1701ff is 26.664ms for 1162 entries. Sep 16 04:53:23.389484 systemd-journald[1205]: System Journal (/var/log/journal/c49292a38cf74f5f88ac54fcab1701ff) is 8M, max 584.8M, 576.8M free. Sep 16 04:53:23.436535 systemd-journald[1205]: Received client request to flush runtime journal. Sep 16 04:53:23.436578 kernel: loop0: detected capacity change from 0 to 110984 Sep 16 04:53:23.382276 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 16 04:53:23.384315 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 16 04:53:23.386738 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:53:23.388940 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 16 04:53:23.390973 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 16 04:53:23.395306 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 16 04:53:23.396086 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 16 04:53:23.396935 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 16 04:53:23.402691 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 16 04:53:23.436433 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:53:23.437822 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 16 04:53:23.455536 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 16 04:53:23.469213 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 16 04:53:23.471588 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 16 04:53:23.473163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:53:23.489210 kernel: loop1: detected capacity change from 0 to 128016 Sep 16 04:53:23.502073 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. 
Sep 16 04:53:23.502091 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Sep 16 04:53:23.505775 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:53:23.519802 kernel: loop2: detected capacity change from 0 to 8 Sep 16 04:53:23.535267 kernel: loop3: detected capacity change from 0 to 224512 Sep 16 04:53:23.579212 kernel: loop4: detected capacity change from 0 to 110984 Sep 16 04:53:23.594235 kernel: loop5: detected capacity change from 0 to 128016 Sep 16 04:53:23.609241 kernel: loop6: detected capacity change from 0 to 8 Sep 16 04:53:23.612216 kernel: loop7: detected capacity change from 0 to 224512 Sep 16 04:53:23.634568 (sd-merge)[1277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 16 04:53:23.634874 (sd-merge)[1277]: Merged extensions into '/usr'. Sep 16 04:53:23.639802 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)... Sep 16 04:53:23.639914 systemd[1]: Reloading... Sep 16 04:53:23.698392 zram_generator::config[1302]: No configuration found. Sep 16 04:53:23.860841 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 16 04:53:23.874419 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 16 04:53:23.874488 systemd[1]: Reloading finished in 234 ms. Sep 16 04:53:23.886167 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 16 04:53:23.887030 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 16 04:53:23.887856 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 16 04:53:23.899155 systemd[1]: Starting ensure-sysext.service... Sep 16 04:53:23.901294 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:53:23.902760 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:53:23.913549 systemd[1]: Reload requested from client PID 1347 ('systemctl') (unit ensure-sysext.service)... Sep 16 04:53:23.913648 systemd[1]: Reloading... Sep 16 04:53:23.927469 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 16 04:53:23.927712 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 16 04:53:23.927899 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 16 04:53:23.928070 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 16 04:53:23.928550 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 16 04:53:23.928746 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Sep 16 04:53:23.928809 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Sep 16 04:53:23.931555 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:53:23.931561 systemd-tmpfiles[1348]: Skipping /boot Sep 16 04:53:23.931909 systemd-udevd[1349]: Using default interface naming scheme 'v255'. Sep 16 04:53:23.935161 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. 
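The (sd-merge) lines above come from systemd-sysext merging the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner) into /usr. A small read-only Python sketch that lists the images systemd-sysext would discover, assuming its standard search directories; on this machine the kubernetes image arrives via the /etc/extensions/kubernetes.raw symlink written during the files stage earlier.

    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions",
                   "/var/lib/extensions", "/usr/lib/extensions"]

    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for entry in sorted(d.iterdir()):
            kind = "dir" if entry.is_dir() else "image"
            target = f" -> {entry.resolve()}" if entry.is_symlink() else ""
            print(f"{kind:5} {entry}{target}")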
Sep 16 04:53:23.935232 systemd-tmpfiles[1348]: Skipping /boot Sep 16 04:53:23.990211 zram_generator::config[1393]: No configuration found. Sep 16 04:53:24.122297 kernel: mousedev: PS/2 mouse device common for all mice Sep 16 04:53:24.146201 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 16 04:53:24.159208 kernel: ACPI: button: Power Button [PWRF] Sep 16 04:53:24.161058 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 16 04:53:24.161706 systemd[1]: Reloading finished in 247 ms. Sep 16 04:53:24.168615 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:53:24.170444 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:53:24.192337 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:53:24.195467 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 16 04:53:24.198970 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 16 04:53:24.202455 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:53:24.204504 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 04:53:24.207319 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 16 04:53:24.208782 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 16 04:53:24.211471 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:53:24.211590 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:53:24.214361 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:53:24.217680 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:53:24.219761 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:53:24.221301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:53:24.221378 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:53:24.221435 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:53:24.222794 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:53:24.222889 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:53:24.222975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:53:24.223017 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 16 04:53:24.225838 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 16 04:53:24.226271 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:53:24.230529 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:53:24.230621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:53:24.230706 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:53:24.230777 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:53:24.230837 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:53:24.236090 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:53:24.236827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:53:24.242013 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:53:24.243608 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:53:24.243713 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:53:24.243825 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:53:24.267689 systemd[1]: Finished ensure-sysext.service. Sep 16 04:53:24.268769 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 16 04:53:24.270823 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:53:24.270948 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:53:24.275258 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 16 04:53:24.281453 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:53:24.283911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:53:24.284642 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 16 04:53:24.285298 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:53:24.285391 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:53:24.286917 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:53:24.286956 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:53:24.288257 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Sep 16 04:53:24.288844 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:53:24.289093 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:53:24.326311 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 16 04:53:24.326518 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 16 04:53:24.340388 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 16 04:53:24.357480 augenrules[1517]: No rules Sep 16 04:53:24.358348 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 16 04:53:24.359536 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 16 04:53:24.364573 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:53:24.364741 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:53:24.365373 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 16 04:53:24.369488 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 16 04:53:24.370261 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 04:53:24.396687 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 16 04:53:24.398306 kernel: EDAC MC: Ver: 3.0.0 Sep 16 04:53:24.406859 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Sep 16 04:53:24.406922 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Sep 16 04:53:24.411194 kernel: Console: switching to colour dummy device 80x25 Sep 16 04:53:24.412959 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 16 04:53:24.413035 kernel: [drm] features: -context_init Sep 16 04:53:24.416197 kernel: [drm] number of scanouts: 1 Sep 16 04:53:24.416263 kernel: [drm] number of cap sets: 0 Sep 16 04:53:24.419259 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Sep 16 04:53:24.422596 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 16 04:53:24.422642 kernel: Console: switching to colour frame buffer device 160x50 Sep 16 04:53:24.428208 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 16 04:53:24.442887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:53:24.461330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:53:24.461495 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:53:24.463884 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:53:24.470307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:53:24.521046 systemd-resolved[1465]: Positive Trust Anchors: Sep 16 04:53:24.521320 systemd-resolved[1465]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:53:24.521372 systemd-resolved[1465]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:53:24.526476 systemd-resolved[1465]: Using system hostname 'ci-4459-0-0-n-26104e5955'. Sep 16 04:53:24.529491 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:53:24.529813 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:53:24.538499 systemd-networkd[1464]: lo: Link UP Sep 16 04:53:24.538508 systemd-networkd[1464]: lo: Gained carrier Sep 16 04:53:24.540789 systemd-networkd[1464]: Enumeration completed Sep 16 04:53:24.541217 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:53:24.541402 systemd[1]: Reached target network.target - Network. Sep 16 04:53:24.542993 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:24.543050 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:53:24.543916 systemd-networkd[1464]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:24.543993 systemd-networkd[1464]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:53:24.544310 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 16 04:53:24.544606 systemd-networkd[1464]: eth0: Link UP Sep 16 04:53:24.544747 systemd-networkd[1464]: eth0: Gained carrier Sep 16 04:53:24.544765 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:24.546465 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 16 04:53:24.547612 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 16 04:53:24.549322 systemd[1]: Reached target time-set.target - System Time Set. Sep 16 04:53:24.552226 systemd-networkd[1464]: eth1: Link UP Sep 16 04:53:24.553337 systemd-networkd[1464]: eth1: Gained carrier Sep 16 04:53:24.553364 systemd-networkd[1464]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:24.555402 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:53:24.555793 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:53:24.558059 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 16 04:53:24.558838 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 16 04:53:24.558944 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
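The positive trust anchor systemd-resolved lists above reads as the root-zone DS record used as the DNSSEC trust anchor; its numeric fields are the key tag, the signing algorithm, and the digest type. A minimal sketch (Python, illustrative only) that names the fields of the record exactly as logged:

    # Hedged sketch: name the fields of the root-zone DS record logged above.
    ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
    print(f"key tag {key_tag}, algorithm {algorithm} (8 = RSA/SHA-256), "
          f"digest type {digest_type} (2 = SHA-256), digest {digest[:16]}...")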
Sep 16 04:53:24.559959 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 16 04:53:24.560404 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 16 04:53:24.560453 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 16 04:53:24.560490 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 16 04:53:24.560507 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:53:24.560537 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:53:24.562202 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 16 04:53:24.563884 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 16 04:53:24.568688 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 16 04:53:24.571790 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 16 04:53:24.572086 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 16 04:53:24.575244 systemd-networkd[1464]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 16 04:53:24.576109 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection. Sep 16 04:53:24.581045 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 16 04:53:24.581938 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 16 04:53:24.585998 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 16 04:53:24.588318 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 16 04:53:24.592344 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:53:24.596634 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:53:24.599319 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:53:24.599353 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:53:24.600544 systemd[1]: Starting containerd.service - containerd container runtime... Sep 16 04:53:24.610585 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 16 04:53:24.616259 systemd-networkd[1464]: eth0: DHCPv4 address 37.27.203.193/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 16 04:53:24.616363 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 16 04:53:24.618238 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection. Sep 16 04:53:24.621269 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 16 04:53:24.623664 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 16 04:53:24.627755 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 16 04:53:24.628103 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 16 04:53:24.630378 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 16 04:53:24.632665 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 16 04:53:24.642413 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 16 04:53:24.645678 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 16 04:53:24.649395 coreos-metadata[1554]: Sep 16 04:53:24.645 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 16 04:53:24.652112 coreos-metadata[1554]: Sep 16 04:53:24.650 INFO Fetch successful Sep 16 04:53:24.652112 coreos-metadata[1554]: Sep 16 04:53:24.650 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 16 04:53:24.651346 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 16 04:53:24.653260 jq[1559]: false Sep 16 04:53:24.653412 coreos-metadata[1554]: Sep 16 04:53:24.652 INFO Fetch successful Sep 16 04:53:24.663902 oslogin_cache_refresh[1561]: Refreshing passwd entry cache Sep 16 04:53:24.666041 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing passwd entry cache Sep 16 04:53:24.666041 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting users, quitting Sep 16 04:53:24.656418 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 16 04:53:24.666041 oslogin_cache_refresh[1561]: Failure getting users, quitting Sep 16 04:53:24.666349 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 16 04:53:24.666349 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing group entry cache Sep 16 04:53:24.659014 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 16 04:53:24.666057 oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 16 04:53:24.663854 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 16 04:53:24.666092 oslogin_cache_refresh[1561]: Refreshing group entry cache Sep 16 04:53:24.664372 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 16 04:53:24.667038 oslogin_cache_refresh[1561]: Failure getting groups, quitting Sep 16 04:53:24.672425 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting groups, quitting Sep 16 04:53:24.672425 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 04:53:24.666861 systemd[1]: Starting update-engine.service - Update Engine... Sep 16 04:53:24.667046 oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 04:53:24.674094 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 16 04:53:24.680383 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 16 04:53:24.681884 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 16 04:53:24.682373 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 16 04:53:24.682571 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 16 04:53:24.683246 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 16 04:53:24.687471 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 16 04:53:24.688003 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
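coreos-metadata above reads the Hetzner instance metadata from the link-local service at 169.254.169.254; the two endpoints it logs can be fetched by hand as well. A minimal sketch, assuming it runs on the instance itself (the address is not reachable from outside):

    # Hedged sketch: fetch the same Hetzner metadata endpoints coreos-metadata logged above.
    from urllib.request import urlopen

    for path in ("metadata", "metadata/private-networks"):
        url = f"http://169.254.169.254/hetzner/v1/{path}"
        with urlopen(url, timeout=5) as resp:
            print(f"--- {url} ({resp.status}) ---")
            print(resp.read().decode())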
Sep 16 04:53:24.726903 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 04:53:24.728122 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 16 04:53:24.736386 extend-filesystems[1560]: Found /dev/sda6 Sep 16 04:53:24.746818 update_engine[1568]: I20250916 04:53:24.734669 1568 main.cc:92] Flatcar Update Engine starting Sep 16 04:53:24.747271 jq[1569]: true Sep 16 04:53:24.751663 tar[1575]: linux-amd64/LICENSE Sep 16 04:53:24.752696 tar[1575]: linux-amd64/helm Sep 16 04:53:24.753138 extend-filesystems[1560]: Found /dev/sda9 Sep 16 04:53:24.758856 systemd-logind[1567]: New seat seat0. Sep 16 04:53:24.759566 systemd-logind[1567]: Watching system buttons on /dev/input/event3 (Power Button) Sep 16 04:53:24.759578 systemd-logind[1567]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 16 04:53:24.759721 systemd[1]: Started systemd-logind.service - User Login Management. Sep 16 04:53:24.760998 extend-filesystems[1560]: Checking size of /dev/sda9 Sep 16 04:53:24.761119 (ntainerd)[1595]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 04:53:24.775601 jq[1597]: true Sep 16 04:53:24.787553 dbus-daemon[1555]: [system] SELinux support is enabled Sep 16 04:53:24.787931 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 04:53:24.794159 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 04:53:24.795231 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 04:53:24.798964 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 04:53:24.798980 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 04:53:24.805201 extend-filesystems[1560]: Resized partition /dev/sda9 Sep 16 04:53:24.816674 systemd[1]: Started update-engine.service - Update Engine. Sep 16 04:53:24.822627 update_engine[1568]: I20250916 04:53:24.816859 1568 update_check_scheduler.cc:74] Next update check in 5m50s Sep 16 04:53:24.822681 extend-filesystems[1612]: resize2fs 1.47.3 (8-Jul-2025) Sep 16 04:53:24.839446 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 16 04:53:24.853581 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 16 04:53:24.902543 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 16 04:53:24.908433 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 04:53:24.999047 bash[1630]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:53:24.999726 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 04:53:25.008819 systemd[1]: Starting sshkeys.service... 
Sep 16 04:53:25.017921 containerd[1595]: time="2025-09-16T04:53:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 04:53:25.019649 containerd[1595]: time="2025-09-16T04:53:25.019604345Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 04:53:25.036729 containerd[1595]: time="2025-09-16T04:53:25.036651612Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.46µs" Sep 16 04:53:25.036729 containerd[1595]: time="2025-09-16T04:53:25.036698452Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 04:53:25.036729 containerd[1595]: time="2025-09-16T04:53:25.036717482Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 04:53:25.036904 containerd[1595]: time="2025-09-16T04:53:25.036882842Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 04:53:25.036923 containerd[1595]: time="2025-09-16T04:53:25.036904312Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 04:53:25.036934 containerd[1595]: time="2025-09-16T04:53:25.036927452Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:53:25.037010 containerd[1595]: time="2025-09-16T04:53:25.036983572Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:53:25.037010 containerd[1595]: time="2025-09-16T04:53:25.037002892Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:53:25.038425 containerd[1595]: time="2025-09-16T04:53:25.038351983Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:53:25.038425 containerd[1595]: time="2025-09-16T04:53:25.038383523Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:53:25.038425 containerd[1595]: time="2025-09-16T04:53:25.038396133Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:53:25.038425 containerd[1595]: time="2025-09-16T04:53:25.038403773Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 04:53:25.038518 containerd[1595]: time="2025-09-16T04:53:25.038493743Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 04:53:25.038739 containerd[1595]: time="2025-09-16T04:53:25.038670613Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:53:25.038739 containerd[1595]: time="2025-09-16T04:53:25.038698633Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Sep 16 04:53:25.038739 containerd[1595]: time="2025-09-16T04:53:25.038708663Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 04:53:25.038790 containerd[1595]: time="2025-09-16T04:53:25.038749383Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 04:53:25.039635 containerd[1595]: time="2025-09-16T04:53:25.039293483Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 04:53:25.039635 containerd[1595]: time="2025-09-16T04:53:25.039372133Z" level=info msg="metadata content store policy set" policy=shared Sep 16 04:53:25.050493 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 04:53:25.053790 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 16 04:53:25.061050 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 16 04:53:25.065999 containerd[1595]: time="2025-09-16T04:53:25.065948454Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 04:53:25.066054 containerd[1595]: time="2025-09-16T04:53:25.066009095Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 04:53:25.066054 containerd[1595]: time="2025-09-16T04:53:25.066019475Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 04:53:25.066054 containerd[1595]: time="2025-09-16T04:53:25.066028705Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 04:53:25.066054 containerd[1595]: time="2025-09-16T04:53:25.066037585Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 04:53:25.066054 containerd[1595]: time="2025-09-16T04:53:25.066043965Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 04:53:25.066054 containerd[1595]: time="2025-09-16T04:53:25.066055495Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 04:53:25.066054 containerd[1595]: time="2025-09-16T04:53:25.066063435Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 04:53:25.066157 containerd[1595]: time="2025-09-16T04:53:25.066071055Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 04:53:25.066157 containerd[1595]: time="2025-09-16T04:53:25.066077965Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 04:53:25.066157 containerd[1595]: time="2025-09-16T04:53:25.066084415Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 04:53:25.066157 containerd[1595]: time="2025-09-16T04:53:25.066096955Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 04:53:25.066462 containerd[1595]: time="2025-09-16T04:53:25.066431275Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 04:53:25.066491 containerd[1595]: time="2025-09-16T04:53:25.066478425Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 04:53:25.066505 containerd[1595]: time="2025-09-16T04:53:25.066491705Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 04:53:25.066505 containerd[1595]: time="2025-09-16T04:53:25.066501925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 04:53:25.066710 containerd[1595]: time="2025-09-16T04:53:25.066509405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 04:53:25.066710 containerd[1595]: time="2025-09-16T04:53:25.066516345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 04:53:25.066710 containerd[1595]: time="2025-09-16T04:53:25.066526045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 04:53:25.066710 containerd[1595]: time="2025-09-16T04:53:25.066533355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 04:53:25.066710 containerd[1595]: time="2025-09-16T04:53:25.066553005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:53:25.066710 containerd[1595]: time="2025-09-16T04:53:25.066560455Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 04:53:25.066710 containerd[1595]: time="2025-09-16T04:53:25.066567235Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:53:25.066710 containerd[1595]: time="2025-09-16T04:53:25.066627815Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:53:25.066710 containerd[1595]: time="2025-09-16T04:53:25.066636675Z" level=info msg="Start snapshots syncer" Sep 16 04:53:25.066710 containerd[1595]: time="2025-09-16T04:53:25.066654255Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:53:25.067060 containerd[1595]: time="2025-09-16T04:53:25.066875705Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:53:25.067060 containerd[1595]: time="2025-09-16T04:53:25.066918155Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:53:25.067152 containerd[1595]: time="2025-09-16T04:53:25.067075585Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:53:25.067890 containerd[1595]: time="2025-09-16T04:53:25.067863945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:53:25.067890 containerd[1595]: time="2025-09-16T04:53:25.067888715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:53:25.067945 containerd[1595]: time="2025-09-16T04:53:25.067897625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:53:25.067945 containerd[1595]: time="2025-09-16T04:53:25.067904945Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:53:25.067945 containerd[1595]: time="2025-09-16T04:53:25.067912885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:53:25.067945 containerd[1595]: time="2025-09-16T04:53:25.067933635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:53:25.067945 containerd[1595]: time="2025-09-16T04:53:25.067941575Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:53:25.068040 containerd[1595]: time="2025-09-16T04:53:25.067958765Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:53:25.068040 containerd[1595]: 
time="2025-09-16T04:53:25.067965835Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:53:25.068040 containerd[1595]: time="2025-09-16T04:53:25.067971975Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:53:25.068040 containerd[1595]: time="2025-09-16T04:53:25.067994815Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:53:25.068040 containerd[1595]: time="2025-09-16T04:53:25.068036475Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:53:25.068040 containerd[1595]: time="2025-09-16T04:53:25.068042555Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:53:25.068155 containerd[1595]: time="2025-09-16T04:53:25.068048925Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:53:25.068155 containerd[1595]: time="2025-09-16T04:53:25.068053875Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:53:25.068155 containerd[1595]: time="2025-09-16T04:53:25.068060225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:53:25.068155 containerd[1595]: time="2025-09-16T04:53:25.068067175Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:53:25.068155 containerd[1595]: time="2025-09-16T04:53:25.068078175Z" level=info msg="runtime interface created" Sep 16 04:53:25.068155 containerd[1595]: time="2025-09-16T04:53:25.068081325Z" level=info msg="created NRI interface" Sep 16 04:53:25.068155 containerd[1595]: time="2025-09-16T04:53:25.068086585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:53:25.068155 containerd[1595]: time="2025-09-16T04:53:25.068094775Z" level=info msg="Connect containerd service" Sep 16 04:53:25.071881 containerd[1595]: time="2025-09-16T04:53:25.070586676Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:53:25.071881 containerd[1595]: time="2025-09-16T04:53:25.071260257Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:53:25.092102 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 16 04:53:25.092158 coreos-metadata[1647]: Sep 16 04:53:25.087 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 16 04:53:25.092158 coreos-metadata[1647]: Sep 16 04:53:25.088 INFO Fetch successful Sep 16 04:53:25.093591 unknown[1647]: wrote ssh authorized keys file for user: core Sep 16 04:53:25.094111 extend-filesystems[1612]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 16 04:53:25.094111 extend-filesystems[1612]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 16 04:53:25.094111 extend-filesystems[1612]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. 
Sep 16 04:53:25.108117 extend-filesystems[1560]: Resized filesystem in /dev/sda9 Sep 16 04:53:25.095349 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 04:53:25.095502 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 16 04:53:25.145209 update-ssh-keys[1658]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:53:25.145805 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 16 04:53:25.148624 systemd[1]: Finished sshkeys.service. Sep 16 04:53:25.188061 containerd[1595]: time="2025-09-16T04:53:25.188014875Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 04:53:25.188061 containerd[1595]: time="2025-09-16T04:53:25.188065665Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 04:53:25.188222 containerd[1595]: time="2025-09-16T04:53:25.188087135Z" level=info msg="Start subscribing containerd event" Sep 16 04:53:25.188222 containerd[1595]: time="2025-09-16T04:53:25.188106345Z" level=info msg="Start recovering state" Sep 16 04:53:25.190154 containerd[1595]: time="2025-09-16T04:53:25.190130576Z" level=info msg="Start event monitor" Sep 16 04:53:25.190154 containerd[1595]: time="2025-09-16T04:53:25.190154886Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:53:25.190249 containerd[1595]: time="2025-09-16T04:53:25.190160946Z" level=info msg="Start streaming server" Sep 16 04:53:25.190249 containerd[1595]: time="2025-09-16T04:53:25.190171136Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:53:25.190906 containerd[1595]: time="2025-09-16T04:53:25.190884407Z" level=info msg="runtime interface starting up..." Sep 16 04:53:25.190906 containerd[1595]: time="2025-09-16T04:53:25.190902227Z" level=info msg="starting plugins..." Sep 16 04:53:25.190939 containerd[1595]: time="2025-09-16T04:53:25.190921487Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 04:53:25.192671 systemd[1]: Started containerd.service - containerd container runtime. Sep 16 04:53:25.192854 containerd[1595]: time="2025-09-16T04:53:25.192833047Z" level=info msg="containerd successfully booted in 0.176111s" Sep 16 04:53:25.247333 sshd_keygen[1584]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 16 04:53:25.261964 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 16 04:53:25.267377 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 16 04:53:25.275931 systemd[1]: issuegen.service: Deactivated successfully. Sep 16 04:53:25.276073 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 16 04:53:25.277979 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 16 04:53:25.296120 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 16 04:53:25.303423 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 16 04:53:25.308976 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 16 04:53:25.313369 tar[1575]: linux-amd64/README.md Sep 16 04:53:25.313426 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 04:53:25.334197 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 04:53:26.227478 systemd-networkd[1464]: eth1: Gained IPv6LL Sep 16 04:53:26.228244 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection. Sep 16 04:53:26.230943 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Sep 16 04:53:26.233340 systemd[1]: Reached target network-online.target - Network is Online. Sep 16 04:53:26.236815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:53:26.241484 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 04:53:26.271889 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 16 04:53:26.355595 systemd-networkd[1464]: eth0: Gained IPv6LL Sep 16 04:53:26.356899 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection. Sep 16 04:53:27.519328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:53:27.521977 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 04:53:27.525923 systemd[1]: Startup finished in 2.576s (kernel) + 6.906s (initrd) + 5.053s (userspace) = 14.535s. Sep 16 04:53:27.530595 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:53:28.253036 kubelet[1704]: E0916 04:53:28.252910 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:53:28.256219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:53:28.256489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:53:28.257241 systemd[1]: kubelet.service: Consumed 1.419s CPU time, 265.9M memory peak. Sep 16 04:53:31.435391 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 04:53:31.437116 systemd[1]: Started sshd@0-37.27.203.193:22-139.178.89.65:44398.service - OpenSSH per-connection server daemon (139.178.89.65:44398). Sep 16 04:53:32.544230 sshd[1716]: Accepted publickey for core from 139.178.89.65 port 44398 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:53:32.545457 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:53:32.554860 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 04:53:32.556861 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 04:53:32.569247 systemd-logind[1567]: New session 1 of user core. Sep 16 04:53:32.577076 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 04:53:32.580381 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 04:53:32.600414 (systemd)[1721]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 04:53:32.603816 systemd-logind[1567]: New session c1 of user core. Sep 16 04:53:32.747269 systemd[1721]: Queued start job for default target default.target. Sep 16 04:53:32.757796 systemd[1721]: Created slice app.slice - User Application Slice. Sep 16 04:53:32.757814 systemd[1721]: Reached target paths.target - Paths. Sep 16 04:53:32.757838 systemd[1721]: Reached target timers.target - Timers. Sep 16 04:53:32.758712 systemd[1721]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 04:53:32.776637 systemd[1721]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 04:53:32.776772 systemd[1721]: Reached target sockets.target - Sockets. 
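The startup summary above is the plain sum of the three phases systemd reports; a one-line check reproduces the logged total:

    # Check that the "Startup finished" phases above add up to the logged total.
    kernel, initrd, userspace = 2.576, 6.906, 5.053
    print(f"{kernel + initrd + userspace:.3f}s")   # 14.535s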
Sep 16 04:53:32.776837 systemd[1721]: Reached target basic.target - Basic System. Sep 16 04:53:32.776891 systemd[1721]: Reached target default.target - Main User Target. Sep 16 04:53:32.776923 systemd[1721]: Startup finished in 165ms. Sep 16 04:53:32.777414 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 04:53:32.787535 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 04:53:33.519636 systemd[1]: Started sshd@1-37.27.203.193:22-139.178.89.65:44406.service - OpenSSH per-connection server daemon (139.178.89.65:44406). Sep 16 04:53:34.516940 sshd[1732]: Accepted publickey for core from 139.178.89.65 port 44406 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:53:34.518717 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:53:34.525788 systemd-logind[1567]: New session 2 of user core. Sep 16 04:53:34.533435 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 04:53:35.190614 sshd[1735]: Connection closed by 139.178.89.65 port 44406 Sep 16 04:53:35.191447 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Sep 16 04:53:35.195926 systemd[1]: sshd@1-37.27.203.193:22-139.178.89.65:44406.service: Deactivated successfully. Sep 16 04:53:35.198033 systemd[1]: session-2.scope: Deactivated successfully. Sep 16 04:53:35.199001 systemd-logind[1567]: Session 2 logged out. Waiting for processes to exit. Sep 16 04:53:35.200938 systemd-logind[1567]: Removed session 2. Sep 16 04:53:35.391245 systemd[1]: Started sshd@2-37.27.203.193:22-139.178.89.65:44416.service - OpenSSH per-connection server daemon (139.178.89.65:44416). Sep 16 04:53:36.480486 sshd[1741]: Accepted publickey for core from 139.178.89.65 port 44416 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:53:36.482146 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:53:36.489522 systemd-logind[1567]: New session 3 of user core. Sep 16 04:53:36.497460 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 04:53:37.216029 sshd[1744]: Connection closed by 139.178.89.65 port 44416 Sep 16 04:53:37.216768 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Sep 16 04:53:37.221111 systemd-logind[1567]: Session 3 logged out. Waiting for processes to exit. Sep 16 04:53:37.221928 systemd[1]: sshd@2-37.27.203.193:22-139.178.89.65:44416.service: Deactivated successfully. Sep 16 04:53:37.223762 systemd[1]: session-3.scope: Deactivated successfully. Sep 16 04:53:37.226099 systemd-logind[1567]: Removed session 3. Sep 16 04:53:37.400671 systemd[1]: Started sshd@3-37.27.203.193:22-139.178.89.65:44420.service - OpenSSH per-connection server daemon (139.178.89.65:44420). Sep 16 04:53:38.305379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 04:53:38.307732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:53:38.442758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 04:53:38.444675 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:53:38.479982 kubelet[1761]: E0916 04:53:38.479902 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:53:38.483983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:53:38.484479 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:53:38.485150 systemd[1]: kubelet.service: Consumed 146ms CPU time, 111.1M memory peak. Sep 16 04:53:38.490838 sshd[1750]: Accepted publickey for core from 139.178.89.65 port 44420 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:53:38.492450 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:53:38.499266 systemd-logind[1567]: New session 4 of user core. Sep 16 04:53:38.504392 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 04:53:39.235174 sshd[1768]: Connection closed by 139.178.89.65 port 44420 Sep 16 04:53:39.235975 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Sep 16 04:53:39.240124 systemd[1]: sshd@3-37.27.203.193:22-139.178.89.65:44420.service: Deactivated successfully. Sep 16 04:53:39.242716 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 04:53:39.245359 systemd-logind[1567]: Session 4 logged out. Waiting for processes to exit. Sep 16 04:53:39.246915 systemd-logind[1567]: Removed session 4. Sep 16 04:53:39.421536 systemd[1]: Started sshd@4-37.27.203.193:22-139.178.89.65:44432.service - OpenSSH per-connection server daemon (139.178.89.65:44432). Sep 16 04:53:40.521773 sshd[1774]: Accepted publickey for core from 139.178.89.65 port 44432 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:53:40.523341 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:53:40.529842 systemd-logind[1567]: New session 5 of user core. Sep 16 04:53:40.536470 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 16 04:53:41.101121 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 04:53:41.101419 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:53:41.121543 sudo[1778]: pam_unix(sudo:session): session closed for user root Sep 16 04:53:41.297790 sshd[1777]: Connection closed by 139.178.89.65 port 44432 Sep 16 04:53:41.298595 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Sep 16 04:53:41.301862 systemd[1]: sshd@4-37.27.203.193:22-139.178.89.65:44432.service: Deactivated successfully. Sep 16 04:53:41.303400 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 04:53:41.304460 systemd-logind[1567]: Session 5 logged out. Waiting for processes to exit. Sep 16 04:53:41.305559 systemd-logind[1567]: Removed session 5. Sep 16 04:53:41.458671 systemd[1]: Started sshd@5-37.27.203.193:22-139.178.89.65:43868.service - OpenSSH per-connection server daemon (139.178.89.65:43868). 
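This is the same kubelet failure as before: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits and systemd keeps scheduling restarts roughly ten seconds apart (04:53:28, 04:53:38, and again at 04:53:48 further down). A hedged timestamp check with the values from this log:

    # Hedged sketch: gaps between the kubelet failure timestamps in this log.
    from datetime import datetime

    failures = ["04:53:28.252910", "04:53:38.479902", "04:53:48.770535"]
    times = [datetime.strptime(t, "%H:%M:%S.%f") for t in failures]
    for earlier, later in zip(times, times[1:]):
        print(f"restart gap: {(later - earlier).total_seconds():.1f}s")  # ~10.2s, ~10.3s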
Sep 16 04:53:42.438064 sshd[1784]: Accepted publickey for core from 139.178.89.65 port 43868 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:53:42.439968 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:53:42.447141 systemd-logind[1567]: New session 6 of user core. Sep 16 04:53:42.453406 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 16 04:53:42.954890 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 04:53:42.955085 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:53:42.960269 sudo[1789]: pam_unix(sudo:session): session closed for user root Sep 16 04:53:42.967343 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 04:53:42.967695 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:53:42.980536 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:53:43.029510 augenrules[1811]: No rules Sep 16 04:53:43.030936 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:53:43.031296 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:53:43.033819 sudo[1788]: pam_unix(sudo:session): session closed for user root Sep 16 04:53:43.191820 sshd[1787]: Connection closed by 139.178.89.65 port 43868 Sep 16 04:53:43.192649 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Sep 16 04:53:43.197317 systemd[1]: sshd@5-37.27.203.193:22-139.178.89.65:43868.service: Deactivated successfully. Sep 16 04:53:43.199611 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 04:53:43.201592 systemd-logind[1567]: Session 6 logged out. Waiting for processes to exit. Sep 16 04:53:43.203933 systemd-logind[1567]: Removed session 6. Sep 16 04:53:43.373662 systemd[1]: Started sshd@6-37.27.203.193:22-139.178.89.65:43884.service - OpenSSH per-connection server daemon (139.178.89.65:43884). Sep 16 04:53:44.369205 sshd[1820]: Accepted publickey for core from 139.178.89.65 port 43884 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:53:44.370898 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:53:44.378729 systemd-logind[1567]: New session 7 of user core. Sep 16 04:53:44.397515 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 16 04:53:44.885121 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 04:53:44.885355 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:53:45.313612 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 16 04:53:45.331636 (dockerd)[1842]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 04:53:45.679515 dockerd[1842]: time="2025-09-16T04:53:45.679417920Z" level=info msg="Starting up" Sep 16 04:53:45.680960 dockerd[1842]: time="2025-09-16T04:53:45.680904660Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 04:53:45.694609 dockerd[1842]: time="2025-09-16T04:53:45.694535836Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 04:53:45.759135 dockerd[1842]: time="2025-09-16T04:53:45.758889623Z" level=info msg="Loading containers: start." Sep 16 04:53:45.773227 kernel: Initializing XFRM netlink socket Sep 16 04:53:46.035090 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection. Sep 16 04:53:46.067264 systemd-timesyncd[1487]: Contacted time server 51.75.67.47:123 (2.flatcar.pool.ntp.org). Sep 16 04:53:46.067369 systemd-timesyncd[1487]: Initial clock synchronization to Tue 2025-09-16 04:53:46.298035 UTC. Sep 16 04:53:46.084465 systemd-networkd[1464]: docker0: Link UP Sep 16 04:53:46.090035 dockerd[1842]: time="2025-09-16T04:53:46.089973431Z" level=info msg="Loading containers: done." Sep 16 04:53:46.110076 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck74929392-merged.mount: Deactivated successfully. Sep 16 04:53:46.117894 dockerd[1842]: time="2025-09-16T04:53:46.117807672Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 04:53:46.118080 dockerd[1842]: time="2025-09-16T04:53:46.117919102Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 04:53:46.118080 dockerd[1842]: time="2025-09-16T04:53:46.118042362Z" level=info msg="Initializing buildkit" Sep 16 04:53:46.146880 dockerd[1842]: time="2025-09-16T04:53:46.146817054Z" level=info msg="Completed buildkit initialization" Sep 16 04:53:46.155215 dockerd[1842]: time="2025-09-16T04:53:46.154356118Z" level=info msg="Daemon has completed initialization" Sep 16 04:53:46.155215 dockerd[1842]: time="2025-09-16T04:53:46.154426828Z" level=info msg="API listen on /run/docker.sock" Sep 16 04:53:46.157258 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 04:53:47.496736 containerd[1595]: time="2025-09-16T04:53:47.496637208Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 16 04:53:47.983292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692794582.mount: Deactivated successfully. Sep 16 04:53:48.572463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 04:53:48.574769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:53:48.689613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
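The Docker daemon above reports that its API listens on /run/docker.sock; the Engine API can be exercised over that unix socket with a plain HTTP request. A minimal sketch, assuming root or docker-group access on the node:

    # Hedged sketch: raw HTTP request to the Docker Engine API over the unix socket
    # the daemon above says it is listening on.
    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
    s.close()
    print(reply.decode(errors="replace"))   # engine and API version details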
Sep 16 04:53:48.697833 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:53:48.770600 kubelet[2112]: E0916 04:53:48.770535 2112 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:53:48.773076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:53:48.773301 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:53:48.774026 systemd[1]: kubelet.service: Consumed 147ms CPU time, 108.8M memory peak. Sep 16 04:53:49.183065 containerd[1595]: time="2025-09-16T04:53:49.183011212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:49.184064 containerd[1595]: time="2025-09-16T04:53:49.184032887Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28838016" Sep 16 04:53:49.185495 containerd[1595]: time="2025-09-16T04:53:49.184596032Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:49.192088 containerd[1595]: time="2025-09-16T04:53:49.192053672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:49.192589 containerd[1595]: time="2025-09-16T04:53:49.192563431Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.695865768s" Sep 16 04:53:49.192642 containerd[1595]: time="2025-09-16T04:53:49.192635153Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 16 04:53:49.193205 containerd[1595]: time="2025-09-16T04:53:49.193170783Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 16 04:53:50.411325 containerd[1595]: time="2025-09-16T04:53:50.411266773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:50.412510 containerd[1595]: time="2025-09-16T04:53:50.412382677Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787049" Sep 16 04:53:50.413522 containerd[1595]: time="2025-09-16T04:53:50.413495523Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:50.415702 containerd[1595]: time="2025-09-16T04:53:50.415678127Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:50.416152 containerd[1595]: time="2025-09-16T04:53:50.416128313Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.222874762s" Sep 16 04:53:50.416203 containerd[1595]: time="2025-09-16T04:53:50.416154755Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 16 04:53:50.416499 containerd[1595]: time="2025-09-16T04:53:50.416480438Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 16 04:53:51.522944 containerd[1595]: time="2025-09-16T04:53:51.522894388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:51.523930 containerd[1595]: time="2025-09-16T04:53:51.523779689Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176311" Sep 16 04:53:51.524700 containerd[1595]: time="2025-09-16T04:53:51.524670063Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:51.526505 containerd[1595]: time="2025-09-16T04:53:51.526475037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:51.527235 containerd[1595]: time="2025-09-16T04:53:51.527209994Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.110709295s" Sep 16 04:53:51.527235 containerd[1595]: time="2025-09-16T04:53:51.527235661Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 16 04:53:51.528030 containerd[1595]: time="2025-09-16T04:53:51.527994791Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 16 04:53:52.590539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825957368.mount: Deactivated successfully. 
Sep 16 04:53:53.110333 containerd[1595]: time="2025-09-16T04:53:53.110275936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:53.111620 containerd[1595]: time="2025-09-16T04:53:53.111438556Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924234" Sep 16 04:53:53.112483 containerd[1595]: time="2025-09-16T04:53:53.112454669Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:53.114662 containerd[1595]: time="2025-09-16T04:53:53.114633413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:53.115413 containerd[1595]: time="2025-09-16T04:53:53.115383482Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.587358402s" Sep 16 04:53:53.115511 containerd[1595]: time="2025-09-16T04:53:53.115493834Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 16 04:53:53.116120 containerd[1595]: time="2025-09-16T04:53:53.116088374Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 16 04:53:53.603006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260076086.mount: Deactivated successfully. 
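
Each "Pulled image" entry above records a compressed size and the wall-clock time the pull took (for example kube-proxy: size "30923225" in 1.587358402s). A rough throughput figure can be read straight off those numbers; the sketch below does that arithmetic for the four images pulled so far, under the assumption that the reported size is the number of bytes transferred.

    # Rough throughput estimate from the "Pulled image ... size ... in ..." entries above.
    # Illustration only; assumes the reported size is the transferred byte count.
    pulls = {
        "kube-apiserver:v1.32.9":          (28834515, 1.695865768),
        "kube-controller-manager:v1.32.9": (26421706, 1.222874762),
        "kube-scheduler:v1.32.9":          (20810986, 1.110709295),
        "kube-proxy:v1.32.9":              (30923225, 1.587358402),
    }
    for image, (size_bytes, seconds) in pulls.items():
        mib_per_s = size_bytes / seconds / (1024 * 1024)
        print(f"{image:35s} {mib_per_s:6.1f} MiB/s")
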
Sep 16 04:53:54.361962 containerd[1595]: time="2025-09-16T04:53:54.361888649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:54.363093 containerd[1595]: time="2025-09-16T04:53:54.362947233Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Sep 16 04:53:54.363938 containerd[1595]: time="2025-09-16T04:53:54.363907756Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:54.366345 containerd[1595]: time="2025-09-16T04:53:54.366319224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:54.367084 containerd[1595]: time="2025-09-16T04:53:54.367046157Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.250918057s" Sep 16 04:53:54.367125 containerd[1595]: time="2025-09-16T04:53:54.367085306Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 16 04:53:54.367583 containerd[1595]: time="2025-09-16T04:53:54.367541625Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:53:54.804974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3407181469.mount: Deactivated successfully. 
Sep 16 04:53:54.813262 containerd[1595]: time="2025-09-16T04:53:54.813141295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:53:54.814552 containerd[1595]: time="2025-09-16T04:53:54.814183535Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Sep 16 04:53:54.815679 containerd[1595]: time="2025-09-16T04:53:54.815647981Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:53:54.818631 containerd[1595]: time="2025-09-16T04:53:54.818595976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:53:54.818976 containerd[1595]: time="2025-09-16T04:53:54.818943352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 451.376332ms" Sep 16 04:53:54.818976 containerd[1595]: time="2025-09-16T04:53:54.818970790Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 16 04:53:54.820252 containerd[1595]: time="2025-09-16T04:53:54.819424196Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 16 04:53:55.312453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount539207258.mount: Deactivated successfully. 
Sep 16 04:53:56.916914 containerd[1595]: time="2025-09-16T04:53:56.916831780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:56.918501 containerd[1595]: time="2025-09-16T04:53:56.918439684Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682132" Sep 16 04:53:56.919895 containerd[1595]: time="2025-09-16T04:53:56.919818818Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:56.924084 containerd[1595]: time="2025-09-16T04:53:56.924016420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:56.925754 containerd[1595]: time="2025-09-16T04:53:56.925520339Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.106061995s" Sep 16 04:53:56.925754 containerd[1595]: time="2025-09-16T04:53:56.925575605Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 16 04:53:58.822145 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 16 04:53:58.827423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:53:59.022880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:53:59.033581 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:53:59.101531 kubelet[2278]: E0916 04:53:59.101397 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:53:59.104881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:53:59.105222 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:53:59.106019 systemd[1]: kubelet.service: Consumed 181ms CPU time, 109.4M memory peak. Sep 16 04:53:59.866954 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:53:59.867382 systemd[1]: kubelet.service: Consumed 181ms CPU time, 109.4M memory peak. Sep 16 04:53:59.869259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:53:59.902537 systemd[1]: Reload requested from client PID 2293 ('systemctl') (unit session-7.scope)... Sep 16 04:53:59.902552 systemd[1]: Reloading... Sep 16 04:53:59.977216 zram_generator::config[2337]: No configuration found. Sep 16 04:54:00.121121 systemd[1]: Reloading finished in 218 ms. Sep 16 04:54:00.174557 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:54:00.174649 systemd[1]: kubelet.service: Failed with result 'signal'. 
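
Both kubelet starts so far fail identically: /var/lib/kubelet/config.yaml does not exist, the process exits with status 1, and systemd schedules another restart. On a kubeadm-managed node that file is normally written during `kubeadm init` or `kubeadm join`, so its absence at this point simply means bootstrap has not completed yet. Purely to illustrate the kind of file the error refers to, the sketch below writes a minimal KubeletConfiguration; the field values are assumed examples, not what kubeadm would actually generate for this host.

    # Illustration only: a minimal KubeletConfiguration of the kind kubeadm writes to
    # /var/lib/kubelet/config.yaml. Values below are assumed examples, not taken from the log.
    from pathlib import Path

    MINIMAL_KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # matches the "cgroupDriver=systemd" line once kubelet starts
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    """

    def write_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
        # Needs root on a real node; shown only to make the sketch self-contained.
        target = Path(path)
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(MINIMAL_KUBELET_CONFIG)

    if __name__ == "__main__":
        write_config()
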
Sep 16 04:54:00.174923 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:54:00.174971 systemd[1]: kubelet.service: Consumed 72ms CPU time, 98.4M memory peak. Sep 16 04:54:00.176586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:54:00.317165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:54:00.325384 (kubelet)[2390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:54:00.362799 kubelet[2390]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:54:00.362799 kubelet[2390]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:54:00.362799 kubelet[2390]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:54:00.362799 kubelet[2390]: I0916 04:54:00.362376 2390 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:54:00.673839 kubelet[2390]: I0916 04:54:00.673793 2390 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:54:00.673839 kubelet[2390]: I0916 04:54:00.673813 2390 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:54:00.674030 kubelet[2390]: I0916 04:54:00.673993 2390 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:54:00.706127 kubelet[2390]: E0916 04:54:00.706083 2390 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://37.27.203.193:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 37.27.203.193:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:00.707826 kubelet[2390]: I0916 04:54:00.707696 2390 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:54:00.717376 kubelet[2390]: I0916 04:54:00.717365 2390 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:54:00.720782 kubelet[2390]: I0916 04:54:00.720769 2390 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:54:00.723748 kubelet[2390]: I0916 04:54:00.723708 2390 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:54:00.723883 kubelet[2390]: I0916 04:54:00.723737 2390 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-0-0-n-26104e5955","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:54:00.725140 kubelet[2390]: I0916 04:54:00.725113 2390 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:54:00.725140 kubelet[2390]: I0916 04:54:00.725131 2390 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:54:00.726016 kubelet[2390]: I0916 04:54:00.725990 2390 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:54:00.729838 kubelet[2390]: I0916 04:54:00.729785 2390 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:54:00.729838 kubelet[2390]: I0916 04:54:00.729810 2390 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:54:00.729838 kubelet[2390]: I0916 04:54:00.729831 2390 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:54:00.729838 kubelet[2390]: I0916 04:54:00.729842 2390 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:54:00.735417 kubelet[2390]: W0916 04:54:00.735222 2390 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.203.193:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.203.193:6443: connect: connection refused Sep 16 04:54:00.735417 kubelet[2390]: E0916 04:54:00.735265 2390 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.203.193:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.203.193:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:00.735417 kubelet[2390]: W0916 
04:54:00.735315 2390 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.203.193:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-n-26104e5955&limit=500&resourceVersion=0": dial tcp 37.27.203.193:6443: connect: connection refused Sep 16 04:54:00.735417 kubelet[2390]: E0916 04:54:00.735338 2390 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.203.193:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-n-26104e5955&limit=500&resourceVersion=0\": dial tcp 37.27.203.193:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:00.736667 kubelet[2390]: I0916 04:54:00.736337 2390 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:54:00.739232 kubelet[2390]: I0916 04:54:00.739203 2390 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:54:00.740121 kubelet[2390]: W0916 04:54:00.740106 2390 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 16 04:54:00.740932 kubelet[2390]: I0916 04:54:00.740732 2390 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:54:00.740932 kubelet[2390]: I0916 04:54:00.740756 2390 server.go:1287] "Started kubelet" Sep 16 04:54:00.747822 kubelet[2390]: I0916 04:54:00.746931 2390 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:54:00.751892 kubelet[2390]: I0916 04:54:00.751848 2390 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:54:00.752513 kubelet[2390]: I0916 04:54:00.752475 2390 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:54:00.755573 kubelet[2390]: E0916 04:54:00.752282 2390 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.203.193:6443/api/v1/namespaces/default/events\": dial tcp 37.27.203.193:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-0-0-n-26104e5955.1865aa44800ae063 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-0-0-n-26104e5955,UID:ci-4459-0-0-n-26104e5955,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-0-0-n-26104e5955,},FirstTimestamp:2025-09-16 04:54:00.740741219 +0000 UTC m=+0.413260723,LastTimestamp:2025-09-16 04:54:00.740741219 +0000 UTC m=+0.413260723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-0-0-n-26104e5955,}" Sep 16 04:54:00.757971 kubelet[2390]: I0916 04:54:00.757839 2390 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:54:00.759739 kubelet[2390]: I0916 04:54:00.758161 2390 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:54:00.759739 kubelet[2390]: I0916 04:54:00.758328 2390 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:54:00.759739 kubelet[2390]: E0916 04:54:00.758699 2390 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-n-26104e5955\" not found" Sep 16 04:54:00.759739 kubelet[2390]: I0916 
04:54:00.759597 2390 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:54:00.761144 kubelet[2390]: E0916 04:54:00.761112 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.203.193:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-n-26104e5955?timeout=10s\": dial tcp 37.27.203.193:6443: connect: connection refused" interval="200ms" Sep 16 04:54:00.761335 kubelet[2390]: I0916 04:54:00.761312 2390 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:54:00.761335 kubelet[2390]: I0916 04:54:00.761338 2390 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:54:00.761546 kubelet[2390]: W0916 04:54:00.761508 2390 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.203.193:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.203.193:6443: connect: connection refused Sep 16 04:54:00.761546 kubelet[2390]: E0916 04:54:00.761540 2390 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.203.193:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.203.193:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:00.764256 kubelet[2390]: I0916 04:54:00.763947 2390 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:54:00.764256 kubelet[2390]: I0916 04:54:00.763990 2390 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:54:00.767924 kubelet[2390]: I0916 04:54:00.767761 2390 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:54:00.770984 kubelet[2390]: E0916 04:54:00.770976 2390 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:54:00.781982 kubelet[2390]: I0916 04:54:00.781968 2390 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:54:00.783040 kubelet[2390]: I0916 04:54:00.782861 2390 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:54:00.783040 kubelet[2390]: I0916 04:54:00.782879 2390 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:54:00.783230 kubelet[2390]: I0916 04:54:00.783216 2390 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:54:00.784666 kubelet[2390]: I0916 04:54:00.784655 2390 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:54:00.784737 kubelet[2390]: I0916 04:54:00.784731 2390 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:54:00.784828 kubelet[2390]: I0916 04:54:00.784820 2390 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
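
Every "connection refused" above is the kubelet trying to reach https://37.27.203.193:6443 before any API server is serving; the kubelet itself is about to launch that API server as a static pod, so these errors are expected during control-plane bootstrap and stop once the pod is up. The sketch below probes the same endpoint: the address comes from the log, /readyz is the standard API-server readiness path, and disabling certificate verification is an assumption made only to keep the sketch self-contained.

    # Illustration: probe the API-server endpoint the kubelet is failing to reach above.
    # Uses only the standard library; certificate verification is disabled for the sketch.
    import ssl
    import urllib.error
    import urllib.request

    def probe(endpoint: str = "https://37.27.203.193:6443/readyz") -> str:
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        try:
            with urllib.request.urlopen(endpoint, context=ctx, timeout=3) as resp:
                return f"{resp.status} {resp.read().decode()}"
        except (urllib.error.URLError, OSError) as exc:
            return f"unreachable: {exc}"  # "connection refused" while bootstrap is still running

    if __name__ == "__main__":
        print(probe())
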
Sep 16 04:54:00.784970 kubelet[2390]: I0916 04:54:00.784894 2390 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:54:00.784970 kubelet[2390]: E0916 04:54:00.784929 2390 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:54:00.785706 kubelet[2390]: I0916 04:54:00.785698 2390 policy_none.go:49] "None policy: Start" Sep 16 04:54:00.787129 kubelet[2390]: I0916 04:54:00.786993 2390 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:54:00.787129 kubelet[2390]: I0916 04:54:00.787006 2390 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:54:00.787293 kubelet[2390]: W0916 04:54:00.787268 2390 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.203.193:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.203.193:6443: connect: connection refused Sep 16 04:54:00.787379 kubelet[2390]: E0916 04:54:00.787367 2390 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.203.193:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.203.193:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:00.795325 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 04:54:00.807815 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:54:00.810359 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 04:54:00.819059 kubelet[2390]: I0916 04:54:00.818988 2390 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:54:00.819498 kubelet[2390]: I0916 04:54:00.819453 2390 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:54:00.819498 kubelet[2390]: I0916 04:54:00.819463 2390 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:54:00.820629 kubelet[2390]: I0916 04:54:00.820621 2390 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:54:00.821539 kubelet[2390]: E0916 04:54:00.821378 2390 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 04:54:00.821539 kubelet[2390]: E0916 04:54:00.821487 2390 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-0-0-n-26104e5955\" not found" Sep 16 04:54:00.900709 systemd[1]: Created slice kubepods-burstable-pod3291b710c183557f2edc0fc27818ec15.slice - libcontainer container kubepods-burstable-pod3291b710c183557f2edc0fc27818ec15.slice. 
Sep 16 04:54:00.917963 kubelet[2390]: E0916 04:54:00.917904 2390 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-26104e5955\" not found" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:00.923138 kubelet[2390]: I0916 04:54:00.922249 2390 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:00.924925 kubelet[2390]: E0916 04:54:00.924261 2390 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.203.193:6443/api/v1/nodes\": dial tcp 37.27.203.193:6443: connect: connection refused" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:00.924322 systemd[1]: Created slice kubepods-burstable-pod1d03d5e41a407ba53ad179d7390ebf0c.slice - libcontainer container kubepods-burstable-pod1d03d5e41a407ba53ad179d7390ebf0c.slice. Sep 16 04:54:00.928349 kubelet[2390]: E0916 04:54:00.928289 2390 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-26104e5955\" not found" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:00.932045 systemd[1]: Created slice kubepods-burstable-pod2b729cc00af78b2e2bdfefb65979823a.slice - libcontainer container kubepods-burstable-pod2b729cc00af78b2e2bdfefb65979823a.slice. Sep 16 04:54:00.935026 kubelet[2390]: E0916 04:54:00.934969 2390 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-26104e5955\" not found" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:00.961744 kubelet[2390]: E0916 04:54:00.961690 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.203.193:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-n-26104e5955?timeout=10s\": dial tcp 37.27.203.193:6443: connect: connection refused" interval="400ms" Sep 16 04:54:00.963076 kubelet[2390]: I0916 04:54:00.962910 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d03d5e41a407ba53ad179d7390ebf0c-ca-certs\") pod \"kube-apiserver-ci-4459-0-0-n-26104e5955\" (UID: \"1d03d5e41a407ba53ad179d7390ebf0c\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-26104e5955" Sep 16 04:54:00.963076 kubelet[2390]: I0916 04:54:00.962953 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d03d5e41a407ba53ad179d7390ebf0c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-0-0-n-26104e5955\" (UID: \"1d03d5e41a407ba53ad179d7390ebf0c\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-26104e5955" Sep 16 04:54:00.963076 kubelet[2390]: I0916 04:54:00.962977 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b729cc00af78b2e2bdfefb65979823a-k8s-certs\") pod \"kube-controller-manager-ci-4459-0-0-n-26104e5955\" (UID: \"2b729cc00af78b2e2bdfefb65979823a\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:00.963076 kubelet[2390]: I0916 04:54:00.962998 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3291b710c183557f2edc0fc27818ec15-kubeconfig\") pod \"kube-scheduler-ci-4459-0-0-n-26104e5955\" (UID: \"3291b710c183557f2edc0fc27818ec15\") " 
pod="kube-system/kube-scheduler-ci-4459-0-0-n-26104e5955" Sep 16 04:54:00.963076 kubelet[2390]: I0916 04:54:00.963018 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d03d5e41a407ba53ad179d7390ebf0c-k8s-certs\") pod \"kube-apiserver-ci-4459-0-0-n-26104e5955\" (UID: \"1d03d5e41a407ba53ad179d7390ebf0c\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-26104e5955" Sep 16 04:54:00.963340 kubelet[2390]: I0916 04:54:00.963125 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b729cc00af78b2e2bdfefb65979823a-ca-certs\") pod \"kube-controller-manager-ci-4459-0-0-n-26104e5955\" (UID: \"2b729cc00af78b2e2bdfefb65979823a\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:00.963340 kubelet[2390]: I0916 04:54:00.963250 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2b729cc00af78b2e2bdfefb65979823a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-0-0-n-26104e5955\" (UID: \"2b729cc00af78b2e2bdfefb65979823a\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:00.963340 kubelet[2390]: I0916 04:54:00.963280 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b729cc00af78b2e2bdfefb65979823a-kubeconfig\") pod \"kube-controller-manager-ci-4459-0-0-n-26104e5955\" (UID: \"2b729cc00af78b2e2bdfefb65979823a\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:00.963340 kubelet[2390]: I0916 04:54:00.963337 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b729cc00af78b2e2bdfefb65979823a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-0-0-n-26104e5955\" (UID: \"2b729cc00af78b2e2bdfefb65979823a\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:01.127332 kubelet[2390]: I0916 04:54:01.127274 2390 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:01.127734 kubelet[2390]: E0916 04:54:01.127694 2390 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.203.193:6443/api/v1/nodes\": dial tcp 37.27.203.193:6443: connect: connection refused" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:01.220324 containerd[1595]: time="2025-09-16T04:54:01.220029786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-0-0-n-26104e5955,Uid:3291b710c183557f2edc0fc27818ec15,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:01.234158 containerd[1595]: time="2025-09-16T04:54:01.234064962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-0-0-n-26104e5955,Uid:1d03d5e41a407ba53ad179d7390ebf0c,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:01.236602 containerd[1595]: time="2025-09-16T04:54:01.236542681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-0-0-n-26104e5955,Uid:2b729cc00af78b2e2bdfefb65979823a,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:01.362228 kubelet[2390]: E0916 04:54:01.362163 2390 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://37.27.203.193:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-n-26104e5955?timeout=10s\": dial tcp 37.27.203.193:6443: connect: connection refused" interval="800ms" Sep 16 04:54:01.368255 containerd[1595]: time="2025-09-16T04:54:01.368220642Z" level=info msg="connecting to shim b2147c2b6bf17da13c0326233bc7331ca32f43bfbee164377cb9dff793d4ec00" address="unix:///run/containerd/s/929947e15e8f672652939984ba6d2e8894ae51f02c7480dea23809af43870664" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:01.370543 containerd[1595]: time="2025-09-16T04:54:01.370500102Z" level=info msg="connecting to shim d9fd4c3333d702de6f5732dfcf116ab19735e1ab7394685f6fdf10051bc4d9da" address="unix:///run/containerd/s/1b2cf45462154cfc706da49f5f73e5d833ee60871ab4994cda293b7577014f4a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:01.370814 containerd[1595]: time="2025-09-16T04:54:01.370796909Z" level=info msg="connecting to shim 9521d4ee85e3c282e8642fa90d215f269aba965bee9bd594c48e863c2ee004b1" address="unix:///run/containerd/s/b02ece756965b22c8a1501d69ea35d1a1554e88047cc9903bd31280b39b758da" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:01.452290 systemd[1]: Started cri-containerd-9521d4ee85e3c282e8642fa90d215f269aba965bee9bd594c48e863c2ee004b1.scope - libcontainer container 9521d4ee85e3c282e8642fa90d215f269aba965bee9bd594c48e863c2ee004b1. Sep 16 04:54:01.453709 systemd[1]: Started cri-containerd-b2147c2b6bf17da13c0326233bc7331ca32f43bfbee164377cb9dff793d4ec00.scope - libcontainer container b2147c2b6bf17da13c0326233bc7331ca32f43bfbee164377cb9dff793d4ec00. Sep 16 04:54:01.455260 systemd[1]: Started cri-containerd-d9fd4c3333d702de6f5732dfcf116ab19735e1ab7394685f6fdf10051bc4d9da.scope - libcontainer container d9fd4c3333d702de6f5732dfcf116ab19735e1ab7394685f6fdf10051bc4d9da. 
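
The reconciler_common entries above attach hostPath volumes (ca-certs, k8s-certs, kubeconfig, flexvolume-dir, usr-share-ca-certificates) to the three control-plane static pods, and the RunPodSandbox calls show the kubelet building those pods from /etc/kubernetes/manifests. To illustrate the shape of such a manifest only, the sketch below assembles a skeletal kube-apiserver static pod: the image tag, the volume names, and the system-node-critical priority class (named in the mirror-pod errors further down) are taken from the log, while the mount paths and everything else are assumed minimal examples rather than kubeadm's real output.

    # Illustrative skeleton of a control-plane static pod manifest of the kind the kubelet
    # reads from /etc/kubernetes/manifests (see "Adding static pod path" above).
    # Volume names mirror the reconciler entries in the log; mount paths and other fields are assumed.
    import json

    def apiserver_static_pod() -> dict:
        host_paths = {
            "ca-certs": "/etc/ssl/certs",                             # assumed mount point
            "k8s-certs": "/etc/kubernetes/pki",                       # assumed mount point
            "usr-share-ca-certificates": "/usr/share/ca-certificates",
        }
        return {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {"name": "kube-apiserver", "namespace": "kube-system"},
            "spec": {
                "hostNetwork": True,
                "priorityClassName": "system-node-critical",
                "containers": [{
                    "name": "kube-apiserver",
                    "image": "registry.k8s.io/kube-apiserver:v1.32.9",
                    "volumeMounts": [
                        {"name": n, "mountPath": p, "readOnly": True}
                        for n, p in host_paths.items()
                    ],
                }],
                "volumes": [
                    {"name": n, "hostPath": {"path": p, "type": "DirectoryOrCreate"}}
                    for n, p in host_paths.items()
                ],
            },
        }

    if __name__ == "__main__":
        print(json.dumps(apiserver_static_pod(), indent=2))
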
Sep 16 04:54:01.513720 containerd[1595]: time="2025-09-16T04:54:01.513481094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-0-0-n-26104e5955,Uid:3291b710c183557f2edc0fc27818ec15,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2147c2b6bf17da13c0326233bc7331ca32f43bfbee164377cb9dff793d4ec00\"" Sep 16 04:54:01.517412 containerd[1595]: time="2025-09-16T04:54:01.517381793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-0-0-n-26104e5955,Uid:1d03d5e41a407ba53ad179d7390ebf0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9fd4c3333d702de6f5732dfcf116ab19735e1ab7394685f6fdf10051bc4d9da\"" Sep 16 04:54:01.518965 containerd[1595]: time="2025-09-16T04:54:01.518944323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-0-0-n-26104e5955,Uid:2b729cc00af78b2e2bdfefb65979823a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9521d4ee85e3c282e8642fa90d215f269aba965bee9bd594c48e863c2ee004b1\"" Sep 16 04:54:01.525567 containerd[1595]: time="2025-09-16T04:54:01.525543591Z" level=info msg="CreateContainer within sandbox \"9521d4ee85e3c282e8642fa90d215f269aba965bee9bd594c48e863c2ee004b1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:54:01.526069 containerd[1595]: time="2025-09-16T04:54:01.526048530Z" level=info msg="CreateContainer within sandbox \"b2147c2b6bf17da13c0326233bc7331ca32f43bfbee164377cb9dff793d4ec00\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:54:01.527729 containerd[1595]: time="2025-09-16T04:54:01.527707789Z" level=info msg="CreateContainer within sandbox \"d9fd4c3333d702de6f5732dfcf116ab19735e1ab7394685f6fdf10051bc4d9da\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:54:01.529618 kubelet[2390]: I0916 04:54:01.529374 2390 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:01.529618 kubelet[2390]: E0916 04:54:01.529601 2390 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.203.193:6443/api/v1/nodes\": dial tcp 37.27.203.193:6443: connect: connection refused" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:01.541838 containerd[1595]: time="2025-09-16T04:54:01.541665210Z" level=info msg="Container 26807975d31d72569976878195b7c3ba02df9a7626882e1e0c05ef2e95a0c284: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:01.543516 containerd[1595]: time="2025-09-16T04:54:01.543497245Z" level=info msg="Container 376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:01.548446 containerd[1595]: time="2025-09-16T04:54:01.548002747Z" level=info msg="Container bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:01.555683 containerd[1595]: time="2025-09-16T04:54:01.555654583Z" level=info msg="CreateContainer within sandbox \"9521d4ee85e3c282e8642fa90d215f269aba965bee9bd594c48e863c2ee004b1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594\"" Sep 16 04:54:01.557201 containerd[1595]: time="2025-09-16T04:54:01.556372366Z" level=info msg="StartContainer for \"376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594\"" Sep 16 04:54:01.557201 containerd[1595]: time="2025-09-16T04:54:01.556986189Z" level=info msg="connecting to shim 
376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594" address="unix:///run/containerd/s/b02ece756965b22c8a1501d69ea35d1a1554e88047cc9903bd31280b39b758da" protocol=ttrpc version=3 Sep 16 04:54:01.560919 containerd[1595]: time="2025-09-16T04:54:01.560884668Z" level=info msg="CreateContainer within sandbox \"d9fd4c3333d702de6f5732dfcf116ab19735e1ab7394685f6fdf10051bc4d9da\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"26807975d31d72569976878195b7c3ba02df9a7626882e1e0c05ef2e95a0c284\"" Sep 16 04:54:01.561403 containerd[1595]: time="2025-09-16T04:54:01.561386765Z" level=info msg="StartContainer for \"26807975d31d72569976878195b7c3ba02df9a7626882e1e0c05ef2e95a0c284\"" Sep 16 04:54:01.561999 containerd[1595]: time="2025-09-16T04:54:01.561955729Z" level=info msg="connecting to shim 26807975d31d72569976878195b7c3ba02df9a7626882e1e0c05ef2e95a0c284" address="unix:///run/containerd/s/1b2cf45462154cfc706da49f5f73e5d833ee60871ab4994cda293b7577014f4a" protocol=ttrpc version=3 Sep 16 04:54:01.562363 containerd[1595]: time="2025-09-16T04:54:01.562328693Z" level=info msg="CreateContainer within sandbox \"b2147c2b6bf17da13c0326233bc7331ca32f43bfbee164377cb9dff793d4ec00\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e\"" Sep 16 04:54:01.562976 containerd[1595]: time="2025-09-16T04:54:01.562921331Z" level=info msg="StartContainer for \"bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e\"" Sep 16 04:54:01.563575 containerd[1595]: time="2025-09-16T04:54:01.563559290Z" level=info msg="connecting to shim bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e" address="unix:///run/containerd/s/929947e15e8f672652939984ba6d2e8894ae51f02c7480dea23809af43870664" protocol=ttrpc version=3 Sep 16 04:54:01.573304 systemd[1]: Started cri-containerd-376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594.scope - libcontainer container 376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594. Sep 16 04:54:01.577455 systemd[1]: Started cri-containerd-26807975d31d72569976878195b7c3ba02df9a7626882e1e0c05ef2e95a0c284.scope - libcontainer container 26807975d31d72569976878195b7c3ba02df9a7626882e1e0c05ef2e95a0c284. Sep 16 04:54:01.581380 systemd[1]: Started cri-containerd-bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e.scope - libcontainer container bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e. 
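
At this point containerd has been asked to create three sandboxes and systemd has started a cri-containerd-*.scope for each container. As an illustration of how that state could be inspected from the node, the sketch below shells out to crictl, the standard CRI CLI; the runtime-endpoint socket path is the usual containerd default but is an assumption for this particular host.

    # Illustration: list the sandboxes and containers the log shows containerd starting.
    # Assumes crictl is installed; the socket path is the common containerd default.
    import subprocess

    ENDPOINT = "unix:///run/containerd/containerd.sock"

    def crictl(*args: str) -> str:
        cmd = ["crictl", "--runtime-endpoint", ENDPOINT, *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        print(crictl("pods"))      # the kube-apiserver / controller-manager / scheduler sandboxes
        print(crictl("ps", "-a"))  # their containers, matching the StartContainer entries below
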
Sep 16 04:54:01.624503 kubelet[2390]: W0916 04:54:01.624407 2390 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.203.193:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.203.193:6443: connect: connection refused Sep 16 04:54:01.624619 kubelet[2390]: E0916 04:54:01.624509 2390 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.203.193:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.203.193:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:01.639014 containerd[1595]: time="2025-09-16T04:54:01.638977581Z" level=info msg="StartContainer for \"376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594\" returns successfully" Sep 16 04:54:01.640533 containerd[1595]: time="2025-09-16T04:54:01.640514215Z" level=info msg="StartContainer for \"26807975d31d72569976878195b7c3ba02df9a7626882e1e0c05ef2e95a0c284\" returns successfully" Sep 16 04:54:01.642774 containerd[1595]: time="2025-09-16T04:54:01.642747098Z" level=info msg="StartContainer for \"bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e\" returns successfully" Sep 16 04:54:01.795171 kubelet[2390]: E0916 04:54:01.795097 2390 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-26104e5955\" not found" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:01.796033 kubelet[2390]: E0916 04:54:01.796013 2390 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-26104e5955\" not found" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:01.796125 kubelet[2390]: W0916 04:54:01.796093 2390 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.203.193:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.203.193:6443: connect: connection refused Sep 16 04:54:01.796144 kubelet[2390]: E0916 04:54:01.796135 2390 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.203.193:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.203.193:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:01.797820 kubelet[2390]: E0916 04:54:01.797807 2390 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-26104e5955\" not found" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:02.333381 kubelet[2390]: I0916 04:54:02.333339 2390 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:02.801075 kubelet[2390]: E0916 04:54:02.800995 2390 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-26104e5955\" not found" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:02.801696 kubelet[2390]: E0916 04:54:02.801676 2390 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-26104e5955\" not found" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:03.022837 kubelet[2390]: E0916 04:54:03.022762 2390 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" 
err="nodes \"ci-4459-0-0-n-26104e5955\" not found" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:03.085727 kubelet[2390]: I0916 04:54:03.085686 2390 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:03.158993 kubelet[2390]: I0916 04:54:03.158950 2390 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-0-0-n-26104e5955" Sep 16 04:54:03.167122 kubelet[2390]: E0916 04:54:03.167082 2390 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-0-0-n-26104e5955\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-0-0-n-26104e5955" Sep 16 04:54:03.167296 kubelet[2390]: I0916 04:54:03.167263 2390 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-n-26104e5955" Sep 16 04:54:03.168995 kubelet[2390]: E0916 04:54:03.168452 2390 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-0-0-n-26104e5955\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-0-0-n-26104e5955" Sep 16 04:54:03.168995 kubelet[2390]: I0916 04:54:03.168488 2390 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:03.169681 kubelet[2390]: E0916 04:54:03.169655 2390 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-0-0-n-26104e5955\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:03.734724 kubelet[2390]: I0916 04:54:03.734627 2390 apiserver.go:52] "Watching apiserver" Sep 16 04:54:03.761787 kubelet[2390]: I0916 04:54:03.761727 2390 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:54:04.236816 kubelet[2390]: I0916 04:54:04.236617 2390 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:05.268361 systemd[1]: Reload requested from client PID 2662 ('systemctl') (unit session-7.scope)... Sep 16 04:54:05.268375 systemd[1]: Reloading... Sep 16 04:54:05.364228 zram_generator::config[2705]: No configuration found. Sep 16 04:54:05.540242 systemd[1]: Reloading finished in 271 ms. Sep 16 04:54:05.578394 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:54:05.597234 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:54:05.597451 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:54:05.597503 systemd[1]: kubelet.service: Consumed 741ms CPU time, 128.4M memory peak. Sep 16 04:54:05.599071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:54:05.717011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:54:05.726387 (kubelet)[2757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:54:05.773259 kubelet[2757]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 16 04:54:05.773528 kubelet[2757]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:54:05.773552 kubelet[2757]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:54:05.773667 kubelet[2757]: I0916 04:54:05.773646 2757 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:54:05.781677 kubelet[2757]: I0916 04:54:05.781657 2757 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:54:05.782398 kubelet[2757]: I0916 04:54:05.781763 2757 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:54:05.782398 kubelet[2757]: I0916 04:54:05.781929 2757 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:54:05.783876 kubelet[2757]: I0916 04:54:05.783863 2757 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 16 04:54:05.785412 kubelet[2757]: I0916 04:54:05.785398 2757 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:54:05.788364 kubelet[2757]: I0916 04:54:05.788353 2757 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:54:05.790236 kubelet[2757]: I0916 04:54:05.790225 2757 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 04:54:05.790461 kubelet[2757]: I0916 04:54:05.790409 2757 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:54:05.790608 kubelet[2757]: I0916 04:54:05.790498 2757 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4459-0-0-n-26104e5955","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:54:05.790708 kubelet[2757]: I0916 04:54:05.790702 2757 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:54:05.790738 kubelet[2757]: I0916 04:54:05.790735 2757 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:54:05.790793 kubelet[2757]: I0916 04:54:05.790790 2757 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:54:05.790915 kubelet[2757]: I0916 04:54:05.790910 2757 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:54:05.791313 kubelet[2757]: I0916 04:54:05.791289 2757 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:54:05.791376 kubelet[2757]: I0916 04:54:05.791371 2757 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:54:05.791420 kubelet[2757]: I0916 04:54:05.791416 2757 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:54:05.793117 kubelet[2757]: I0916 04:54:05.793108 2757 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:54:05.793427 kubelet[2757]: I0916 04:54:05.793390 2757 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:54:05.793806 kubelet[2757]: I0916 04:54:05.793786 2757 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:54:05.793872 kubelet[2757]: I0916 04:54:05.793867 2757 server.go:1287] "Started kubelet" Sep 16 04:54:05.809065 kubelet[2757]: I0916 04:54:05.809005 2757 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:54:05.816905 kubelet[2757]: E0916 04:54:05.816891 2757 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:54:05.816984 kubelet[2757]: I0916 04:54:05.810267 2757 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:54:05.817133 kubelet[2757]: I0916 04:54:05.817124 2757 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:54:05.818140 kubelet[2757]: I0916 04:54:05.818130 2757 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:54:05.819247 kubelet[2757]: I0916 04:54:05.810237 2757 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:54:05.819876 kubelet[2757]: I0916 04:54:05.819866 2757 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:54:05.821862 kubelet[2757]: I0916 04:54:05.821851 2757 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:54:05.822075 kubelet[2757]: I0916 04:54:05.822042 2757 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:54:05.822207 kubelet[2757]: I0916 04:54:05.822200 2757 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:54:05.824211 kubelet[2757]: I0916 04:54:05.824152 2757 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:54:05.824354 kubelet[2757]: I0916 04:54:05.824323 2757 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:54:05.826982 kubelet[2757]: I0916 04:54:05.826943 2757 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:54:05.833918 kubelet[2757]: I0916 04:54:05.833699 2757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:54:05.837247 kubelet[2757]: I0916 04:54:05.837049 2757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:54:05.837247 kubelet[2757]: I0916 04:54:05.837116 2757 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:54:05.837247 kubelet[2757]: I0916 04:54:05.837136 2757 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
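
The nodeConfig dump a few entries back repeats the default hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. The sketch below turns those relative thresholds into absolute numbers for a hypothetical node; the capacities are invented purely for the illustration, while the threshold values themselves are the ones in the log (imagefs.inodesFree works the same way as nodefs.inodesFree).

    # Worked example for the HardEvictionThresholds in the nodeConfig dump above.
    # Node capacities are invented for illustration; thresholds are from the log.
    GiB = 1024 ** 3

    node = {
        "memory_bytes": 4 * GiB,     # assumed node RAM
        "nodefs_bytes": 40 * GiB,    # assumed root filesystem size
        "imagefs_bytes": 40 * GiB,   # assumed image filesystem size (same disk here)
        "nodefs_inodes": 2_621_440,  # assumed inode count
    }

    thresholds = {
        "memory.available":  ("100Mi", 100 * 1024 ** 2),              # absolute quantity
        "nodefs.available":  ("10%",  0.10 * node["nodefs_bytes"]),
        "imagefs.available": ("15%",  0.15 * node["imagefs_bytes"]),
        "nodefs.inodesFree": ("5%",   0.05 * node["nodefs_inodes"]),
    }

    for signal, (raw, absolute) in thresholds.items():
        unit = "inodes" if "inodes" in signal else "bytes"
        print(f"evict pods when {signal} < {raw} (= {absolute:,.0f} {unit})")
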
Sep 16 04:54:05.837247 kubelet[2757]: I0916 04:54:05.837143 2757 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:54:05.839062 kubelet[2757]: E0916 04:54:05.839025 2757 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:54:05.887209 kubelet[2757]: I0916 04:54:05.887122 2757 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:54:05.887209 kubelet[2757]: I0916 04:54:05.887142 2757 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:54:05.887330 kubelet[2757]: I0916 04:54:05.887252 2757 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:54:05.887681 kubelet[2757]: I0916 04:54:05.887417 2757 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:54:05.887681 kubelet[2757]: I0916 04:54:05.887434 2757 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:54:05.887681 kubelet[2757]: I0916 04:54:05.887454 2757 policy_none.go:49] "None policy: Start" Sep 16 04:54:05.887681 kubelet[2757]: I0916 04:54:05.887475 2757 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:54:05.887681 kubelet[2757]: I0916 04:54:05.887486 2757 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:54:05.887681 kubelet[2757]: I0916 04:54:05.887582 2757 state_mem.go:75] "Updated machine memory state" Sep 16 04:54:05.893843 kubelet[2757]: I0916 04:54:05.893815 2757 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:54:05.894013 kubelet[2757]: I0916 04:54:05.893993 2757 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:54:05.894033 kubelet[2757]: I0916 04:54:05.894012 2757 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:54:05.899813 kubelet[2757]: E0916 04:54:05.898617 2757 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 16 04:54:05.901731 kubelet[2757]: I0916 04:54:05.901576 2757 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:54:05.944942 kubelet[2757]: I0916 04:54:05.944696 2757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-0-0-n-26104e5955" Sep 16 04:54:05.945193 kubelet[2757]: I0916 04:54:05.944611 2757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-n-26104e5955" Sep 16 04:54:05.945426 kubelet[2757]: I0916 04:54:05.945399 2757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:05.951199 kubelet[2757]: E0916 04:54:05.951141 2757 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-0-0-n-26104e5955\" already exists" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:06.008063 kubelet[2757]: I0916 04:54:06.007983 2757 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:06.020541 kubelet[2757]: I0916 04:54:06.020471 2757 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:06.020694 kubelet[2757]: I0916 04:54:06.020596 2757 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-0-0-n-26104e5955" Sep 16 04:54:06.123434 kubelet[2757]: I0916 04:54:06.123355 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b729cc00af78b2e2bdfefb65979823a-ca-certs\") pod \"kube-controller-manager-ci-4459-0-0-n-26104e5955\" (UID: \"2b729cc00af78b2e2bdfefb65979823a\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:06.123434 kubelet[2757]: I0916 04:54:06.123461 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b729cc00af78b2e2bdfefb65979823a-kubeconfig\") pod \"kube-controller-manager-ci-4459-0-0-n-26104e5955\" (UID: \"2b729cc00af78b2e2bdfefb65979823a\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:06.123896 kubelet[2757]: I0916 04:54:06.123530 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b729cc00af78b2e2bdfefb65979823a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-0-0-n-26104e5955\" (UID: \"2b729cc00af78b2e2bdfefb65979823a\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:06.123998 kubelet[2757]: I0916 04:54:06.123959 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3291b710c183557f2edc0fc27818ec15-kubeconfig\") pod \"kube-scheduler-ci-4459-0-0-n-26104e5955\" (UID: \"3291b710c183557f2edc0fc27818ec15\") " pod="kube-system/kube-scheduler-ci-4459-0-0-n-26104e5955" Sep 16 04:54:06.124081 kubelet[2757]: I0916 04:54:06.124008 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d03d5e41a407ba53ad179d7390ebf0c-k8s-certs\") pod \"kube-apiserver-ci-4459-0-0-n-26104e5955\" (UID: \"1d03d5e41a407ba53ad179d7390ebf0c\") " 
pod="kube-system/kube-apiserver-ci-4459-0-0-n-26104e5955" Sep 16 04:54:06.124122 kubelet[2757]: I0916 04:54:06.124090 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2b729cc00af78b2e2bdfefb65979823a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-0-0-n-26104e5955\" (UID: \"2b729cc00af78b2e2bdfefb65979823a\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:06.124430 kubelet[2757]: I0916 04:54:06.124175 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b729cc00af78b2e2bdfefb65979823a-k8s-certs\") pod \"kube-controller-manager-ci-4459-0-0-n-26104e5955\" (UID: \"2b729cc00af78b2e2bdfefb65979823a\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" Sep 16 04:54:06.124430 kubelet[2757]: I0916 04:54:06.124280 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d03d5e41a407ba53ad179d7390ebf0c-ca-certs\") pod \"kube-apiserver-ci-4459-0-0-n-26104e5955\" (UID: \"1d03d5e41a407ba53ad179d7390ebf0c\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-26104e5955" Sep 16 04:54:06.124430 kubelet[2757]: I0916 04:54:06.124355 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d03d5e41a407ba53ad179d7390ebf0c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-0-0-n-26104e5955\" (UID: \"1d03d5e41a407ba53ad179d7390ebf0c\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-26104e5955" Sep 16 04:54:06.282173 sudo[2792]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 16 04:54:06.282447 sudo[2792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 16 04:54:06.663102 sudo[2792]: pam_unix(sudo:session): session closed for user root Sep 16 04:54:06.793334 kubelet[2757]: I0916 04:54:06.793271 2757 apiserver.go:52] "Watching apiserver" Sep 16 04:54:06.822456 kubelet[2757]: I0916 04:54:06.822379 2757 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:54:06.906955 kubelet[2757]: I0916 04:54:06.906821 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-0-0-n-26104e5955" podStartSLOduration=1.906787362 podStartE2EDuration="1.906787362s" podCreationTimestamp="2025-09-16 04:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:54:06.905875452 +0000 UTC m=+1.175038358" watchObservedRunningTime="2025-09-16 04:54:06.906787362 +0000 UTC m=+1.175950268" Sep 16 04:54:06.933285 kubelet[2757]: I0916 04:54:06.932032 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-0-0-n-26104e5955" podStartSLOduration=1.9320084720000001 podStartE2EDuration="1.932008472s" podCreationTimestamp="2025-09-16 04:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:54:06.9180457 +0000 UTC m=+1.187208656" watchObservedRunningTime="2025-09-16 04:54:06.932008472 +0000 UTC m=+1.201171428" Sep 16 04:54:06.946549 
kubelet[2757]: I0916 04:54:06.946470 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-26104e5955" podStartSLOduration=2.9462691149999998 podStartE2EDuration="2.946269115s" podCreationTimestamp="2025-09-16 04:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:54:06.932957005 +0000 UTC m=+1.202119952" watchObservedRunningTime="2025-09-16 04:54:06.946269115 +0000 UTC m=+1.215432061" Sep 16 04:54:08.358414 sudo[1824]: pam_unix(sudo:session): session closed for user root Sep 16 04:54:08.516387 sshd[1823]: Connection closed by 139.178.89.65 port 43884 Sep 16 04:54:08.517807 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:08.523310 systemd[1]: sshd@6-37.27.203.193:22-139.178.89.65:43884.service: Deactivated successfully. Sep 16 04:54:08.526490 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 04:54:08.526758 systemd[1]: session-7.scope: Consumed 4.789s CPU time, 217.9M memory peak. Sep 16 04:54:08.528545 systemd-logind[1567]: Session 7 logged out. Waiting for processes to exit. Sep 16 04:54:08.530581 systemd-logind[1567]: Removed session 7. Sep 16 04:54:09.991612 update_engine[1568]: I20250916 04:54:09.991525 1568 update_attempter.cc:509] Updating boot flags... Sep 16 04:54:10.538263 kubelet[2757]: I0916 04:54:10.537686 2757 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:54:10.538840 containerd[1595]: time="2025-09-16T04:54:10.538154412Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 04:54:10.539435 kubelet[2757]: I0916 04:54:10.539383 2757 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:54:11.098757 systemd[1]: Created slice kubepods-besteffort-pod6c9537bf_6128_4c5d_8c27_66e3c549e676.slice - libcontainer container kubepods-besteffort-pod6c9537bf_6128_4c5d_8c27_66e3c549e676.slice. Sep 16 04:54:11.136977 systemd[1]: Created slice kubepods-burstable-podbd1b4b60_c763_4a97_b587_14cd802104d8.slice - libcontainer container kubepods-burstable-podbd1b4b60_c763_4a97_b587_14cd802104d8.slice. 
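
The two "Created slice" entries above show the naming scheme the systemd cgroup driver uses in this log: QoS class plus the pod UID with its dashes turned into underscores (the UIDs themselves appear in the volume entries that follow). A small illustrative Python sketch, with UIDs and expected names copied from the log, that reconstructs both names:

# Rebuild the pod slice names seen above from QoS class + pod UID; the UID's
# dashes become underscores in the slice name.
def pod_slice(qos_class: str, pod_uid: str) -> str:
    return f'kubepods-{qos_class}-pod{pod_uid.replace("-", "_")}.slice'

assert pod_slice("besteffort", "6c9537bf-6128-4c5d-8c27-66e3c549e676") == (
    "kubepods-besteffort-pod6c9537bf_6128_4c5d_8c27_66e3c549e676.slice")
assert pod_slice("burstable", "bd1b4b60-c763-4a97-b587-14cd802104d8") == (
    "kubepods-burstable-podbd1b4b60_c763_4a97_b587_14cd802104d8.slice")
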
Sep 16 04:54:11.138590 kubelet[2757]: W0916 04:54:11.138176 2757 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4459-0-0-n-26104e5955" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-0-0-n-26104e5955' and this object Sep 16 04:54:11.138661 kubelet[2757]: E0916 04:54:11.138627 2757 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4459-0-0-n-26104e5955\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-0-0-n-26104e5955' and this object" logger="UnhandledError" Sep 16 04:54:11.145915 kubelet[2757]: W0916 04:54:11.145360 2757 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4459-0-0-n-26104e5955" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-0-0-n-26104e5955' and this object Sep 16 04:54:11.145915 kubelet[2757]: E0916 04:54:11.145391 2757 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4459-0-0-n-26104e5955\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-0-0-n-26104e5955' and this object" logger="UnhandledError" Sep 16 04:54:11.146175 kubelet[2757]: W0916 04:54:11.146155 2757 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4459-0-0-n-26104e5955" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-0-0-n-26104e5955' and this object Sep 16 04:54:11.146313 kubelet[2757]: E0916 04:54:11.146295 2757 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4459-0-0-n-26104e5955\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-0-0-n-26104e5955' and this object" logger="UnhandledError" Sep 16 04:54:11.158529 kubelet[2757]: I0916 04:54:11.158496 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6c9537bf-6128-4c5d-8c27-66e3c549e676-kube-proxy\") pod \"kube-proxy-c46jl\" (UID: \"6c9537bf-6128-4c5d-8c27-66e3c549e676\") " pod="kube-system/kube-proxy-c46jl" Sep 16 04:54:11.158586 kubelet[2757]: I0916 04:54:11.158548 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c9537bf-6128-4c5d-8c27-66e3c549e676-xtables-lock\") pod \"kube-proxy-c46jl\" (UID: \"6c9537bf-6128-4c5d-8c27-66e3c549e676\") " pod="kube-system/kube-proxy-c46jl" Sep 16 04:54:11.159350 kubelet[2757]: I0916 04:54:11.159276 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c9537bf-6128-4c5d-8c27-66e3c549e676-lib-modules\") pod \"kube-proxy-c46jl\" (UID: \"6c9537bf-6128-4c5d-8c27-66e3c549e676\") " pod="kube-system/kube-proxy-c46jl" Sep 16 04:54:11.159381 kubelet[2757]: I0916 04:54:11.159365 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-cilium-run\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.159425 kubelet[2757]: I0916 04:54:11.159390 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-host-proc-sys-net\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.159425 kubelet[2757]: I0916 04:54:11.159418 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd1b4b60-c763-4a97-b587-14cd802104d8-cilium-config-path\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.159452 kubelet[2757]: I0916 04:54:11.159438 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-host-proc-sys-kernel\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.159490 kubelet[2757]: I0916 04:54:11.159461 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr6sh\" (UniqueName: \"kubernetes.io/projected/6c9537bf-6128-4c5d-8c27-66e3c549e676-kube-api-access-hr6sh\") pod \"kube-proxy-c46jl\" (UID: \"6c9537bf-6128-4c5d-8c27-66e3c549e676\") " pod="kube-system/kube-proxy-c46jl" Sep 16 04:54:11.159508 kubelet[2757]: I0916 04:54:11.159488 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd1b4b60-c763-4a97-b587-14cd802104d8-clustermesh-secrets\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.159522 kubelet[2757]: I0916 04:54:11.159508 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-etc-cni-netd\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.159673 kubelet[2757]: I0916 04:54:11.159527 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd1b4b60-c763-4a97-b587-14cd802104d8-hubble-tls\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.159673 kubelet[2757]: I0916 04:54:11.159558 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq8sc\" (UniqueName: 
\"kubernetes.io/projected/bd1b4b60-c763-4a97-b587-14cd802104d8-kube-api-access-jq8sc\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.159673 kubelet[2757]: I0916 04:54:11.159581 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-bpf-maps\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.159673 kubelet[2757]: I0916 04:54:11.159600 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-cilium-cgroup\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.159673 kubelet[2757]: I0916 04:54:11.159611 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-lib-modules\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.159673 kubelet[2757]: I0916 04:54:11.159624 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-cni-path\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.159767 kubelet[2757]: I0916 04:54:11.159634 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-xtables-lock\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.159767 kubelet[2757]: I0916 04:54:11.159651 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-hostproc\") pod \"cilium-6rrzd\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " pod="kube-system/cilium-6rrzd" Sep 16 04:54:11.269929 kubelet[2757]: E0916 04:54:11.269893 2757 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 16 04:54:11.270370 kubelet[2757]: E0916 04:54:11.270090 2757 projected.go:194] Error preparing data for projected volume kube-api-access-hr6sh for pod kube-system/kube-proxy-c46jl: configmap "kube-root-ca.crt" not found Sep 16 04:54:11.270370 kubelet[2757]: E0916 04:54:11.270149 2757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c9537bf-6128-4c5d-8c27-66e3c549e676-kube-api-access-hr6sh podName:6c9537bf-6128-4c5d-8c27-66e3c549e676 nodeName:}" failed. No retries permitted until 2025-09-16 04:54:11.770130017 +0000 UTC m=+6.039292953 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hr6sh" (UniqueName: "kubernetes.io/projected/6c9537bf-6128-4c5d-8c27-66e3c549e676-kube-api-access-hr6sh") pod "kube-proxy-c46jl" (UID: "6c9537bf-6128-4c5d-8c27-66e3c549e676") : configmap "kube-root-ca.crt" not found Sep 16 04:54:11.277076 kubelet[2757]: E0916 04:54:11.277014 2757 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 16 04:54:11.277076 kubelet[2757]: E0916 04:54:11.277039 2757 projected.go:194] Error preparing data for projected volume kube-api-access-jq8sc for pod kube-system/cilium-6rrzd: configmap "kube-root-ca.crt" not found Sep 16 04:54:11.277355 kubelet[2757]: E0916 04:54:11.277245 2757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd1b4b60-c763-4a97-b587-14cd802104d8-kube-api-access-jq8sc podName:bd1b4b60-c763-4a97-b587-14cd802104d8 nodeName:}" failed. No retries permitted until 2025-09-16 04:54:11.777173624 +0000 UTC m=+6.046336560 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jq8sc" (UniqueName: "kubernetes.io/projected/bd1b4b60-c763-4a97-b587-14cd802104d8-kube-api-access-jq8sc") pod "cilium-6rrzd" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8") : configmap "kube-root-ca.crt" not found Sep 16 04:54:11.601700 systemd[1]: Created slice kubepods-besteffort-pod4f4369ee_803b_4f87_afa7_14257e03f19c.slice - libcontainer container kubepods-besteffort-pod4f4369ee_803b_4f87_afa7_14257e03f19c.slice. Sep 16 04:54:11.664254 kubelet[2757]: I0916 04:54:11.664083 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngqv8\" (UniqueName: \"kubernetes.io/projected/4f4369ee-803b-4f87-afa7-14257e03f19c-kube-api-access-ngqv8\") pod \"cilium-operator-6c4d7847fc-kfhvh\" (UID: \"4f4369ee-803b-4f87-afa7-14257e03f19c\") " pod="kube-system/cilium-operator-6c4d7847fc-kfhvh" Sep 16 04:54:11.665131 kubelet[2757]: I0916 04:54:11.664668 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f4369ee-803b-4f87-afa7-14257e03f19c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-kfhvh\" (UID: \"4f4369ee-803b-4f87-afa7-14257e03f19c\") " pod="kube-system/cilium-operator-6c4d7847fc-kfhvh" Sep 16 04:54:12.010072 containerd[1595]: time="2025-09-16T04:54:12.009945846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c46jl,Uid:6c9537bf-6128-4c5d-8c27-66e3c549e676,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:12.025919 containerd[1595]: time="2025-09-16T04:54:12.025888843Z" level=info msg="connecting to shim 53c6d71fc9dc41be7ad79aba36993b283e93d4236192d0b949a33e196a4cf021" address="unix:///run/containerd/s/1dfb64a672ea648464a0853341f8ca1609ab1a2880097c52c5e6ecadabc1c3b2" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:12.059536 systemd[1]: Started cri-containerd-53c6d71fc9dc41be7ad79aba36993b283e93d4236192d0b949a33e196a4cf021.scope - libcontainer container 53c6d71fc9dc41be7ad79aba36993b283e93d4236192d0b949a33e196a4cf021. 
Sep 16 04:54:12.094949 containerd[1595]: time="2025-09-16T04:54:12.094862391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c46jl,Uid:6c9537bf-6128-4c5d-8c27-66e3c549e676,Namespace:kube-system,Attempt:0,} returns sandbox id \"53c6d71fc9dc41be7ad79aba36993b283e93d4236192d0b949a33e196a4cf021\"" Sep 16 04:54:12.100380 containerd[1595]: time="2025-09-16T04:54:12.100288621Z" level=info msg="CreateContainer within sandbox \"53c6d71fc9dc41be7ad79aba36993b283e93d4236192d0b949a33e196a4cf021\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 04:54:12.115276 containerd[1595]: time="2025-09-16T04:54:12.115224026Z" level=info msg="Container 35173f78f5224a6926455add6407922f5f098b18df2081121db7295e8359e9f8: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:12.126487 containerd[1595]: time="2025-09-16T04:54:12.126427216Z" level=info msg="CreateContainer within sandbox \"53c6d71fc9dc41be7ad79aba36993b283e93d4236192d0b949a33e196a4cf021\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"35173f78f5224a6926455add6407922f5f098b18df2081121db7295e8359e9f8\"" Sep 16 04:54:12.127232 containerd[1595]: time="2025-09-16T04:54:12.127158326Z" level=info msg="StartContainer for \"35173f78f5224a6926455add6407922f5f098b18df2081121db7295e8359e9f8\"" Sep 16 04:54:12.129454 containerd[1595]: time="2025-09-16T04:54:12.129350182Z" level=info msg="connecting to shim 35173f78f5224a6926455add6407922f5f098b18df2081121db7295e8359e9f8" address="unix:///run/containerd/s/1dfb64a672ea648464a0853341f8ca1609ab1a2880097c52c5e6ecadabc1c3b2" protocol=ttrpc version=3 Sep 16 04:54:12.163459 systemd[1]: Started cri-containerd-35173f78f5224a6926455add6407922f5f098b18df2081121db7295e8359e9f8.scope - libcontainer container 35173f78f5224a6926455add6407922f5f098b18df2081121db7295e8359e9f8. Sep 16 04:54:12.209969 containerd[1595]: time="2025-09-16T04:54:12.209106329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kfhvh,Uid:4f4369ee-803b-4f87-afa7-14257e03f19c,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:12.224877 containerd[1595]: time="2025-09-16T04:54:12.224840321Z" level=info msg="StartContainer for \"35173f78f5224a6926455add6407922f5f098b18df2081121db7295e8359e9f8\" returns successfully" Sep 16 04:54:12.249659 containerd[1595]: time="2025-09-16T04:54:12.249480401Z" level=info msg="connecting to shim 82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054" address="unix:///run/containerd/s/4ee6a397b1f0b434bd342f930064c3f711f125bb11c048cfc3043a9d98e6877a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:12.261387 kubelet[2757]: E0916 04:54:12.261288 2757 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Sep 16 04:54:12.261595 kubelet[2757]: E0916 04:54:12.261577 2757 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-6rrzd: failed to sync secret cache: timed out waiting for the condition Sep 16 04:54:12.261811 kubelet[2757]: E0916 04:54:12.261772 2757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd1b4b60-c763-4a97-b587-14cd802104d8-hubble-tls podName:bd1b4b60-c763-4a97-b587-14cd802104d8 nodeName:}" failed. No retries permitted until 2025-09-16 04:54:12.761747371 +0000 UTC m=+7.030910317 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/bd1b4b60-c763-4a97-b587-14cd802104d8-hubble-tls") pod "cilium-6rrzd" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8") : failed to sync secret cache: timed out waiting for the condition Sep 16 04:54:12.295379 systemd[1]: Started cri-containerd-82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054.scope - libcontainer container 82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054. Sep 16 04:54:12.392954 containerd[1595]: time="2025-09-16T04:54:12.392837831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kfhvh,Uid:4f4369ee-803b-4f87-afa7-14257e03f19c,Namespace:kube-system,Attempt:0,} returns sandbox id \"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\"" Sep 16 04:54:12.397882 containerd[1595]: time="2025-09-16T04:54:12.397804871Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 16 04:54:12.798851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622696165.mount: Deactivated successfully. Sep 16 04:54:12.944903 containerd[1595]: time="2025-09-16T04:54:12.944823870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6rrzd,Uid:bd1b4b60-c763-4a97-b587-14cd802104d8,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:12.964274 containerd[1595]: time="2025-09-16T04:54:12.964227783Z" level=info msg="connecting to shim b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d" address="unix:///run/containerd/s/4cb2821fccc4e07e1634a85b239a79eebbfb5cc6e59d2a0146162a545f403c5e" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:13.003489 systemd[1]: Started cri-containerd-b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d.scope - libcontainer container b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d. Sep 16 04:54:13.042794 containerd[1595]: time="2025-09-16T04:54:13.042738230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6rrzd,Uid:bd1b4b60-c763-4a97-b587-14cd802104d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\"" Sep 16 04:54:15.236523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263357326.mount: Deactivated successfully. 
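
The MountVolume retry entries above record the next permitted attempt as a Go timestamp with nanosecond precision plus a monotonic offset (m=+…) and a durationBeforeRetry of 500ms. An illustrative Python sketch for working with one of those strings (the stamp value is copied from the kube-api-access-hr6sh entry; Python's datetime only keeps microseconds, so the nanoseconds are truncated):

from datetime import datetime, timedelta, timezone
import re

# "No retries permitted until ..." timestamp copied from the log entry above.
stamp = "2025-09-16 04:54:11.770130017 +0000 UTC"

# Keep only microsecond precision before handing the string to datetime.
m = re.match(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.(\d+) \+0000 UTC", stamp)
base, frac = m.group(1), m.group(2)[:6]
retry_at = datetime.strptime(f"{base}.{frac}", "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

# durationBeforeRetry is 500ms in these entries, so the operation that failed
# completed roughly half a second earlier; the result lines up with the
# 04:54:11.270149 stamp on the corresponding error entry above.
failed_at = retry_at - timedelta(milliseconds=500)
print(retry_at.isoformat(), failed_at.isoformat())
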
Sep 16 04:54:15.578474 kubelet[2757]: I0916 04:54:15.578355 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c46jl" podStartSLOduration=4.578311986 podStartE2EDuration="4.578311986s" podCreationTimestamp="2025-09-16 04:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:54:12.92751165 +0000 UTC m=+7.196674586" watchObservedRunningTime="2025-09-16 04:54:15.578311986 +0000 UTC m=+9.847474932" Sep 16 04:54:16.577122 containerd[1595]: time="2025-09-16T04:54:16.577073915Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:16.578249 containerd[1595]: time="2025-09-16T04:54:16.578094319Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 16 04:54:16.579166 containerd[1595]: time="2025-09-16T04:54:16.579137235Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:16.580309 containerd[1595]: time="2025-09-16T04:54:16.580275730Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.182203798s" Sep 16 04:54:16.580392 containerd[1595]: time="2025-09-16T04:54:16.580378582Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 16 04:54:16.582710 containerd[1595]: time="2025-09-16T04:54:16.582632807Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 16 04:54:16.585972 containerd[1595]: time="2025-09-16T04:54:16.585937293Z" level=info msg="CreateContainer within sandbox \"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 16 04:54:16.601249 containerd[1595]: time="2025-09-16T04:54:16.598558839Z" level=info msg="Container e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:16.604512 containerd[1595]: time="2025-09-16T04:54:16.604421574Z" level=info msg="CreateContainer within sandbox \"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\"" Sep 16 04:54:16.605296 containerd[1595]: time="2025-09-16T04:54:16.605200432Z" level=info msg="StartContainer for \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\"" Sep 16 04:54:16.608696 containerd[1595]: time="2025-09-16T04:54:16.608663254Z" level=info msg="connecting to shim e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb" 
address="unix:///run/containerd/s/4ee6a397b1f0b434bd342f930064c3f711f125bb11c048cfc3043a9d98e6877a" protocol=ttrpc version=3 Sep 16 04:54:16.633415 systemd[1]: Started cri-containerd-e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb.scope - libcontainer container e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb. Sep 16 04:54:16.654667 containerd[1595]: time="2025-09-16T04:54:16.654615355Z" level=info msg="StartContainer for \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\" returns successfully" Sep 16 04:54:19.538830 kubelet[2757]: I0916 04:54:19.538750 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-kfhvh" podStartSLOduration=4.352420993 podStartE2EDuration="8.538729265s" podCreationTimestamp="2025-09-16 04:54:11 +0000 UTC" firstStartedPulling="2025-09-16 04:54:12.396113258 +0000 UTC m=+6.665276204" lastFinishedPulling="2025-09-16 04:54:16.58242157 +0000 UTC m=+10.851584476" observedRunningTime="2025-09-16 04:54:16.930588872 +0000 UTC m=+11.199751778" watchObservedRunningTime="2025-09-16 04:54:19.538729265 +0000 UTC m=+13.807892212" Sep 16 04:54:22.297329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1753889468.mount: Deactivated successfully. Sep 16 04:54:24.647555 containerd[1595]: time="2025-09-16T04:54:24.647516390Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:24.648246 containerd[1595]: time="2025-09-16T04:54:24.648222479Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 16 04:54:24.649034 containerd[1595]: time="2025-09-16T04:54:24.649005695Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:24.649689 containerd[1595]: time="2025-09-16T04:54:24.649667264Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.067010513s" Sep 16 04:54:24.649731 containerd[1595]: time="2025-09-16T04:54:24.649688974Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 16 04:54:24.651664 containerd[1595]: time="2025-09-16T04:54:24.651642633Z" level=info msg="CreateContainer within sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 04:54:24.664156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540587483.mount: Deactivated successfully. 
Sep 16 04:54:24.665274 containerd[1595]: time="2025-09-16T04:54:24.665258787Z" level=info msg="Container 9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:24.667381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2410297121.mount: Deactivated successfully. Sep 16 04:54:24.677655 containerd[1595]: time="2025-09-16T04:54:24.677628381Z" level=info msg="CreateContainer within sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\"" Sep 16 04:54:24.678041 containerd[1595]: time="2025-09-16T04:54:24.678024322Z" level=info msg="StartContainer for \"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\"" Sep 16 04:54:24.678502 containerd[1595]: time="2025-09-16T04:54:24.678443203Z" level=info msg="connecting to shim 9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3" address="unix:///run/containerd/s/4cb2821fccc4e07e1634a85b239a79eebbfb5cc6e59d2a0146162a545f403c5e" protocol=ttrpc version=3 Sep 16 04:54:24.693280 systemd[1]: Started cri-containerd-9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3.scope - libcontainer container 9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3. Sep 16 04:54:24.712830 containerd[1595]: time="2025-09-16T04:54:24.712807668Z" level=info msg="StartContainer for \"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\" returns successfully" Sep 16 04:54:24.720612 systemd[1]: cri-containerd-9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3.scope: Deactivated successfully. Sep 16 04:54:24.734404 containerd[1595]: time="2025-09-16T04:54:24.733748373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\" id:\"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\" pid:3230 exited_at:{seconds:1757998464 nanos:721944390}" Sep 16 04:54:24.739189 containerd[1595]: time="2025-09-16T04:54:24.739151769Z" level=info msg="received exit event container_id:\"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\" id:\"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\" pid:3230 exited_at:{seconds:1757998464 nanos:721944390}" Sep 16 04:54:24.933944 containerd[1595]: time="2025-09-16T04:54:24.933821387Z" level=info msg="CreateContainer within sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 04:54:24.947335 containerd[1595]: time="2025-09-16T04:54:24.947260916Z" level=info msg="Container b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:24.954107 containerd[1595]: time="2025-09-16T04:54:24.954064045Z" level=info msg="CreateContainer within sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\"" Sep 16 04:54:24.955803 containerd[1595]: time="2025-09-16T04:54:24.954778449Z" level=info msg="StartContainer for \"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\"" Sep 16 04:54:24.957419 containerd[1595]: time="2025-09-16T04:54:24.956112940Z" level=info msg="connecting to shim 
b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9" address="unix:///run/containerd/s/4cb2821fccc4e07e1634a85b239a79eebbfb5cc6e59d2a0146162a545f403c5e" protocol=ttrpc version=3 Sep 16 04:54:24.996805 systemd[1]: Started cri-containerd-b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9.scope - libcontainer container b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9. Sep 16 04:54:25.058561 containerd[1595]: time="2025-09-16T04:54:25.058516908Z" level=info msg="StartContainer for \"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\" returns successfully" Sep 16 04:54:25.079737 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:54:25.080043 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:54:25.080563 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:54:25.082874 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:54:25.086578 systemd[1]: cri-containerd-b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9.scope: Deactivated successfully. Sep 16 04:54:25.089115 containerd[1595]: time="2025-09-16T04:54:25.088381377Z" level=info msg="received exit event container_id:\"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\" id:\"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\" pid:3275 exited_at:{seconds:1757998465 nanos:86960431}" Sep 16 04:54:25.089115 containerd[1595]: time="2025-09-16T04:54:25.088684786Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\" id:\"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\" pid:3275 exited_at:{seconds:1757998465 nanos:86960431}" Sep 16 04:54:25.112957 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:54:25.663080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3-rootfs.mount: Deactivated successfully. Sep 16 04:54:25.939372 containerd[1595]: time="2025-09-16T04:54:25.938703465Z" level=info msg="CreateContainer within sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 04:54:25.959563 containerd[1595]: time="2025-09-16T04:54:25.959445663Z" level=info msg="Container ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:25.964883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1768474575.mount: Deactivated successfully. 
Sep 16 04:54:25.972096 containerd[1595]: time="2025-09-16T04:54:25.972069446Z" level=info msg="CreateContainer within sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\"" Sep 16 04:54:25.974073 containerd[1595]: time="2025-09-16T04:54:25.974029868Z" level=info msg="StartContainer for \"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\"" Sep 16 04:54:25.975551 containerd[1595]: time="2025-09-16T04:54:25.975529961Z" level=info msg="connecting to shim ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177" address="unix:///run/containerd/s/4cb2821fccc4e07e1634a85b239a79eebbfb5cc6e59d2a0146162a545f403c5e" protocol=ttrpc version=3 Sep 16 04:54:25.993323 systemd[1]: Started cri-containerd-ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177.scope - libcontainer container ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177. Sep 16 04:54:26.032109 containerd[1595]: time="2025-09-16T04:54:26.032047847Z" level=info msg="StartContainer for \"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\" returns successfully" Sep 16 04:54:26.036847 systemd[1]: cri-containerd-ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177.scope: Deactivated successfully. Sep 16 04:54:26.037236 systemd[1]: cri-containerd-ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177.scope: Consumed 18ms CPU time, 5.8M memory peak, 1.2M read from disk. Sep 16 04:54:26.040741 containerd[1595]: time="2025-09-16T04:54:26.040695955Z" level=info msg="received exit event container_id:\"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\" id:\"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\" pid:3321 exited_at:{seconds:1757998466 nanos:39989600}" Sep 16 04:54:26.041011 containerd[1595]: time="2025-09-16T04:54:26.040850591Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\" id:\"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\" pid:3321 exited_at:{seconds:1757998466 nanos:39989600}" Sep 16 04:54:26.662973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177-rootfs.mount: Deactivated successfully. Sep 16 04:54:26.946408 containerd[1595]: time="2025-09-16T04:54:26.946232147Z" level=info msg="CreateContainer within sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 04:54:26.966270 containerd[1595]: time="2025-09-16T04:54:26.965357401Z" level=info msg="Container b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:26.971221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359479156.mount: Deactivated successfully. 
Sep 16 04:54:26.993442 containerd[1595]: time="2025-09-16T04:54:26.993394395Z" level=info msg="CreateContainer within sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\"" Sep 16 04:54:26.996920 containerd[1595]: time="2025-09-16T04:54:26.995592893Z" level=info msg="StartContainer for \"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\"" Sep 16 04:54:27.008636 containerd[1595]: time="2025-09-16T04:54:27.008587325Z" level=info msg="connecting to shim b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a" address="unix:///run/containerd/s/4cb2821fccc4e07e1634a85b239a79eebbfb5cc6e59d2a0146162a545f403c5e" protocol=ttrpc version=3 Sep 16 04:54:27.039439 systemd[1]: Started cri-containerd-b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a.scope - libcontainer container b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a. Sep 16 04:54:27.077264 systemd[1]: cri-containerd-b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a.scope: Deactivated successfully. Sep 16 04:54:27.081160 containerd[1595]: time="2025-09-16T04:54:27.081118527Z" level=info msg="received exit event container_id:\"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\" id:\"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\" pid:3360 exited_at:{seconds:1757998467 nanos:77477379}" Sep 16 04:54:27.081918 containerd[1595]: time="2025-09-16T04:54:27.081871285Z" level=info msg="StartContainer for \"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\" returns successfully" Sep 16 04:54:27.096445 containerd[1595]: time="2025-09-16T04:54:27.096408013Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\" id:\"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\" pid:3360 exited_at:{seconds:1757998467 nanos:77477379}" Sep 16 04:54:27.108591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a-rootfs.mount: Deactivated successfully. Sep 16 04:54:27.950546 containerd[1595]: time="2025-09-16T04:54:27.950463352Z" level=info msg="CreateContainer within sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 04:54:27.975242 containerd[1595]: time="2025-09-16T04:54:27.972287908Z" level=info msg="Container 2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:27.977496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423334370.mount: Deactivated successfully. 
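
The TaskExit events for the cilium init containers above carry exited_at as Unix seconds plus nanoseconds. A short illustrative Python sketch converting the values seen here back to UTC; they line up with the surrounding journal timestamps (04:54:24 through 04:54:27):

from datetime import datetime, timezone

# (container name, exited_at seconds) copied from the TaskExit events above.
exits = [
    ("mount-cgroup", 1757998464),
    ("apply-sysctl-overwrites", 1757998465),
    ("mount-bpf-fs", 1757998466),
    ("clean-cilium-state", 1757998467),
]

for name, secs in exits:
    when = datetime.fromtimestamp(secs, tz=timezone.utc)
    print(f"{name}: {when:%Y-%m-%d %H:%M:%S} UTC")
# mount-cgroup: 2025-09-16 04:54:24 UTC, and so on roughly one second apart.
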
Sep 16 04:54:27.992395 containerd[1595]: time="2025-09-16T04:54:27.992329716Z" level=info msg="CreateContainer within sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\"" Sep 16 04:54:27.993207 containerd[1595]: time="2025-09-16T04:54:27.993136036Z" level=info msg="StartContainer for \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\"" Sep 16 04:54:27.995017 containerd[1595]: time="2025-09-16T04:54:27.994975566Z" level=info msg="connecting to shim 2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf" address="unix:///run/containerd/s/4cb2821fccc4e07e1634a85b239a79eebbfb5cc6e59d2a0146162a545f403c5e" protocol=ttrpc version=3 Sep 16 04:54:28.031413 systemd[1]: Started cri-containerd-2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf.scope - libcontainer container 2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf. Sep 16 04:54:28.082738 containerd[1595]: time="2025-09-16T04:54:28.082631751Z" level=info msg="StartContainer for \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\" returns successfully" Sep 16 04:54:28.184967 containerd[1595]: time="2025-09-16T04:54:28.184922705Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\" id:\"00e4c41236bf5d17bbfaca8a2bd9d0c565932d3c79b328fdf0d49d4232acd0ac\" pid:3432 exited_at:{seconds:1757998468 nanos:184683072}" Sep 16 04:54:28.256578 kubelet[2757]: I0916 04:54:28.255927 2757 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 16 04:54:28.319502 systemd[1]: Created slice kubepods-burstable-podf0090d18_ac49_4d70_b26e_466b7f0cdb61.slice - libcontainer container kubepods-burstable-podf0090d18_ac49_4d70_b26e_466b7f0cdb61.slice. Sep 16 04:54:28.331827 systemd[1]: Created slice kubepods-burstable-pod86337766_5d24_4a95_ad1a_b91cec1a8902.slice - libcontainer container kubepods-burstable-pod86337766_5d24_4a95_ad1a_b91cec1a8902.slice. 
Sep 16 04:54:28.488683 kubelet[2757]: I0916 04:54:28.488615 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86337766-5d24-4a95-ad1a-b91cec1a8902-config-volume\") pod \"coredns-668d6bf9bc-56rgm\" (UID: \"86337766-5d24-4a95-ad1a-b91cec1a8902\") " pod="kube-system/coredns-668d6bf9bc-56rgm" Sep 16 04:54:28.488683 kubelet[2757]: I0916 04:54:28.488679 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgmw4\" (UniqueName: \"kubernetes.io/projected/f0090d18-ac49-4d70-b26e-466b7f0cdb61-kube-api-access-xgmw4\") pod \"coredns-668d6bf9bc-k7xk6\" (UID: \"f0090d18-ac49-4d70-b26e-466b7f0cdb61\") " pod="kube-system/coredns-668d6bf9bc-k7xk6" Sep 16 04:54:28.488683 kubelet[2757]: I0916 04:54:28.488694 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j556g\" (UniqueName: \"kubernetes.io/projected/86337766-5d24-4a95-ad1a-b91cec1a8902-kube-api-access-j556g\") pod \"coredns-668d6bf9bc-56rgm\" (UID: \"86337766-5d24-4a95-ad1a-b91cec1a8902\") " pod="kube-system/coredns-668d6bf9bc-56rgm" Sep 16 04:54:28.488843 kubelet[2757]: I0916 04:54:28.488706 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0090d18-ac49-4d70-b26e-466b7f0cdb61-config-volume\") pod \"coredns-668d6bf9bc-k7xk6\" (UID: \"f0090d18-ac49-4d70-b26e-466b7f0cdb61\") " pod="kube-system/coredns-668d6bf9bc-k7xk6" Sep 16 04:54:28.628260 containerd[1595]: time="2025-09-16T04:54:28.628218414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k7xk6,Uid:f0090d18-ac49-4d70-b26e-466b7f0cdb61,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:28.637091 containerd[1595]: time="2025-09-16T04:54:28.637047286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-56rgm,Uid:86337766-5d24-4a95-ad1a-b91cec1a8902,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:28.975371 kubelet[2757]: I0916 04:54:28.974935 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6rrzd" podStartSLOduration=6.369474055 podStartE2EDuration="17.974918117s" podCreationTimestamp="2025-09-16 04:54:11 +0000 UTC" firstStartedPulling="2025-09-16 04:54:13.044826862 +0000 UTC m=+7.313989808" lastFinishedPulling="2025-09-16 04:54:24.650270964 +0000 UTC m=+18.919433870" observedRunningTime="2025-09-16 04:54:28.972493777 +0000 UTC m=+23.241656693" watchObservedRunningTime="2025-09-16 04:54:28.974918117 +0000 UTC m=+23.244081033" Sep 16 04:54:30.170427 systemd-networkd[1464]: cilium_host: Link UP Sep 16 04:54:30.171016 systemd-networkd[1464]: cilium_net: Link UP Sep 16 04:54:30.171254 systemd-networkd[1464]: cilium_net: Gained carrier Sep 16 04:54:30.171439 systemd-networkd[1464]: cilium_host: Gained carrier Sep 16 04:54:30.299443 systemd-networkd[1464]: cilium_host: Gained IPv6LL Sep 16 04:54:30.313340 systemd-networkd[1464]: cilium_vxlan: Link UP Sep 16 04:54:30.313354 systemd-networkd[1464]: cilium_vxlan: Gained carrier Sep 16 04:54:30.650224 kernel: NET: Registered PF_ALG protocol family Sep 16 04:54:30.803692 systemd-networkd[1464]: cilium_net: Gained IPv6LL Sep 16 04:54:31.360123 systemd-networkd[1464]: lxc_health: Link UP Sep 16 04:54:31.364500 systemd-networkd[1464]: lxc_health: Gained carrier Sep 16 04:54:31.685395 kernel: eth0: renamed from tmp3bc23 Sep 16 
04:54:31.683351 systemd-networkd[1464]: lxc3acb46e5bf05: Link UP Sep 16 04:54:31.690684 systemd-networkd[1464]: lxc3acb46e5bf05: Gained carrier Sep 16 04:54:31.705220 kernel: eth0: renamed from tmpdd9ce Sep 16 04:54:31.703147 systemd-networkd[1464]: lxc954931d7f3c6: Link UP Sep 16 04:54:31.714575 systemd-networkd[1464]: lxc954931d7f3c6: Gained carrier Sep 16 04:54:32.147514 systemd-networkd[1464]: cilium_vxlan: Gained IPv6LL Sep 16 04:54:32.467437 systemd-networkd[1464]: lxc_health: Gained IPv6LL Sep 16 04:54:32.787339 systemd-networkd[1464]: lxc3acb46e5bf05: Gained IPv6LL Sep 16 04:54:33.171378 systemd-networkd[1464]: lxc954931d7f3c6: Gained IPv6LL Sep 16 04:54:34.245572 containerd[1595]: time="2025-09-16T04:54:34.245532018Z" level=info msg="connecting to shim 3bc236d764a8be3124bd578fddac5c32e37f15b1c62348cbf736d693bbc43918" address="unix:///run/containerd/s/a72626a5cf262d63145de8fe1cb6567bec444f947460fb7600e4ffb89459a6c4" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:34.269370 systemd[1]: Started cri-containerd-3bc236d764a8be3124bd578fddac5c32e37f15b1c62348cbf736d693bbc43918.scope - libcontainer container 3bc236d764a8be3124bd578fddac5c32e37f15b1c62348cbf736d693bbc43918. Sep 16 04:54:34.319865 containerd[1595]: time="2025-09-16T04:54:34.319834132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-56rgm,Uid:86337766-5d24-4a95-ad1a-b91cec1a8902,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bc236d764a8be3124bd578fddac5c32e37f15b1c62348cbf736d693bbc43918\"" Sep 16 04:54:34.320839 containerd[1595]: time="2025-09-16T04:54:34.320790985Z" level=info msg="connecting to shim dd9cea7fa512b26cf33f24386539ba2c8f8aae2e48b55905f261ae99f9d844e1" address="unix:///run/containerd/s/ff0e792c28da561b61bdb6f72b128453fdecd10c767eb0b18c6a1491411d599f" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:34.321597 containerd[1595]: time="2025-09-16T04:54:34.321577390Z" level=info msg="CreateContainer within sandbox \"3bc236d764a8be3124bd578fddac5c32e37f15b1c62348cbf736d693bbc43918\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:54:34.342133 containerd[1595]: time="2025-09-16T04:54:34.342019705Z" level=info msg="Container b333805eacb32822f1ed263b8b8f90331689943b3451bab66b5145d2f0bbecdf: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:34.343984 systemd[1]: Started cri-containerd-dd9cea7fa512b26cf33f24386539ba2c8f8aae2e48b55905f261ae99f9d844e1.scope - libcontainer container dd9cea7fa512b26cf33f24386539ba2c8f8aae2e48b55905f261ae99f9d844e1. 
Sep 16 04:54:34.347427 containerd[1595]: time="2025-09-16T04:54:34.347405495Z" level=info msg="CreateContainer within sandbox \"3bc236d764a8be3124bd578fddac5c32e37f15b1c62348cbf736d693bbc43918\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b333805eacb32822f1ed263b8b8f90331689943b3451bab66b5145d2f0bbecdf\"" Sep 16 04:54:34.348162 containerd[1595]: time="2025-09-16T04:54:34.348142316Z" level=info msg="StartContainer for \"b333805eacb32822f1ed263b8b8f90331689943b3451bab66b5145d2f0bbecdf\"" Sep 16 04:54:34.349276 containerd[1595]: time="2025-09-16T04:54:34.349252783Z" level=info msg="connecting to shim b333805eacb32822f1ed263b8b8f90331689943b3451bab66b5145d2f0bbecdf" address="unix:///run/containerd/s/a72626a5cf262d63145de8fe1cb6567bec444f947460fb7600e4ffb89459a6c4" protocol=ttrpc version=3 Sep 16 04:54:34.364345 systemd[1]: Started cri-containerd-b333805eacb32822f1ed263b8b8f90331689943b3451bab66b5145d2f0bbecdf.scope - libcontainer container b333805eacb32822f1ed263b8b8f90331689943b3451bab66b5145d2f0bbecdf. Sep 16 04:54:34.389371 containerd[1595]: time="2025-09-16T04:54:34.389169496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k7xk6,Uid:f0090d18-ac49-4d70-b26e-466b7f0cdb61,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd9cea7fa512b26cf33f24386539ba2c8f8aae2e48b55905f261ae99f9d844e1\"" Sep 16 04:54:34.390731 containerd[1595]: time="2025-09-16T04:54:34.390700403Z" level=info msg="StartContainer for \"b333805eacb32822f1ed263b8b8f90331689943b3451bab66b5145d2f0bbecdf\" returns successfully" Sep 16 04:54:34.392593 containerd[1595]: time="2025-09-16T04:54:34.392571909Z" level=info msg="CreateContainer within sandbox \"dd9cea7fa512b26cf33f24386539ba2c8f8aae2e48b55905f261ae99f9d844e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:54:34.400260 containerd[1595]: time="2025-09-16T04:54:34.400230708Z" level=info msg="Container b4722e4b079d3c828db103785f02be20626c7b6f8e71981899af3b09c14b2351: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:34.404758 containerd[1595]: time="2025-09-16T04:54:34.404726244Z" level=info msg="CreateContainer within sandbox \"dd9cea7fa512b26cf33f24386539ba2c8f8aae2e48b55905f261ae99f9d844e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4722e4b079d3c828db103785f02be20626c7b6f8e71981899af3b09c14b2351\"" Sep 16 04:54:34.405510 containerd[1595]: time="2025-09-16T04:54:34.405408519Z" level=info msg="StartContainer for \"b4722e4b079d3c828db103785f02be20626c7b6f8e71981899af3b09c14b2351\"" Sep 16 04:54:34.405998 containerd[1595]: time="2025-09-16T04:54:34.405980453Z" level=info msg="connecting to shim b4722e4b079d3c828db103785f02be20626c7b6f8e71981899af3b09c14b2351" address="unix:///run/containerd/s/ff0e792c28da561b61bdb6f72b128453fdecd10c767eb0b18c6a1491411d599f" protocol=ttrpc version=3 Sep 16 04:54:34.423294 systemd[1]: Started cri-containerd-b4722e4b079d3c828db103785f02be20626c7b6f8e71981899af3b09c14b2351.scope - libcontainer container b4722e4b079d3c828db103785f02be20626c7b6f8e71981899af3b09c14b2351. 
Sep 16 04:54:34.443003 containerd[1595]: time="2025-09-16T04:54:34.442938410Z" level=info msg="StartContainer for \"b4722e4b079d3c828db103785f02be20626c7b6f8e71981899af3b09c14b2351\" returns successfully" Sep 16 04:54:35.029272 kubelet[2757]: I0916 04:54:35.028782 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k7xk6" podStartSLOduration=24.028758786 podStartE2EDuration="24.028758786s" podCreationTimestamp="2025-09-16 04:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:54:35.011996698 +0000 UTC m=+29.281159644" watchObservedRunningTime="2025-09-16 04:54:35.028758786 +0000 UTC m=+29.297921732" Sep 16 04:55:30.085073 systemd[1]: Started sshd@7-37.27.203.193:22-139.178.89.65:59408.service - OpenSSH per-connection server daemon (139.178.89.65:59408). Sep 16 04:55:31.117904 sshd[4080]: Accepted publickey for core from 139.178.89.65 port 59408 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:55:31.120487 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:31.127041 systemd-logind[1567]: New session 8 of user core. Sep 16 04:55:31.131321 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 04:55:32.558867 sshd[4084]: Connection closed by 139.178.89.65 port 59408 Sep 16 04:55:32.559687 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:32.565932 systemd[1]: sshd@7-37.27.203.193:22-139.178.89.65:59408.service: Deactivated successfully. Sep 16 04:55:32.569505 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 04:55:32.571636 systemd-logind[1567]: Session 8 logged out. Waiting for processes to exit. Sep 16 04:55:32.573577 systemd-logind[1567]: Removed session 8. Sep 16 04:55:37.727402 systemd[1]: Started sshd@8-37.27.203.193:22-139.178.89.65:60020.service - OpenSSH per-connection server daemon (139.178.89.65:60020). Sep 16 04:55:38.728427 sshd[4098]: Accepted publickey for core from 139.178.89.65 port 60020 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:55:38.730060 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:38.734418 systemd-logind[1567]: New session 9 of user core. Sep 16 04:55:38.742464 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 04:55:39.521550 sshd[4101]: Connection closed by 139.178.89.65 port 60020 Sep 16 04:55:39.523043 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:39.527283 systemd-logind[1567]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:55:39.527550 systemd[1]: sshd@8-37.27.203.193:22-139.178.89.65:60020.service: Deactivated successfully. Sep 16 04:55:39.529880 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:55:39.531678 systemd-logind[1567]: Removed session 9. Sep 16 04:55:44.694111 systemd[1]: Started sshd@9-37.27.203.193:22-139.178.89.65:58078.service - OpenSSH per-connection server daemon (139.178.89.65:58078). Sep 16 04:55:45.690376 sshd[4117]: Accepted publickey for core from 139.178.89.65 port 58078 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:55:45.692769 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:45.699786 systemd-logind[1567]: New session 10 of user core. 
Sep 16 04:55:45.709413 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 16 04:55:46.514933 sshd[4120]: Connection closed by 139.178.89.65 port 58078 Sep 16 04:55:46.516468 sshd-session[4117]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:46.525014 systemd[1]: sshd@9-37.27.203.193:22-139.178.89.65:58078.service: Deactivated successfully. Sep 16 04:55:46.527510 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:55:46.529639 systemd-logind[1567]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:55:46.531869 systemd-logind[1567]: Removed session 10. Sep 16 04:55:46.721138 systemd[1]: Started sshd@10-37.27.203.193:22-139.178.89.65:58094.service - OpenSSH per-connection server daemon (139.178.89.65:58094). Sep 16 04:55:47.840724 sshd[4133]: Accepted publickey for core from 139.178.89.65 port 58094 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:55:47.843301 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:47.851402 systemd-logind[1567]: New session 11 of user core. Sep 16 04:55:47.859415 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 04:55:48.773984 sshd[4136]: Connection closed by 139.178.89.65 port 58094 Sep 16 04:55:48.775351 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:48.785208 systemd[1]: sshd@10-37.27.203.193:22-139.178.89.65:58094.service: Deactivated successfully. Sep 16 04:55:48.787898 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 04:55:48.789213 systemd-logind[1567]: Session 11 logged out. Waiting for processes to exit. Sep 16 04:55:48.791507 systemd-logind[1567]: Removed session 11. Sep 16 04:55:48.962379 systemd[1]: Started sshd@11-37.27.203.193:22-139.178.89.65:58096.service - OpenSSH per-connection server daemon (139.178.89.65:58096). Sep 16 04:55:50.078780 sshd[4146]: Accepted publickey for core from 139.178.89.65 port 58096 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:55:50.081066 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:50.090084 systemd-logind[1567]: New session 12 of user core. Sep 16 04:55:50.095438 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:55:50.926309 sshd[4149]: Connection closed by 139.178.89.65 port 58096 Sep 16 04:55:50.927525 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:50.932445 systemd-logind[1567]: Session 12 logged out. Waiting for processes to exit. Sep 16 04:55:50.932859 systemd[1]: sshd@11-37.27.203.193:22-139.178.89.65:58096.service: Deactivated successfully. Sep 16 04:55:50.936053 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:55:50.938264 systemd-logind[1567]: Removed session 12. Sep 16 04:55:56.116165 systemd[1]: Started sshd@12-37.27.203.193:22-139.178.89.65:40808.service - OpenSSH per-connection server daemon (139.178.89.65:40808). Sep 16 04:55:57.233348 sshd[4161]: Accepted publickey for core from 139.178.89.65 port 40808 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:55:57.235142 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:57.243944 systemd-logind[1567]: New session 13 of user core. Sep 16 04:55:57.251502 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 16 04:55:58.092129 sshd[4164]: Connection closed by 139.178.89.65 port 40808 Sep 16 04:55:58.093031 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:58.099076 systemd[1]: sshd@12-37.27.203.193:22-139.178.89.65:40808.service: Deactivated successfully. Sep 16 04:55:58.102557 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 04:55:58.103961 systemd-logind[1567]: Session 13 logged out. Waiting for processes to exit. Sep 16 04:55:58.106125 systemd-logind[1567]: Removed session 13. Sep 16 04:55:58.290850 systemd[1]: Started sshd@13-37.27.203.193:22-139.178.89.65:40824.service - OpenSSH per-connection server daemon (139.178.89.65:40824). Sep 16 04:55:59.381981 sshd[4176]: Accepted publickey for core from 139.178.89.65 port 40824 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:55:59.383773 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:59.392320 systemd-logind[1567]: New session 14 of user core. Sep 16 04:55:59.396430 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 04:56:00.430628 sshd[4179]: Connection closed by 139.178.89.65 port 40824 Sep 16 04:56:00.431934 sshd-session[4176]: pam_unix(sshd:session): session closed for user core Sep 16 04:56:00.440961 systemd-logind[1567]: Session 14 logged out. Waiting for processes to exit. Sep 16 04:56:00.441458 systemd[1]: sshd@13-37.27.203.193:22-139.178.89.65:40824.service: Deactivated successfully. Sep 16 04:56:00.444738 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 04:56:00.448353 systemd-logind[1567]: Removed session 14. Sep 16 04:56:00.623446 systemd[1]: Started sshd@14-37.27.203.193:22-139.178.89.65:38544.service - OpenSSH per-connection server daemon (139.178.89.65:38544). Sep 16 04:56:01.753529 sshd[4189]: Accepted publickey for core from 139.178.89.65 port 38544 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:56:01.755576 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:56:01.763342 systemd-logind[1567]: New session 15 of user core. Sep 16 04:56:01.769405 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 04:56:03.335630 sshd[4193]: Connection closed by 139.178.89.65 port 38544 Sep 16 04:56:03.336464 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Sep 16 04:56:03.344123 systemd[1]: sshd@14-37.27.203.193:22-139.178.89.65:38544.service: Deactivated successfully. Sep 16 04:56:03.347081 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 04:56:03.350429 systemd-logind[1567]: Session 15 logged out. Waiting for processes to exit. Sep 16 04:56:03.352113 systemd-logind[1567]: Removed session 15. Sep 16 04:56:03.492920 systemd[1]: Started sshd@15-37.27.203.193:22-139.178.89.65:38556.service - OpenSSH per-connection server daemon (139.178.89.65:38556). Sep 16 04:56:04.499811 sshd[4210]: Accepted publickey for core from 139.178.89.65 port 38556 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:56:04.502024 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:56:04.510607 systemd-logind[1567]: New session 16 of user core. Sep 16 04:56:04.515495 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 16 04:56:05.471061 sshd[4213]: Connection closed by 139.178.89.65 port 38556 Sep 16 04:56:05.472006 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Sep 16 04:56:05.477275 systemd[1]: sshd@15-37.27.203.193:22-139.178.89.65:38556.service: Deactivated successfully. Sep 16 04:56:05.480737 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 04:56:05.483578 systemd-logind[1567]: Session 16 logged out. Waiting for processes to exit. Sep 16 04:56:05.485671 systemd-logind[1567]: Removed session 16. Sep 16 04:56:05.643490 systemd[1]: Started sshd@16-37.27.203.193:22-139.178.89.65:38558.service - OpenSSH per-connection server daemon (139.178.89.65:38558). Sep 16 04:56:06.631152 sshd[4223]: Accepted publickey for core from 139.178.89.65 port 38558 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:56:06.633043 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:56:06.641679 systemd-logind[1567]: New session 17 of user core. Sep 16 04:56:06.652425 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 04:56:07.419028 sshd[4228]: Connection closed by 139.178.89.65 port 38558 Sep 16 04:56:07.419867 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Sep 16 04:56:07.425263 systemd[1]: sshd@16-37.27.203.193:22-139.178.89.65:38558.service: Deactivated successfully. Sep 16 04:56:07.427706 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 04:56:07.429346 systemd-logind[1567]: Session 17 logged out. Waiting for processes to exit. Sep 16 04:56:07.431719 systemd-logind[1567]: Removed session 17. Sep 16 04:56:12.590564 systemd[1]: Started sshd@17-37.27.203.193:22-139.178.89.65:50148.service - OpenSSH per-connection server daemon (139.178.89.65:50148). Sep 16 04:56:13.586026 sshd[4243]: Accepted publickey for core from 139.178.89.65 port 50148 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:56:13.587733 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:56:13.594168 systemd-logind[1567]: New session 18 of user core. Sep 16 04:56:13.598437 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 16 04:56:14.326168 sshd[4248]: Connection closed by 139.178.89.65 port 50148 Sep 16 04:56:14.326902 sshd-session[4243]: pam_unix(sshd:session): session closed for user core Sep 16 04:56:14.334644 systemd-logind[1567]: Session 18 logged out. Waiting for processes to exit. Sep 16 04:56:14.335063 systemd[1]: sshd@17-37.27.203.193:22-139.178.89.65:50148.service: Deactivated successfully. Sep 16 04:56:14.338038 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 04:56:14.340753 systemd-logind[1567]: Removed session 18. Sep 16 04:56:19.492396 systemd[1]: Started sshd@18-37.27.203.193:22-139.178.89.65:50162.service - OpenSSH per-connection server daemon (139.178.89.65:50162). Sep 16 04:56:20.471427 sshd[4260]: Accepted publickey for core from 139.178.89.65 port 50162 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:56:20.473239 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:56:20.480269 systemd-logind[1567]: New session 19 of user core. Sep 16 04:56:20.489360 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 16 04:56:21.226251 sshd[4263]: Connection closed by 139.178.89.65 port 50162 Sep 16 04:56:21.227124 sshd-session[4260]: pam_unix(sshd:session): session closed for user core Sep 16 04:56:21.235378 systemd-logind[1567]: Session 19 logged out. Waiting for processes to exit. Sep 16 04:56:21.236331 systemd[1]: sshd@18-37.27.203.193:22-139.178.89.65:50162.service: Deactivated successfully. Sep 16 04:56:21.238421 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 04:56:21.240751 systemd-logind[1567]: Removed session 19. Sep 16 04:56:21.430697 systemd[1]: Started sshd@19-37.27.203.193:22-139.178.89.65:39164.service - OpenSSH per-connection server daemon (139.178.89.65:39164). Sep 16 04:56:22.538267 sshd[4275]: Accepted publickey for core from 139.178.89.65 port 39164 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:56:22.539818 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:56:22.544537 systemd-logind[1567]: New session 20 of user core. Sep 16 04:56:22.548305 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 16 04:56:24.413582 kubelet[2757]: I0916 04:56:24.412931 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-56rgm" podStartSLOduration=133.412885058 podStartE2EDuration="2m13.412885058s" podCreationTimestamp="2025-09-16 04:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:54:35.053479422 +0000 UTC m=+29.322642378" watchObservedRunningTime="2025-09-16 04:56:24.412885058 +0000 UTC m=+138.682048004" Sep 16 04:56:24.442100 containerd[1595]: time="2025-09-16T04:56:24.441919945Z" level=info msg="StopContainer for \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\" with timeout 30 (s)" Sep 16 04:56:24.443984 containerd[1595]: time="2025-09-16T04:56:24.443593045Z" level=info msg="Stop container \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\" with signal terminated" Sep 16 04:56:24.469862 systemd[1]: cri-containerd-e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb.scope: Deactivated successfully. Sep 16 04:56:24.470716 systemd[1]: cri-containerd-e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb.scope: Consumed 411ms CPU time, 38.6M memory peak, 13M read from disk, 4K written to disk. 
Sep 16 04:56:24.473797 containerd[1595]: time="2025-09-16T04:56:24.473757152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\" id:\"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\" pid:3168 exited_at:{seconds:1757998584 nanos:473113079}" Sep 16 04:56:24.473891 containerd[1595]: time="2025-09-16T04:56:24.473799303Z" level=info msg="received exit event container_id:\"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\" id:\"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\" pid:3168 exited_at:{seconds:1757998584 nanos:473113079}" Sep 16 04:56:24.509040 containerd[1595]: time="2025-09-16T04:56:24.508996552Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:56:24.515097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb-rootfs.mount: Deactivated successfully. Sep 16 04:56:24.519606 containerd[1595]: time="2025-09-16T04:56:24.519564457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\" id:\"de34d8ed0ea80e25086ce35889321ad08ce43bfb08af06c99e179ab15436c77e\" pid:4304 exited_at:{seconds:1757998584 nanos:518660100}" Sep 16 04:56:24.522847 containerd[1595]: time="2025-09-16T04:56:24.522577553Z" level=info msg="StopContainer for \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\" with timeout 2 (s)" Sep 16 04:56:24.523318 containerd[1595]: time="2025-09-16T04:56:24.523157463Z" level=info msg="Stop container \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\" with signal terminated" Sep 16 04:56:24.538631 systemd-networkd[1464]: lxc_health: Link DOWN Sep 16 04:56:24.538638 systemd-networkd[1464]: lxc_health: Lost carrier Sep 16 04:56:24.549585 containerd[1595]: time="2025-09-16T04:56:24.549348846Z" level=info msg="StopContainer for \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\" returns successfully" Sep 16 04:56:24.550392 containerd[1595]: time="2025-09-16T04:56:24.550354516Z" level=info msg="StopPodSandbox for \"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\"" Sep 16 04:56:24.550530 containerd[1595]: time="2025-09-16T04:56:24.550471668Z" level=info msg="Container to stop \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:56:24.568884 systemd[1]: cri-containerd-2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf.scope: Deactivated successfully. Sep 16 04:56:24.569603 systemd[1]: cri-containerd-2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf.scope: Consumed 5.164s CPU time, 190.2M memory peak, 70.6M read from disk, 13.3M written to disk. 
Sep 16 04:56:24.577653 containerd[1595]: time="2025-09-16T04:56:24.577538737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\" id:\"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\" pid:3402 exited_at:{seconds:1757998584 nanos:574908439}" Sep 16 04:56:24.578069 containerd[1595]: time="2025-09-16T04:56:24.578002216Z" level=info msg="received exit event container_id:\"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\" id:\"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\" pid:3402 exited_at:{seconds:1757998584 nanos:574908439}" Sep 16 04:56:24.582206 systemd[1]: cri-containerd-82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054.scope: Deactivated successfully. Sep 16 04:56:24.588772 containerd[1595]: time="2025-09-16T04:56:24.588570491Z" level=info msg="TaskExit event in podsandbox handler container_id:\"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\" id:\"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\" pid:2955 exit_status:137 exited_at:{seconds:1757998584 nanos:587350598}" Sep 16 04:56:24.618942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf-rootfs.mount: Deactivated successfully. Sep 16 04:56:24.630409 containerd[1595]: time="2025-09-16T04:56:24.630325571Z" level=info msg="StopContainer for \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\" returns successfully" Sep 16 04:56:24.632607 containerd[1595]: time="2025-09-16T04:56:24.631450462Z" level=info msg="StopPodSandbox for \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\"" Sep 16 04:56:24.633000 containerd[1595]: time="2025-09-16T04:56:24.632730636Z" level=info msg="Container to stop \"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:56:24.633000 containerd[1595]: time="2025-09-16T04:56:24.632747336Z" level=info msg="Container to stop \"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:56:24.633000 containerd[1595]: time="2025-09-16T04:56:24.632757566Z" level=info msg="Container to stop \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:56:24.633000 containerd[1595]: time="2025-09-16T04:56:24.632766727Z" level=info msg="Container to stop \"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:56:24.633000 containerd[1595]: time="2025-09-16T04:56:24.632775137Z" level=info msg="Container to stop \"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:56:24.632679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054-rootfs.mount: Deactivated successfully. Sep 16 04:56:24.638465 systemd[1]: cri-containerd-b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d.scope: Deactivated successfully. 
Sep 16 04:56:24.640656 containerd[1595]: time="2025-09-16T04:56:24.640621731Z" level=info msg="shim disconnected" id=82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054 namespace=k8s.io Sep 16 04:56:24.640656 containerd[1595]: time="2025-09-16T04:56:24.640649302Z" level=warning msg="cleaning up after shim disconnected" id=82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054 namespace=k8s.io Sep 16 04:56:24.642811 containerd[1595]: time="2025-09-16T04:56:24.640654792Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:56:24.659721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d-rootfs.mount: Deactivated successfully. Sep 16 04:56:24.662135 containerd[1595]: time="2025-09-16T04:56:24.662106318Z" level=info msg="received exit event sandbox_id:\"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\" exit_status:137 exited_at:{seconds:1757998584 nanos:587350598}" Sep 16 04:56:24.663248 containerd[1595]: time="2025-09-16T04:56:24.662189369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" id:\"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" pid:3128 exit_status:137 exited_at:{seconds:1757998584 nanos:643337131}" Sep 16 04:56:24.663374 containerd[1595]: time="2025-09-16T04:56:24.663312971Z" level=info msg="TearDown network for sandbox \"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\" successfully" Sep 16 04:56:24.663374 containerd[1595]: time="2025-09-16T04:56:24.663327791Z" level=info msg="StopPodSandbox for \"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\" returns successfully" Sep 16 04:56:24.663483 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054-shm.mount: Deactivated successfully. 
Sep 16 04:56:24.667190 containerd[1595]: time="2025-09-16T04:56:24.666683042Z" level=info msg="shim disconnected" id=b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d namespace=k8s.io Sep 16 04:56:24.667190 containerd[1595]: time="2025-09-16T04:56:24.666701023Z" level=warning msg="cleaning up after shim disconnected" id=b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d namespace=k8s.io Sep 16 04:56:24.667190 containerd[1595]: time="2025-09-16T04:56:24.666706983Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:56:24.667523 containerd[1595]: time="2025-09-16T04:56:24.667395625Z" level=error msg="Failed to handle event container_id:\"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" id:\"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" pid:3128 exit_status:137 exited_at:{seconds:1757998584 nanos:643337131} for b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d" error="failed to handle container TaskExit event: failed to stop sandbox: ttrpc: closed" Sep 16 04:56:24.676991 containerd[1595]: time="2025-09-16T04:56:24.676811899Z" level=info msg="received exit event sandbox_id:\"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" exit_status:137 exited_at:{seconds:1757998584 nanos:643337131}" Sep 16 04:56:24.676991 containerd[1595]: time="2025-09-16T04:56:24.676899421Z" level=info msg="TearDown network for sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" successfully" Sep 16 04:56:24.676991 containerd[1595]: time="2025-09-16T04:56:24.676916981Z" level=info msg="StopPodSandbox for \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" returns successfully" Sep 16 04:56:24.697605 kubelet[2757]: I0916 04:56:24.697570 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f4369ee-803b-4f87-afa7-14257e03f19c-cilium-config-path\") pod \"4f4369ee-803b-4f87-afa7-14257e03f19c\" (UID: \"4f4369ee-803b-4f87-afa7-14257e03f19c\") " Sep 16 04:56:24.697605 kubelet[2757]: I0916 04:56:24.697610 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngqv8\" (UniqueName: \"kubernetes.io/projected/4f4369ee-803b-4f87-afa7-14257e03f19c-kube-api-access-ngqv8\") pod \"4f4369ee-803b-4f87-afa7-14257e03f19c\" (UID: \"4f4369ee-803b-4f87-afa7-14257e03f19c\") " Sep 16 04:56:24.700491 kubelet[2757]: I0916 04:56:24.700469 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4369ee-803b-4f87-afa7-14257e03f19c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4f4369ee-803b-4f87-afa7-14257e03f19c" (UID: "4f4369ee-803b-4f87-afa7-14257e03f19c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 04:56:24.702689 kubelet[2757]: I0916 04:56:24.702654 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4369ee-803b-4f87-afa7-14257e03f19c-kube-api-access-ngqv8" (OuterVolumeSpecName: "kube-api-access-ngqv8") pod "4f4369ee-803b-4f87-afa7-14257e03f19c" (UID: "4f4369ee-803b-4f87-afa7-14257e03f19c"). InnerVolumeSpecName "kube-api-access-ngqv8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:56:24.798802 kubelet[2757]: I0916 04:56:24.798710 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd1b4b60-c763-4a97-b587-14cd802104d8-cilium-config-path\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 04:56:24.798802 kubelet[2757]: I0916 04:56:24.798756 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-lib-modules\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 04:56:24.798802 kubelet[2757]: I0916 04:56:24.798782 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-bpf-maps\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 04:56:24.798802 kubelet[2757]: I0916 04:56:24.798796 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-hostproc\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 04:56:24.798802 kubelet[2757]: I0916 04:56:24.798818 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq8sc\" (UniqueName: \"kubernetes.io/projected/bd1b4b60-c763-4a97-b587-14cd802104d8-kube-api-access-jq8sc\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 04:56:24.799203 kubelet[2757]: I0916 04:56:24.798834 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-xtables-lock\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 04:56:24.799203 kubelet[2757]: I0916 04:56:24.798850 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd1b4b60-c763-4a97-b587-14cd802104d8-hubble-tls\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 04:56:24.799203 kubelet[2757]: I0916 04:56:24.798864 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-etc-cni-netd\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 04:56:24.799203 kubelet[2757]: I0916 04:56:24.798882 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd1b4b60-c763-4a97-b587-14cd802104d8-clustermesh-secrets\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 04:56:24.799203 kubelet[2757]: I0916 04:56:24.798894 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-cni-path\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 
04:56:24.799203 kubelet[2757]: I0916 04:56:24.798910 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-cilium-run\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 04:56:24.799566 kubelet[2757]: I0916 04:56:24.798924 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-host-proc-sys-net\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 04:56:24.799566 kubelet[2757]: I0916 04:56:24.798938 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-host-proc-sys-kernel\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 04:56:24.799566 kubelet[2757]: I0916 04:56:24.798954 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-cilium-cgroup\") pod \"bd1b4b60-c763-4a97-b587-14cd802104d8\" (UID: \"bd1b4b60-c763-4a97-b587-14cd802104d8\") " Sep 16 04:56:24.799566 kubelet[2757]: I0916 04:56:24.799001 2757 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ngqv8\" (UniqueName: \"kubernetes.io/projected/4f4369ee-803b-4f87-afa7-14257e03f19c-kube-api-access-ngqv8\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.799566 kubelet[2757]: I0916 04:56:24.799011 2757 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f4369ee-803b-4f87-afa7-14257e03f19c-cilium-config-path\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.799566 kubelet[2757]: I0916 04:56:24.799063 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:56:24.801077 kubelet[2757]: I0916 04:56:24.801006 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1b4b60-c763-4a97-b587-14cd802104d8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 04:56:24.801077 kubelet[2757]: I0916 04:56:24.801054 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:56:24.801465 kubelet[2757]: I0916 04:56:24.801405 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:56:24.801465 kubelet[2757]: I0916 04:56:24.801437 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:56:24.801653 kubelet[2757]: I0916 04:56:24.801573 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-hostproc" (OuterVolumeSpecName: "hostproc") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:56:24.803388 kubelet[2757]: I0916 04:56:24.803371 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-cni-path" (OuterVolumeSpecName: "cni-path") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:56:24.803585 kubelet[2757]: I0916 04:56:24.803464 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:56:24.803585 kubelet[2757]: I0916 04:56:24.803479 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:56:24.803585 kubelet[2757]: I0916 04:56:24.803492 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:56:24.803585 kubelet[2757]: I0916 04:56:24.803523 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:56:24.804640 kubelet[2757]: I0916 04:56:24.804603 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1b4b60-c763-4a97-b587-14cd802104d8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:56:24.805904 kubelet[2757]: I0916 04:56:24.805750 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1b4b60-c763-4a97-b587-14cd802104d8-kube-api-access-jq8sc" (OuterVolumeSpecName: "kube-api-access-jq8sc") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "kube-api-access-jq8sc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:56:24.806123 kubelet[2757]: I0916 04:56:24.806091 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1b4b60-c763-4a97-b587-14cd802104d8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bd1b4b60-c763-4a97-b587-14cd802104d8" (UID: "bd1b4b60-c763-4a97-b587-14cd802104d8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 04:56:24.899553 kubelet[2757]: I0916 04:56:24.899475 2757 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd1b4b60-c763-4a97-b587-14cd802104d8-hubble-tls\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.899553 kubelet[2757]: I0916 04:56:24.899519 2757 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-etc-cni-netd\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.899553 kubelet[2757]: I0916 04:56:24.899534 2757 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd1b4b60-c763-4a97-b587-14cd802104d8-clustermesh-secrets\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.899553 kubelet[2757]: I0916 04:56:24.899547 2757 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-cni-path\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.899553 kubelet[2757]: I0916 04:56:24.899562 2757 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-cilium-run\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.899882 kubelet[2757]: I0916 04:56:24.899579 2757 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-host-proc-sys-net\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.899882 kubelet[2757]: I0916 04:56:24.899590 2757 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-host-proc-sys-kernel\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.899882 kubelet[2757]: I0916 04:56:24.899603 2757 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-cilium-cgroup\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.899882 kubelet[2757]: I0916 04:56:24.899614 2757 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd1b4b60-c763-4a97-b587-14cd802104d8-cilium-config-path\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.899882 kubelet[2757]: I0916 04:56:24.899626 2757 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-lib-modules\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.899882 kubelet[2757]: I0916 04:56:24.899640 2757 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-bpf-maps\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.899882 kubelet[2757]: I0916 04:56:24.899660 2757 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-hostproc\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.899882 kubelet[2757]: I0916 04:56:24.899679 2757 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jq8sc\" (UniqueName: \"kubernetes.io/projected/bd1b4b60-c763-4a97-b587-14cd802104d8-kube-api-access-jq8sc\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:24.900122 kubelet[2757]: I0916 04:56:24.899700 2757 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd1b4b60-c763-4a97-b587-14cd802104d8-xtables-lock\") on node \"ci-4459-0-0-n-26104e5955\" DevicePath \"\"" Sep 16 04:56:25.310030 kubelet[2757]: I0916 04:56:25.309957 2757 scope.go:117] "RemoveContainer" containerID="2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf" Sep 16 04:56:25.318285 containerd[1595]: time="2025-09-16T04:56:25.318247666Z" level=info msg="RemoveContainer for \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\"" Sep 16 04:56:25.326751 systemd[1]: Removed slice kubepods-burstable-podbd1b4b60_c763_4a97_b587_14cd802104d8.slice - libcontainer container kubepods-burstable-podbd1b4b60_c763_4a97_b587_14cd802104d8.slice. Sep 16 04:56:25.326907 systemd[1]: kubepods-burstable-podbd1b4b60_c763_4a97_b587_14cd802104d8.slice: Consumed 5.241s CPU time, 191.2M memory peak, 72.2M read from disk, 13.3M written to disk. Sep 16 04:56:25.332468 containerd[1595]: time="2025-09-16T04:56:25.332357559Z" level=info msg="RemoveContainer for \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\" returns successfully" Sep 16 04:56:25.333116 kubelet[2757]: I0916 04:56:25.333055 2757 scope.go:117] "RemoveContainer" containerID="b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a" Sep 16 04:56:25.339568 systemd[1]: Removed slice kubepods-besteffort-pod4f4369ee_803b_4f87_afa7_14257e03f19c.slice - libcontainer container kubepods-besteffort-pod4f4369ee_803b_4f87_afa7_14257e03f19c.slice. Sep 16 04:56:25.339727 systemd[1]: kubepods-besteffort-pod4f4369ee_803b_4f87_afa7_14257e03f19c.slice: Consumed 462ms CPU time, 38.9M memory peak, 13M read from disk, 4K written to disk. 
Sep 16 04:56:25.341064 containerd[1595]: time="2025-09-16T04:56:25.340984567Z" level=info msg="RemoveContainer for \"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\"" Sep 16 04:56:25.370569 containerd[1595]: time="2025-09-16T04:56:25.369810334Z" level=info msg="RemoveContainer for \"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\" returns successfully" Sep 16 04:56:25.370761 kubelet[2757]: I0916 04:56:25.370605 2757 scope.go:117] "RemoveContainer" containerID="ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177" Sep 16 04:56:25.374039 containerd[1595]: time="2025-09-16T04:56:25.373975895Z" level=info msg="RemoveContainer for \"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\"" Sep 16 04:56:25.379490 containerd[1595]: time="2025-09-16T04:56:25.379438790Z" level=info msg="RemoveContainer for \"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\" returns successfully" Sep 16 04:56:25.379841 kubelet[2757]: I0916 04:56:25.379803 2757 scope.go:117] "RemoveContainer" containerID="b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9" Sep 16 04:56:25.382850 containerd[1595]: time="2025-09-16T04:56:25.382220384Z" level=info msg="RemoveContainer for \"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\"" Sep 16 04:56:25.386701 containerd[1595]: time="2025-09-16T04:56:25.386671060Z" level=info msg="RemoveContainer for \"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\" returns successfully" Sep 16 04:56:25.387167 kubelet[2757]: I0916 04:56:25.387060 2757 scope.go:117] "RemoveContainer" containerID="9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3" Sep 16 04:56:25.388988 containerd[1595]: time="2025-09-16T04:56:25.388960684Z" level=info msg="RemoveContainer for \"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\"" Sep 16 04:56:25.392954 containerd[1595]: time="2025-09-16T04:56:25.392924941Z" level=info msg="RemoveContainer for \"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\" returns successfully" Sep 16 04:56:25.393418 kubelet[2757]: I0916 04:56:25.393365 2757 scope.go:117] "RemoveContainer" containerID="2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf" Sep 16 04:56:25.396819 containerd[1595]: time="2025-09-16T04:56:25.393637435Z" level=error msg="ContainerStatus for \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\": not found" Sep 16 04:56:25.397059 kubelet[2757]: E0916 04:56:25.396987 2757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\": not found" containerID="2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf" Sep 16 04:56:25.397262 kubelet[2757]: I0916 04:56:25.397059 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf"} err="failed to get container status \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c308b2e68a6a32beb3c838ae9be9878895b5697ab2c711914087ca8ffe9fbbf\": not found" Sep 16 04:56:25.397262 kubelet[2757]: I0916 04:56:25.397168 
2757 scope.go:117] "RemoveContainer" containerID="b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a" Sep 16 04:56:25.397551 containerd[1595]: time="2025-09-16T04:56:25.397480908Z" level=error msg="ContainerStatus for \"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\": not found" Sep 16 04:56:25.397693 kubelet[2757]: E0916 04:56:25.397646 2757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\": not found" containerID="b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a" Sep 16 04:56:25.397693 kubelet[2757]: I0916 04:56:25.397679 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a"} err="failed to get container status \"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b442b72b6859be027646aa89cf008cdb2e45bc5e62d41d2a70c72614a46e646a\": not found" Sep 16 04:56:25.397781 kubelet[2757]: I0916 04:56:25.397696 2757 scope.go:117] "RemoveContainer" containerID="ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177" Sep 16 04:56:25.397959 containerd[1595]: time="2025-09-16T04:56:25.397905047Z" level=error msg="ContainerStatus for \"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\": not found" Sep 16 04:56:25.398302 kubelet[2757]: E0916 04:56:25.398231 2757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\": not found" containerID="ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177" Sep 16 04:56:25.398302 kubelet[2757]: I0916 04:56:25.398259 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177"} err="failed to get container status \"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce41d5ee3a279657f6d95cabf31a2b6473b540acd7de383f472a5396f7dc1177\": not found" Sep 16 04:56:25.398302 kubelet[2757]: I0916 04:56:25.398301 2757 scope.go:117] "RemoveContainer" containerID="b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9" Sep 16 04:56:25.398661 containerd[1595]: time="2025-09-16T04:56:25.398611561Z" level=error msg="ContainerStatus for \"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\": not found" Sep 16 04:56:25.398750 kubelet[2757]: E0916 04:56:25.398726 2757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\": 
not found" containerID="b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9" Sep 16 04:56:25.398788 kubelet[2757]: I0916 04:56:25.398748 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9"} err="failed to get container status \"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3defe0cf3626bc6cfdbcb8ccdf5b350df2f52522bc1079cdff9f1285e5142e9\": not found" Sep 16 04:56:25.398788 kubelet[2757]: I0916 04:56:25.398764 2757 scope.go:117] "RemoveContainer" containerID="9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3" Sep 16 04:56:25.399008 containerd[1595]: time="2025-09-16T04:56:25.398958067Z" level=error msg="ContainerStatus for \"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\": not found" Sep 16 04:56:25.399113 kubelet[2757]: E0916 04:56:25.399072 2757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\": not found" containerID="9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3" Sep 16 04:56:25.399221 kubelet[2757]: I0916 04:56:25.399146 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3"} err="failed to get container status \"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c68a7016429d5d43ffad42a6ecd362ce0852785593e6f8630d06d1498c3ebe3\": not found" Sep 16 04:56:25.399221 kubelet[2757]: I0916 04:56:25.399165 2757 scope.go:117] "RemoveContainer" containerID="e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb" Sep 16 04:56:25.401325 containerd[1595]: time="2025-09-16T04:56:25.401277822Z" level=info msg="RemoveContainer for \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\"" Sep 16 04:56:25.415558 containerd[1595]: time="2025-09-16T04:56:25.415478807Z" level=info msg="RemoveContainer for \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\" returns successfully" Sep 16 04:56:25.416044 kubelet[2757]: I0916 04:56:25.415974 2757 scope.go:117] "RemoveContainer" containerID="e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb" Sep 16 04:56:25.416396 containerd[1595]: time="2025-09-16T04:56:25.416344093Z" level=error msg="ContainerStatus for \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\": not found" Sep 16 04:56:25.416635 kubelet[2757]: E0916 04:56:25.416604 2757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\": not found" containerID="e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb" Sep 16 04:56:25.416674 kubelet[2757]: I0916 04:56:25.416644 2757 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb"} err="failed to get container status \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7c130c54295b5dc1398fe77e5f566c58e2e424d6943ddf0bdad5933db5048cb\": not found" Sep 16 04:56:25.514048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d-shm.mount: Deactivated successfully. Sep 16 04:56:25.514229 systemd[1]: var-lib-kubelet-pods-bd1b4b60\x2dc763\x2d4a97\x2db587\x2d14cd802104d8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 16 04:56:25.514337 systemd[1]: var-lib-kubelet-pods-bd1b4b60\x2dc763\x2d4a97\x2db587\x2d14cd802104d8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 16 04:56:25.514428 systemd[1]: var-lib-kubelet-pods-bd1b4b60\x2dc763\x2d4a97\x2db587\x2d14cd802104d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djq8sc.mount: Deactivated successfully. Sep 16 04:56:25.514514 systemd[1]: var-lib-kubelet-pods-4f4369ee\x2d803b\x2d4f87\x2dafa7\x2d14257e03f19c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dngqv8.mount: Deactivated successfully. Sep 16 04:56:25.839904 kubelet[2757]: I0916 04:56:25.839835 2757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f4369ee-803b-4f87-afa7-14257e03f19c" path="/var/lib/kubelet/pods/4f4369ee-803b-4f87-afa7-14257e03f19c/volumes" Sep 16 04:56:25.840113 kubelet[2757]: I0916 04:56:25.840066 2757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd1b4b60-c763-4a97-b587-14cd802104d8" path="/var/lib/kubelet/pods/bd1b4b60-c763-4a97-b587-14cd802104d8/volumes" Sep 16 04:56:25.942962 kubelet[2757]: E0916 04:56:25.942902 2757 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 04:56:26.067361 containerd[1595]: time="2025-09-16T04:56:26.067253494Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" id:\"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" pid:3128 exit_status:137 exited_at:{seconds:1757998584 nanos:643337131}" Sep 16 04:56:26.534929 sshd[4278]: Connection closed by 139.178.89.65 port 39164 Sep 16 04:56:26.535681 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Sep 16 04:56:26.540476 systemd-logind[1567]: Session 20 logged out. Waiting for processes to exit. Sep 16 04:56:26.541298 systemd[1]: sshd@19-37.27.203.193:22-139.178.89.65:39164.service: Deactivated successfully. Sep 16 04:56:26.543836 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 04:56:26.546074 systemd-logind[1567]: Removed session 20. Sep 16 04:56:26.691137 systemd[1]: Started sshd@20-37.27.203.193:22-139.178.89.65:39176.service - OpenSSH per-connection server daemon (139.178.89.65:39176). Sep 16 04:56:27.677817 sshd[4431]: Accepted publickey for core from 139.178.89.65 port 39176 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:56:27.679516 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:56:27.687783 systemd-logind[1567]: New session 21 of user core. 
Sep 16 04:56:27.693434 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 04:56:28.681982 kubelet[2757]: I0916 04:56:28.681932 2757 memory_manager.go:355] "RemoveStaleState removing state" podUID="4f4369ee-803b-4f87-afa7-14257e03f19c" containerName="cilium-operator" Sep 16 04:56:28.681982 kubelet[2757]: I0916 04:56:28.681963 2757 memory_manager.go:355] "RemoveStaleState removing state" podUID="bd1b4b60-c763-4a97-b587-14cd802104d8" containerName="cilium-agent" Sep 16 04:56:28.700247 systemd[1]: Created slice kubepods-burstable-pod2fc365f0_5fce_4f48_82b6_e5b81f6138f8.slice - libcontainer container kubepods-burstable-pod2fc365f0_5fce_4f48_82b6_e5b81f6138f8.slice. Sep 16 04:56:28.726934 kubelet[2757]: I0916 04:56:28.726527 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppzvm\" (UniqueName: \"kubernetes.io/projected/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-kube-api-access-ppzvm\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.726934 kubelet[2757]: I0916 04:56:28.726571 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-cilium-config-path\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.726934 kubelet[2757]: I0916 04:56:28.726591 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-cilium-ipsec-secrets\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.726934 kubelet[2757]: I0916 04:56:28.726610 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-etc-cni-netd\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.726934 kubelet[2757]: I0916 04:56:28.726627 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-xtables-lock\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.727259 kubelet[2757]: I0916 04:56:28.726645 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-cilium-cgroup\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.727259 kubelet[2757]: I0916 04:56:28.726665 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-cni-path\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.727259 kubelet[2757]: I0916 04:56:28.726685 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-cilium-run\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.727259 kubelet[2757]: I0916 04:56:28.726702 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-lib-modules\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.727259 kubelet[2757]: I0916 04:56:28.726719 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-clustermesh-secrets\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.727259 kubelet[2757]: I0916 04:56:28.726735 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-hubble-tls\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.727445 kubelet[2757]: I0916 04:56:28.726775 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-host-proc-sys-net\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.727445 kubelet[2757]: I0916 04:56:28.726793 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-bpf-maps\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.727445 kubelet[2757]: I0916 04:56:28.726807 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-hostproc\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.727445 kubelet[2757]: I0916 04:56:28.726824 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fc365f0-5fce-4f48-82b6-e5b81f6138f8-host-proc-sys-kernel\") pod \"cilium-txmd2\" (UID: \"2fc365f0-5fce-4f48-82b6-e5b81f6138f8\") " pod="kube-system/cilium-txmd2" Sep 16 04:56:28.906093 sshd[4434]: Connection closed by 139.178.89.65 port 39176 Sep 16 04:56:28.907416 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Sep 16 04:56:28.913011 systemd[1]: sshd@20-37.27.203.193:22-139.178.89.65:39176.service: Deactivated successfully. Sep 16 04:56:28.916059 systemd[1]: session-21.scope: Deactivated successfully. Sep 16 04:56:28.918270 systemd-logind[1567]: Session 21 logged out. Waiting for processes to exit. Sep 16 04:56:28.920631 systemd-logind[1567]: Removed session 21. 
Sep 16 04:56:29.005494 containerd[1595]: time="2025-09-16T04:56:29.005240247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-txmd2,Uid:2fc365f0-5fce-4f48-82b6-e5b81f6138f8,Namespace:kube-system,Attempt:0,}" Sep 16 04:56:29.036665 containerd[1595]: time="2025-09-16T04:56:29.036459792Z" level=info msg="connecting to shim cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925" address="unix:///run/containerd/s/7cc17aa948e39f18491033909a30e1c7a4f23b8f1e33183a93728e58261672b0" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:56:29.082455 systemd[1]: Started cri-containerd-cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925.scope - libcontainer container cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925. Sep 16 04:56:29.085933 systemd[1]: Started sshd@21-37.27.203.193:22-139.178.89.65:39190.service - OpenSSH per-connection server daemon (139.178.89.65:39190). Sep 16 04:56:29.140953 containerd[1595]: time="2025-09-16T04:56:29.140909522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-txmd2,Uid:2fc365f0-5fce-4f48-82b6-e5b81f6138f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925\"" Sep 16 04:56:29.145835 containerd[1595]: time="2025-09-16T04:56:29.145797163Z" level=info msg="CreateContainer within sandbox \"cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 04:56:29.153918 containerd[1595]: time="2025-09-16T04:56:29.153545118Z" level=info msg="Container 26dd1eb51db8d3587bb5cdbd8143141978bb15e52c2941cf482debca84bcb4e2: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:56:29.161280 containerd[1595]: time="2025-09-16T04:56:29.161229521Z" level=info msg="CreateContainer within sandbox \"cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"26dd1eb51db8d3587bb5cdbd8143141978bb15e52c2941cf482debca84bcb4e2\"" Sep 16 04:56:29.163425 containerd[1595]: time="2025-09-16T04:56:29.162307325Z" level=info msg="StartContainer for \"26dd1eb51db8d3587bb5cdbd8143141978bb15e52c2941cf482debca84bcb4e2\"" Sep 16 04:56:29.163425 containerd[1595]: time="2025-09-16T04:56:29.162803227Z" level=info msg="connecting to shim 26dd1eb51db8d3587bb5cdbd8143141978bb15e52c2941cf482debca84bcb4e2" address="unix:///run/containerd/s/7cc17aa948e39f18491033909a30e1c7a4f23b8f1e33183a93728e58261672b0" protocol=ttrpc version=3 Sep 16 04:56:29.193509 systemd[1]: Started cri-containerd-26dd1eb51db8d3587bb5cdbd8143141978bb15e52c2941cf482debca84bcb4e2.scope - libcontainer container 26dd1eb51db8d3587bb5cdbd8143141978bb15e52c2941cf482debca84bcb4e2. 
Sep 16 04:56:29.242539 kubelet[2757]: I0916 04:56:29.241101 2757 setters.go:602] "Node became not ready" node="ci-4459-0-0-n-26104e5955" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-16T04:56:29Z","lastTransitionTime":"2025-09-16T04:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 16 04:56:29.248586 containerd[1595]: time="2025-09-16T04:56:29.248518093Z" level=info msg="StartContainer for \"26dd1eb51db8d3587bb5cdbd8143141978bb15e52c2941cf482debca84bcb4e2\" returns successfully" Sep 16 04:56:29.300601 systemd[1]: cri-containerd-26dd1eb51db8d3587bb5cdbd8143141978bb15e52c2941cf482debca84bcb4e2.scope: Deactivated successfully. Sep 16 04:56:29.301253 systemd[1]: cri-containerd-26dd1eb51db8d3587bb5cdbd8143141978bb15e52c2941cf482debca84bcb4e2.scope: Consumed 33ms CPU time, 9.7M memory peak, 3.3M read from disk. Sep 16 04:56:29.303505 containerd[1595]: time="2025-09-16T04:56:29.303460855Z" level=info msg="received exit event container_id:\"26dd1eb51db8d3587bb5cdbd8143141978bb15e52c2941cf482debca84bcb4e2\" id:\"26dd1eb51db8d3587bb5cdbd8143141978bb15e52c2941cf482debca84bcb4e2\" pid:4511 exited_at:{seconds:1757998589 nanos:303226270}" Sep 16 04:56:29.303673 containerd[1595]: time="2025-09-16T04:56:29.303656559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26dd1eb51db8d3587bb5cdbd8143141978bb15e52c2941cf482debca84bcb4e2\" id:\"26dd1eb51db8d3587bb5cdbd8143141978bb15e52c2941cf482debca84bcb4e2\" pid:4511 exited_at:{seconds:1757998589 nanos:303226270}" Sep 16 04:56:29.337589 containerd[1595]: time="2025-09-16T04:56:29.337536865Z" level=info msg="CreateContainer within sandbox \"cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 04:56:29.350214 containerd[1595]: time="2025-09-16T04:56:29.350054128Z" level=info msg="Container c0a04c123e570b3d1269abcf6c3c8e00db91d86d0c1661f86a4fb497f5ac7a51: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:56:29.355416 containerd[1595]: time="2025-09-16T04:56:29.355393859Z" level=info msg="CreateContainer within sandbox \"cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c0a04c123e570b3d1269abcf6c3c8e00db91d86d0c1661f86a4fb497f5ac7a51\"" Sep 16 04:56:29.355869 containerd[1595]: time="2025-09-16T04:56:29.355855069Z" level=info msg="StartContainer for \"c0a04c123e570b3d1269abcf6c3c8e00db91d86d0c1661f86a4fb497f5ac7a51\"" Sep 16 04:56:29.356453 containerd[1595]: time="2025-09-16T04:56:29.356412231Z" level=info msg="connecting to shim c0a04c123e570b3d1269abcf6c3c8e00db91d86d0c1661f86a4fb497f5ac7a51" address="unix:///run/containerd/s/7cc17aa948e39f18491033909a30e1c7a4f23b8f1e33183a93728e58261672b0" protocol=ttrpc version=3 Sep 16 04:56:29.372282 systemd[1]: Started cri-containerd-c0a04c123e570b3d1269abcf6c3c8e00db91d86d0c1661f86a4fb497f5ac7a51.scope - libcontainer container c0a04c123e570b3d1269abcf6c3c8e00db91d86d0c1661f86a4fb497f5ac7a51. 
Sep 16 04:56:29.401495 containerd[1595]: time="2025-09-16T04:56:29.401300586Z" level=info msg="StartContainer for \"c0a04c123e570b3d1269abcf6c3c8e00db91d86d0c1661f86a4fb497f5ac7a51\" returns successfully" Sep 16 04:56:29.404285 systemd[1]: cri-containerd-c0a04c123e570b3d1269abcf6c3c8e00db91d86d0c1661f86a4fb497f5ac7a51.scope: Deactivated successfully. Sep 16 04:56:29.404575 systemd[1]: cri-containerd-c0a04c123e570b3d1269abcf6c3c8e00db91d86d0c1661f86a4fb497f5ac7a51.scope: Consumed 13ms CPU time, 7.4M memory peak, 2.1M read from disk. Sep 16 04:56:29.406081 containerd[1595]: time="2025-09-16T04:56:29.406000892Z" level=info msg="received exit event container_id:\"c0a04c123e570b3d1269abcf6c3c8e00db91d86d0c1661f86a4fb497f5ac7a51\" id:\"c0a04c123e570b3d1269abcf6c3c8e00db91d86d0c1661f86a4fb497f5ac7a51\" pid:4557 exited_at:{seconds:1757998589 nanos:405665705}" Sep 16 04:56:29.408269 containerd[1595]: time="2025-09-16T04:56:29.408141701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0a04c123e570b3d1269abcf6c3c8e00db91d86d0c1661f86a4fb497f5ac7a51\" id:\"c0a04c123e570b3d1269abcf6c3c8e00db91d86d0c1661f86a4fb497f5ac7a51\" pid:4557 exited_at:{seconds:1757998589 nanos:405665705}" Sep 16 04:56:29.835951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961882621.mount: Deactivated successfully. Sep 16 04:56:30.093129 sshd[4481]: Accepted publickey for core from 139.178.89.65 port 39190 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:56:30.094342 sshd-session[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:56:30.100553 systemd-logind[1567]: New session 22 of user core. Sep 16 04:56:30.103367 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 16 04:56:30.348135 containerd[1595]: time="2025-09-16T04:56:30.347901406Z" level=info msg="CreateContainer within sandbox \"cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 04:56:30.367235 containerd[1595]: time="2025-09-16T04:56:30.366393788Z" level=info msg="Container 126addd7cb89eade4dccaafec5197763789c2e264a77e687cbb20a199b56869e: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:56:30.383829 containerd[1595]: time="2025-09-16T04:56:30.383761484Z" level=info msg="CreateContainer within sandbox \"cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"126addd7cb89eade4dccaafec5197763789c2e264a77e687cbb20a199b56869e\"" Sep 16 04:56:30.384567 containerd[1595]: time="2025-09-16T04:56:30.384348987Z" level=info msg="StartContainer for \"126addd7cb89eade4dccaafec5197763789c2e264a77e687cbb20a199b56869e\"" Sep 16 04:56:30.392653 containerd[1595]: time="2025-09-16T04:56:30.392565409Z" level=info msg="connecting to shim 126addd7cb89eade4dccaafec5197763789c2e264a77e687cbb20a199b56869e" address="unix:///run/containerd/s/7cc17aa948e39f18491033909a30e1c7a4f23b8f1e33183a93728e58261672b0" protocol=ttrpc version=3 Sep 16 04:56:30.422251 systemd[1]: Started cri-containerd-126addd7cb89eade4dccaafec5197763789c2e264a77e687cbb20a199b56869e.scope - libcontainer container 126addd7cb89eade4dccaafec5197763789c2e264a77e687cbb20a199b56869e. Sep 16 04:56:30.466477 systemd[1]: cri-containerd-126addd7cb89eade4dccaafec5197763789c2e264a77e687cbb20a199b56869e.scope: Deactivated successfully. 
Sep 16 04:56:30.468361 containerd[1595]: time="2025-09-16T04:56:30.467973490Z" level=info msg="StartContainer for \"126addd7cb89eade4dccaafec5197763789c2e264a77e687cbb20a199b56869e\" returns successfully" Sep 16 04:56:30.469068 containerd[1595]: time="2025-09-16T04:56:30.468654746Z" level=info msg="received exit event container_id:\"126addd7cb89eade4dccaafec5197763789c2e264a77e687cbb20a199b56869e\" id:\"126addd7cb89eade4dccaafec5197763789c2e264a77e687cbb20a199b56869e\" pid:4600 exited_at:{seconds:1757998590 nanos:467871367}" Sep 16 04:56:30.474207 containerd[1595]: time="2025-09-16T04:56:30.474092153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"126addd7cb89eade4dccaafec5197763789c2e264a77e687cbb20a199b56869e\" id:\"126addd7cb89eade4dccaafec5197763789c2e264a77e687cbb20a199b56869e\" pid:4600 exited_at:{seconds:1757998590 nanos:467871367}" Sep 16 04:56:30.502096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-126addd7cb89eade4dccaafec5197763789c2e264a77e687cbb20a199b56869e-rootfs.mount: Deactivated successfully. Sep 16 04:56:30.767014 sshd[4586]: Connection closed by 139.178.89.65 port 39190 Sep 16 04:56:30.768290 sshd-session[4481]: pam_unix(sshd:session): session closed for user core Sep 16 04:56:30.772458 systemd[1]: sshd@21-37.27.203.193:22-139.178.89.65:39190.service: Deactivated successfully. Sep 16 04:56:30.774937 systemd[1]: session-22.scope: Deactivated successfully. Sep 16 04:56:30.778060 systemd-logind[1567]: Session 22 logged out. Waiting for processes to exit. Sep 16 04:56:30.779872 systemd-logind[1567]: Removed session 22. Sep 16 04:56:30.944242 kubelet[2757]: E0916 04:56:30.944126 2757 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 04:56:30.971743 systemd[1]: Started sshd@22-37.27.203.193:22-139.178.89.65:34574.service - OpenSSH per-connection server daemon (139.178.89.65:34574). Sep 16 04:56:31.356818 containerd[1595]: time="2025-09-16T04:56:31.356738180Z" level=info msg="CreateContainer within sandbox \"cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 04:56:31.371242 containerd[1595]: time="2025-09-16T04:56:31.370234256Z" level=info msg="Container 729fffd969a0bd313b915d171b2cb498c907b8558a80b7e12653dd5ea9e2217b: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:56:31.394586 containerd[1595]: time="2025-09-16T04:56:31.394499250Z" level=info msg="CreateContainer within sandbox \"cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"729fffd969a0bd313b915d171b2cb498c907b8558a80b7e12653dd5ea9e2217b\"" Sep 16 04:56:31.398338 containerd[1595]: time="2025-09-16T04:56:31.398240050Z" level=info msg="StartContainer for \"729fffd969a0bd313b915d171b2cb498c907b8558a80b7e12653dd5ea9e2217b\"" Sep 16 04:56:31.403479 containerd[1595]: time="2025-09-16T04:56:31.403397534Z" level=info msg="connecting to shim 729fffd969a0bd313b915d171b2cb498c907b8558a80b7e12653dd5ea9e2217b" address="unix:///run/containerd/s/7cc17aa948e39f18491033909a30e1c7a4f23b8f1e33183a93728e58261672b0" protocol=ttrpc version=3 Sep 16 04:56:31.440492 systemd[1]: Started cri-containerd-729fffd969a0bd313b915d171b2cb498c907b8558a80b7e12653dd5ea9e2217b.scope - libcontainer container 729fffd969a0bd313b915d171b2cb498c907b8558a80b7e12653dd5ea9e2217b. 
Sep 16 04:56:31.488588 systemd[1]: cri-containerd-729fffd969a0bd313b915d171b2cb498c907b8558a80b7e12653dd5ea9e2217b.scope: Deactivated successfully. Sep 16 04:56:31.490579 containerd[1595]: time="2025-09-16T04:56:31.490496101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"729fffd969a0bd313b915d171b2cb498c907b8558a80b7e12653dd5ea9e2217b\" id:\"729fffd969a0bd313b915d171b2cb498c907b8558a80b7e12653dd5ea9e2217b\" pid:4648 exited_at:{seconds:1757998591 nanos:489419396}" Sep 16 04:56:31.490696 containerd[1595]: time="2025-09-16T04:56:31.490636184Z" level=info msg="received exit event container_id:\"729fffd969a0bd313b915d171b2cb498c907b8558a80b7e12653dd5ea9e2217b\" id:\"729fffd969a0bd313b915d171b2cb498c907b8558a80b7e12653dd5ea9e2217b\" pid:4648 exited_at:{seconds:1757998591 nanos:489419396}" Sep 16 04:56:31.502886 containerd[1595]: time="2025-09-16T04:56:31.502840338Z" level=info msg="StartContainer for \"729fffd969a0bd313b915d171b2cb498c907b8558a80b7e12653dd5ea9e2217b\" returns successfully" Sep 16 04:56:31.529689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-729fffd969a0bd313b915d171b2cb498c907b8558a80b7e12653dd5ea9e2217b-rootfs.mount: Deactivated successfully. Sep 16 04:56:32.095542 sshd[4633]: Accepted publickey for core from 139.178.89.65 port 34574 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:56:32.097729 sshd-session[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:56:32.106149 systemd-logind[1567]: New session 23 of user core. Sep 16 04:56:32.116457 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 16 04:56:32.363533 containerd[1595]: time="2025-09-16T04:56:32.363152145Z" level=info msg="CreateContainer within sandbox \"cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 04:56:32.393223 containerd[1595]: time="2025-09-16T04:56:32.389767744Z" level=info msg="Container d0fb389d96cff5e27610965fcff837fa2f7a9355652755633228e402b494675e: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:56:32.413140 containerd[1595]: time="2025-09-16T04:56:32.413019840Z" level=info msg="CreateContainer within sandbox \"cc4fa48781115dca1ce68eb0203c9548566c1730a11b466d6d143e61e9e54925\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d0fb389d96cff5e27610965fcff837fa2f7a9355652755633228e402b494675e\"" Sep 16 04:56:32.415232 containerd[1595]: time="2025-09-16T04:56:32.414172839Z" level=info msg="StartContainer for \"d0fb389d96cff5e27610965fcff837fa2f7a9355652755633228e402b494675e\"" Sep 16 04:56:32.415915 containerd[1595]: time="2025-09-16T04:56:32.415850620Z" level=info msg="connecting to shim d0fb389d96cff5e27610965fcff837fa2f7a9355652755633228e402b494675e" address="unix:///run/containerd/s/7cc17aa948e39f18491033909a30e1c7a4f23b8f1e33183a93728e58261672b0" protocol=ttrpc version=3 Sep 16 04:56:32.446467 systemd[1]: Started cri-containerd-d0fb389d96cff5e27610965fcff837fa2f7a9355652755633228e402b494675e.scope - libcontainer container d0fb389d96cff5e27610965fcff837fa2f7a9355652755633228e402b494675e. 
Sep 16 04:56:32.499414 containerd[1595]: time="2025-09-16T04:56:32.499361160Z" level=info msg="StartContainer for \"d0fb389d96cff5e27610965fcff837fa2f7a9355652755633228e402b494675e\" returns successfully" Sep 16 04:56:32.592552 containerd[1595]: time="2025-09-16T04:56:32.592412866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0fb389d96cff5e27610965fcff837fa2f7a9355652755633228e402b494675e\" id:\"25709a5dd85540172964c703a53a85ed40f2599138226e621cdc997419d96aa2\" pid:4718 exited_at:{seconds:1757998592 nanos:592085328}" Sep 16 04:56:32.963286 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 16 04:56:33.388374 kubelet[2757]: I0916 04:56:33.387871 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-txmd2" podStartSLOduration=5.387847935 podStartE2EDuration="5.387847935s" podCreationTimestamp="2025-09-16 04:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:56:33.385898315 +0000 UTC m=+147.655061251" watchObservedRunningTime="2025-09-16 04:56:33.387847935 +0000 UTC m=+147.657010871" Sep 16 04:56:34.960026 containerd[1595]: time="2025-09-16T04:56:34.959803633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0fb389d96cff5e27610965fcff837fa2f7a9355652755633228e402b494675e\" id:\"579aeef515730818d3ae05daa2c3c4ea32bb15a8dd35e2c0c2d9f889cb1fd7da\" pid:5004 exit_status:1 exited_at:{seconds:1757998594 nanos:959288400}" Sep 16 04:56:35.630758 systemd-networkd[1464]: lxc_health: Link UP Sep 16 04:56:35.637015 systemd-networkd[1464]: lxc_health: Gained carrier Sep 16 04:56:37.243313 containerd[1595]: time="2025-09-16T04:56:37.243089394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0fb389d96cff5e27610965fcff837fa2f7a9355652755633228e402b494675e\" id:\"9890b05afc80a00261ce701174385a5d287f6b93ff076d63bffb28c3cd05db37\" pid:5253 exited_at:{seconds:1757998597 nanos:242172698}" Sep 16 04:56:37.653459 systemd-networkd[1464]: lxc_health: Gained IPv6LL Sep 16 04:56:39.360671 containerd[1595]: time="2025-09-16T04:56:39.360468559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0fb389d96cff5e27610965fcff837fa2f7a9355652755633228e402b494675e\" id:\"f96cb81f13ff488904fa33f31366ed6bd5341ef44c5ceb547da0dbfaa1cc2142\" pid:5286 exited_at:{seconds:1757998599 nanos:360117809}" Sep 16 04:56:41.519084 containerd[1595]: time="2025-09-16T04:56:41.518941012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0fb389d96cff5e27610965fcff837fa2f7a9355652755633228e402b494675e\" id:\"f7132ae171651a2595be837f181cd4a380a9c9c8fa6d8814567d647421dbcc76\" pid:5313 exited_at:{seconds:1757998601 nanos:518332293}" Sep 16 04:56:41.698115 sshd[4675]: Connection closed by 139.178.89.65 port 34574 Sep 16 04:56:41.698514 sshd-session[4633]: pam_unix(sshd:session): session closed for user core Sep 16 04:56:41.705732 systemd[1]: sshd@22-37.27.203.193:22-139.178.89.65:34574.service: Deactivated successfully. Sep 16 04:56:41.706068 systemd-logind[1567]: Session 23 logged out. Waiting for processes to exit. Sep 16 04:56:41.708802 systemd[1]: session-23.scope: Deactivated successfully. Sep 16 04:56:41.711291 systemd-logind[1567]: Removed session 23. 
Sep 16 04:56:58.135236 kubelet[2757]: E0916 04:56:58.135109 2757 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52098->10.0.0.2:2379: read: connection timed out" Sep 16 04:56:58.146388 systemd[1]: cri-containerd-bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e.scope: Deactivated successfully. Sep 16 04:56:58.147846 systemd[1]: cri-containerd-bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e.scope: Consumed 2.075s CPU time, 30.7M memory peak, 11.5M read from disk. Sep 16 04:56:58.154051 containerd[1595]: time="2025-09-16T04:56:58.153885371Z" level=info msg="received exit event container_id:\"bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e\" id:\"bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e\" pid:2619 exit_status:1 exited_at:{seconds:1757998618 nanos:150480814}" Sep 16 04:56:58.156473 containerd[1595]: time="2025-09-16T04:56:58.156397794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e\" id:\"bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e\" pid:2619 exit_status:1 exited_at:{seconds:1757998618 nanos:150480814}" Sep 16 04:56:58.190125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e-rootfs.mount: Deactivated successfully. Sep 16 04:56:58.313697 systemd[1]: cri-containerd-376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594.scope: Deactivated successfully. Sep 16 04:56:58.314141 systemd[1]: cri-containerd-376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594.scope: Consumed 2.814s CPU time, 74.6M memory peak, 18.7M read from disk. Sep 16 04:56:58.317240 containerd[1595]: time="2025-09-16T04:56:58.316962668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594\" id:\"376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594\" pid:2602 exit_status:1 exited_at:{seconds:1757998618 nanos:316424797}" Sep 16 04:56:58.319402 containerd[1595]: time="2025-09-16T04:56:58.319353497Z" level=info msg="received exit event container_id:\"376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594\" id:\"376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594\" pid:2602 exit_status:1 exited_at:{seconds:1757998618 nanos:316424797}" Sep 16 04:56:58.349857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594-rootfs.mount: Deactivated successfully. 
Sep 16 04:56:58.436358 kubelet[2757]: I0916 04:56:58.435925 2757 scope.go:117] "RemoveContainer" containerID="bf16d10f1813930f11adf00c18569cf2fa78a6ac2b0d8ab307f6a4c76a866e1e" Sep 16 04:56:58.441225 containerd[1595]: time="2025-09-16T04:56:58.441065470Z" level=info msg="CreateContainer within sandbox \"b2147c2b6bf17da13c0326233bc7331ca32f43bfbee164377cb9dff793d4ec00\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 16 04:56:58.441810 kubelet[2757]: I0916 04:56:58.441736 2757 scope.go:117] "RemoveContainer" containerID="376370c969fc548bb6020ad7625a9a59b6af760d45764dd1ee4e7c44eda1d594" Sep 16 04:56:58.444937 containerd[1595]: time="2025-09-16T04:56:58.444851110Z" level=info msg="CreateContainer within sandbox \"9521d4ee85e3c282e8642fa90d215f269aba965bee9bd594c48e863c2ee004b1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 16 04:56:58.463238 containerd[1595]: time="2025-09-16T04:56:58.462438232Z" level=info msg="Container 4d81405a0ce38adec0e9cad746e28eebd3527bb0db7ac4769a1eea33b1b905ff: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:56:58.466968 containerd[1595]: time="2025-09-16T04:56:58.466728262Z" level=info msg="Container cd523e951d3a4e19ebedf3ff5ac1203549c0d0bd8393693ed8c5961e95ab560d: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:56:58.478829 containerd[1595]: time="2025-09-16T04:56:58.478763288Z" level=info msg="CreateContainer within sandbox \"b2147c2b6bf17da13c0326233bc7331ca32f43bfbee164377cb9dff793d4ec00\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4d81405a0ce38adec0e9cad746e28eebd3527bb0db7ac4769a1eea33b1b905ff\"" Sep 16 04:56:58.479324 containerd[1595]: time="2025-09-16T04:56:58.479270236Z" level=info msg="StartContainer for \"4d81405a0ce38adec0e9cad746e28eebd3527bb0db7ac4769a1eea33b1b905ff\"" Sep 16 04:56:58.480812 containerd[1595]: time="2025-09-16T04:56:58.480749062Z" level=info msg="connecting to shim 4d81405a0ce38adec0e9cad746e28eebd3527bb0db7ac4769a1eea33b1b905ff" address="unix:///run/containerd/s/929947e15e8f672652939984ba6d2e8894ae51f02c7480dea23809af43870664" protocol=ttrpc version=3 Sep 16 04:56:58.486646 containerd[1595]: time="2025-09-16T04:56:58.486524925Z" level=info msg="CreateContainer within sandbox \"9521d4ee85e3c282e8642fa90d215f269aba965bee9bd594c48e863c2ee004b1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cd523e951d3a4e19ebedf3ff5ac1203549c0d0bd8393693ed8c5961e95ab560d\"" Sep 16 04:56:58.488790 containerd[1595]: time="2025-09-16T04:56:58.488728847Z" level=info msg="StartContainer for \"cd523e951d3a4e19ebedf3ff5ac1203549c0d0bd8393693ed8c5961e95ab560d\"" Sep 16 04:56:58.492143 containerd[1595]: time="2025-09-16T04:56:58.492070702Z" level=info msg="connecting to shim cd523e951d3a4e19ebedf3ff5ac1203549c0d0bd8393693ed8c5961e95ab560d" address="unix:///run/containerd/s/b02ece756965b22c8a1501d69ea35d1a1554e88047cc9903bd31280b39b758da" protocol=ttrpc version=3 Sep 16 04:56:58.515659 systemd[1]: Started cri-containerd-4d81405a0ce38adec0e9cad746e28eebd3527bb0db7ac4769a1eea33b1b905ff.scope - libcontainer container 4d81405a0ce38adec0e9cad746e28eebd3527bb0db7ac4769a1eea33b1b905ff. Sep 16 04:56:58.532492 systemd[1]: Started cri-containerd-cd523e951d3a4e19ebedf3ff5ac1203549c0d0bd8393693ed8c5961e95ab560d.scope - libcontainer container cd523e951d3a4e19ebedf3ff5ac1203549c0d0bd8393693ed8c5961e95ab560d. 
Sep 16 04:56:58.621013 containerd[1595]: time="2025-09-16T04:56:58.620947740Z" level=info msg="StartContainer for \"4d81405a0ce38adec0e9cad746e28eebd3527bb0db7ac4769a1eea33b1b905ff\" returns successfully" Sep 16 04:56:58.650922 containerd[1595]: time="2025-09-16T04:56:58.650692534Z" level=info msg="StartContainer for \"cd523e951d3a4e19ebedf3ff5ac1203549c0d0bd8393693ed8c5961e95ab560d\" returns successfully" Sep 16 04:57:01.338748 kubelet[2757]: E0916 04:57:01.336430 2757 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:51904->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-0-0-n-26104e5955.1865aa6c1b5183b9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-0-0-n-26104e5955,UID:1d03d5e41a407ba53ad179d7390ebf0c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-0-0-n-26104e5955,},FirstTimestamp:2025-09-16 04:56:50.849563577 +0000 UTC m=+165.118726523,LastTimestamp:2025-09-16 04:56:50.849563577 +0000 UTC m=+165.118726523,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-0-0-n-26104e5955,}" Sep 16 04:57:05.835625 containerd[1595]: time="2025-09-16T04:57:05.835524300Z" level=info msg="StopPodSandbox for \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\"" Sep 16 04:57:05.836123 containerd[1595]: time="2025-09-16T04:57:05.835726948Z" level=info msg="TearDown network for sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" successfully" Sep 16 04:57:05.836123 containerd[1595]: time="2025-09-16T04:57:05.835741818Z" level=info msg="StopPodSandbox for \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" returns successfully" Sep 16 04:57:05.836388 containerd[1595]: time="2025-09-16T04:57:05.836319510Z" level=info msg="RemovePodSandbox for \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\"" Sep 16 04:57:05.836388 containerd[1595]: time="2025-09-16T04:57:05.836363562Z" level=info msg="Forcibly stopping sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\"" Sep 16 04:57:05.836522 containerd[1595]: time="2025-09-16T04:57:05.836484947Z" level=info msg="TearDown network for sandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" successfully" Sep 16 04:57:05.841262 containerd[1595]: time="2025-09-16T04:57:05.841229372Z" level=info msg="Ensure that sandbox b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d in task-service has been cleanup successfully" Sep 16 04:57:05.846431 containerd[1595]: time="2025-09-16T04:57:05.846271358Z" level=info msg="RemovePodSandbox \"b35325fb38586e92fab04c436fb324f7531f10e7e3f02e6c76ae93c9ae53678d\" returns successfully" Sep 16 04:57:05.847230 containerd[1595]: time="2025-09-16T04:57:05.846977297Z" level=info msg="StopPodSandbox for \"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\"" Sep 16 04:57:05.847230 containerd[1595]: time="2025-09-16T04:57:05.847104932Z" level=info msg="TearDown network for sandbox \"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\" successfully" Sep 16 04:57:05.847230 containerd[1595]: 
time="2025-09-16T04:57:05.847118052Z" level=info msg="StopPodSandbox for \"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\" returns successfully" Sep 16 04:57:05.847544 containerd[1595]: time="2025-09-16T04:57:05.847516197Z" level=info msg="RemovePodSandbox for \"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\"" Sep 16 04:57:05.847615 containerd[1595]: time="2025-09-16T04:57:05.847549559Z" level=info msg="Forcibly stopping sandbox \"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\"" Sep 16 04:57:05.847961 containerd[1595]: time="2025-09-16T04:57:05.847668323Z" level=info msg="TearDown network for sandbox \"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\" successfully" Sep 16 04:57:05.849951 containerd[1595]: time="2025-09-16T04:57:05.849823777Z" level=info msg="Ensure that sandbox 82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054 in task-service has been cleanup successfully" Sep 16 04:57:05.853842 containerd[1595]: time="2025-09-16T04:57:05.853795882Z" level=info msg="RemovePodSandbox \"82bfccef92c82eb42f6f07f8f06121cda694b65a186bf03d71ef7dcfd98c1054\" returns successfully"