Sep 13 00:05:08.879837 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:05:08.879860 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:05:08.879869 kernel: BIOS-provided physical RAM map:
Sep 13 00:05:08.879875 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:05:08.879881 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:05:08.879886 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:05:08.879893 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Sep 13 00:05:08.879899 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Sep 13 00:05:08.879906 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:05:08.879912 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 00:05:08.879918 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:05:08.879923 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:05:08.879929 kernel: NX (Execute Disable) protection: active
Sep 13 00:05:08.879935 kernel: APIC: Static calls initialized
Sep 13 00:05:08.879943 kernel: SMBIOS 2.8 present.
Sep 13 00:05:08.879950 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Sep 13 00:05:08.879956 kernel: Hypervisor detected: KVM
Sep 13 00:05:08.879962 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:05:08.879968 kernel: kvm-clock: using sched offset of 3282602679 cycles
Sep 13 00:05:08.879974 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:05:08.879981 kernel: tsc: Detected 2445.404 MHz processor
Sep 13 00:05:08.879987 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:05:08.879994 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:05:08.880002 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Sep 13 00:05:08.880008 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 00:05:08.880014 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:05:08.880020 kernel: Using GB pages for direct mapping
Sep 13 00:05:08.880027 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:05:08.880033 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Sep 13 00:05:08.880039 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:08.880045 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:08.880051 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:08.880059 kernel: ACPI: FACS 0x000000007CFE0000 000040
Sep 13 00:05:08.880065 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:08.880071 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:08.880078 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:08.880084 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:08.880090 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Sep 13 00:05:08.880097 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Sep 13 00:05:08.880103 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Sep 13 00:05:08.880113 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Sep 13 00:05:08.880120 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Sep 13 00:05:08.880126 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Sep 13 00:05:08.880133 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Sep 13 00:05:08.880139 kernel: No NUMA configuration found
Sep 13 00:05:08.880146 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Sep 13 00:05:08.880153 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Sep 13 00:05:08.880160 kernel: Zone ranges:
Sep 13 00:05:08.880167 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:05:08.880173 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Sep 13 00:05:08.880179 kernel: Normal empty
Sep 13 00:05:08.880186 kernel: Movable zone start for each node
Sep 13 00:05:08.880192 kernel: Early memory node ranges
Sep 13 00:05:08.880199 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:05:08.880205 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Sep 13 00:05:08.880211 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Sep 13 00:05:08.880219 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:05:08.880226 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:05:08.880232 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 00:05:08.880239 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:05:08.880245 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:05:08.880252 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:05:08.880258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:05:08.880265 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:05:08.880271 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:05:08.880279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:05:08.880285 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:05:08.880292 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:05:08.880298 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:05:08.880305 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:05:08.880311 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:05:08.880318 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 00:05:08.880339 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:05:08.880346 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:05:08.880355 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:05:08.880362 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:05:08.880369 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:05:08.880375 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:05:08.880381 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 13 00:05:08.880389 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:05:08.880396 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:05:08.880402 kernel: random: crng init done
Sep 13 00:05:08.880415 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:05:08.880428 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:05:08.880447 kernel: Fallback order for Node 0: 0
Sep 13 00:05:08.880462 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Sep 13 00:05:08.880474 kernel: Policy zone: DMA32
Sep 13 00:05:08.880485 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:05:08.880510 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved)
Sep 13 00:05:08.880521 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:05:08.880534 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:05:08.880549 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:05:08.880555 kernel: Dynamic Preempt: voluntary
Sep 13 00:05:08.880562 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:05:08.880570 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:05:08.880577 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:05:08.880584 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:05:08.880591 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:05:08.880597 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:05:08.880604 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:05:08.880611 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:05:08.880619 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:05:08.880626 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:05:08.880632 kernel: Console: colour VGA+ 80x25
Sep 13 00:05:08.880638 kernel: printk: console [tty0] enabled
Sep 13 00:05:08.880645 kernel: printk: console [ttyS0] enabled
Sep 13 00:05:08.880651 kernel: ACPI: Core revision 20230628
Sep 13 00:05:08.880658 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:05:08.880665 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:05:08.880671 kernel: x2apic enabled
Sep 13 00:05:08.880680 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:05:08.880686 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:05:08.880692 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 13 00:05:08.880699 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Sep 13 00:05:08.880706 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:05:08.880712 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:05:08.880719 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:05:08.880726 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:05:08.880738 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:05:08.880745 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:05:08.880752 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:05:08.880761 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:05:08.880767 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:05:08.880774 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:05:08.880781 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:05:08.880788 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:05:08.880796 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:05:08.880804 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:05:08.880811 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:05:08.880818 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 00:05:08.880825 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:05:08.880832 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:05:08.880839 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:05:08.880846 kernel: landlock: Up and running.
Sep 13 00:05:08.880852 kernel: SELinux: Initializing.
Sep 13 00:05:08.880861 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:05:08.880868 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:05:08.880875 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:05:08.880882 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:05:08.880889 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:05:08.880896 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:05:08.880903 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:05:08.880909 kernel: ... version: 0
Sep 13 00:05:08.880916 kernel: ... bit width: 48
Sep 13 00:05:08.880924 kernel: ... generic registers: 6
Sep 13 00:05:08.880931 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:05:08.880938 kernel: ... max period: 00007fffffffffff
Sep 13 00:05:08.880945 kernel: ... fixed-purpose events: 0
Sep 13 00:05:08.880952 kernel: ... event mask: 000000000000003f
Sep 13 00:05:08.880958 kernel: signal: max sigframe size: 1776
Sep 13 00:05:08.880965 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:05:08.880972 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:05:08.880979 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:05:08.880987 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:05:08.881006 kernel: .... node #0, CPUs: #1
Sep 13 00:05:08.881014 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:05:08.881021 kernel: smpboot: Max logical packages: 1
Sep 13 00:05:08.881028 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Sep 13 00:05:08.881035 kernel: devtmpfs: initialized
Sep 13 00:05:08.881042 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:05:08.881049 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:05:08.881056 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:05:08.881064 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:05:08.881071 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:05:08.881078 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:05:08.881085 kernel: audit: type=2000 audit(1757721907.613:1): state=initialized audit_enabled=0 res=1
Sep 13 00:05:08.881092 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:05:08.881099 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:05:08.881105 kernel: cpuidle: using governor menu
Sep 13 00:05:08.881113 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:05:08.881120 kernel: dca service started, version 1.12.1
Sep 13 00:05:08.881128 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 00:05:08.881135 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:05:08.881142 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:05:08.881149 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:05:08.881156 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:05:08.881163 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:05:08.881170 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:05:08.881177 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:05:08.881183 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:05:08.881192 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:05:08.881198 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:05:08.881205 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:05:08.881212 kernel: ACPI: Interpreter enabled
Sep 13 00:05:08.881219 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:05:08.881226 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:05:08.881233 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:05:08.881240 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:05:08.881247 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:05:08.881255 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:05:08.881437 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:05:08.881560 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:05:08.881699 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:05:08.881713 kernel: PCI host bridge to bus 0000:00
Sep 13 00:05:08.881797 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:05:08.881877 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:05:08.881955 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:05:08.882021 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Sep 13 00:05:08.882087 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:05:08.882152 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 00:05:08.882216 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:05:08.882306 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 00:05:08.882426 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Sep 13 00:05:08.882521 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Sep 13 00:05:08.882599 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Sep 13 00:05:08.882675 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Sep 13 00:05:08.882754 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Sep 13 00:05:08.882829 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:05:08.882913 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:08.882997 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Sep 13 00:05:08.883079 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:08.883155 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Sep 13 00:05:08.883240 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:08.883317 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Sep 13 00:05:08.883434 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:08.883538 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Sep 13 00:05:08.883623 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:08.883698 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Sep 13 00:05:08.883780 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:08.883855 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Sep 13 00:05:08.883934 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:08.884014 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Sep 13 00:05:08.884095 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:08.884170 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Sep 13 00:05:08.884251 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:08.884346 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Sep 13 00:05:08.884433 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 00:05:08.884531 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:05:08.884614 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 00:05:08.884689 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Sep 13 00:05:08.884764 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Sep 13 00:05:08.884847 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 00:05:08.884921 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 13 00:05:08.885006 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Sep 13 00:05:08.885092 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Sep 13 00:05:08.885169 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 13 00:05:08.885245 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Sep 13 00:05:08.885346 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 13 00:05:08.885431 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 13 00:05:08.885529 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 13 00:05:08.885617 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 13 00:05:08.885701 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Sep 13 00:05:08.885778 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 13 00:05:08.885853 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 13 00:05:08.885927 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 00:05:08.886011 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Sep 13 00:05:08.886090 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Sep 13 00:05:08.886171 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Sep 13 00:05:08.886247 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 13 00:05:08.886394 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 13 00:05:08.886482 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 00:05:08.886589 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Sep 13 00:05:08.886670 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 13 00:05:08.886745 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 13 00:05:08.886825 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 13 00:05:08.886899 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 00:05:08.886982 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 13 00:05:08.887059 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Sep 13 00:05:08.887133 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 13 00:05:08.887206 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 13 00:05:08.887279 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 00:05:08.887439 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Sep 13 00:05:08.887544 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Sep 13 00:05:08.887623 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Sep 13 00:05:08.887697 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 13 00:05:08.887770 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Sep 13 00:05:08.887843 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 00:05:08.887853 kernel: acpiphp: Slot [0] registered
Sep 13 00:05:08.887936 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Sep 13 00:05:08.888019 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Sep 13 00:05:08.888097 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Sep 13 00:05:08.888173 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Sep 13 00:05:08.888248 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 13 00:05:08.888340 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 13 00:05:08.888422 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 00:05:08.888432 kernel: acpiphp: Slot [0-2] registered
Sep 13 00:05:08.888521 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 13 00:05:08.888603 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Sep 13 00:05:08.888678 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 00:05:08.888688 kernel: acpiphp: Slot [0-3] registered
Sep 13 00:05:08.888759 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 13 00:05:08.888832 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 13 00:05:08.888906 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 00:05:08.888916 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:05:08.888923 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:05:08.888933 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:05:08.888940 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:05:08.888947 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:05:08.888954 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:05:08.888961 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:05:08.888968 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:05:08.888975 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:05:08.888981 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:05:08.888988 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:05:08.888997 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:05:08.889003 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:05:08.889010 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:05:08.889017 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:05:08.889024 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:05:08.889031 kernel: iommu: Default domain type: Translated
Sep 13 00:05:08.889038 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:05:08.889045 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:05:08.889052 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:05:08.889060 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:05:08.889067 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Sep 13 00:05:08.889142 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:05:08.889217 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:05:08.889291 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:05:08.889301 kernel: vgaarb: loaded
Sep 13 00:05:08.889308 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:05:08.889316 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:05:08.889348 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:05:08.889359 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:05:08.889367 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:05:08.889374 kernel: pnp: PnP ACPI init
Sep 13 00:05:08.889470 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:05:08.889482 kernel: pnp: PnP ACPI: found 5 devices
Sep 13 00:05:08.889504 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:05:08.889512 kernel: NET: Registered PF_INET protocol family
Sep 13 00:05:08.889519 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:05:08.889529 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:05:08.889536 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:05:08.889544 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:05:08.889551 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 00:05:08.889558 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:05:08.889565 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:05:08.889572 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:05:08.889579 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:05:08.889586 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:05:08.889669 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 13 00:05:08.889747 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 13 00:05:08.889822 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 13 00:05:08.889897 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Sep 13 00:05:08.889971 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Sep 13 00:05:08.890045 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Sep 13 00:05:08.890119 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 13 00:05:08.890198 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 13 00:05:08.890271 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 13 00:05:08.890411 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 13 00:05:08.890512 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 13 00:05:08.890590 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 00:05:08.890665 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 13 00:05:08.890739 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 13 00:05:08.890811 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 00:05:08.890891 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 13 00:05:08.890964 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 13 00:05:08.891036 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 00:05:08.891108 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 13 00:05:08.891180 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 13 00:05:08.891251 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 00:05:08.891414 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 13 00:05:08.891522 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Sep 13 00:05:08.891614 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 00:05:08.891692 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 13 00:05:08.891766 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Sep 13 00:05:08.891839 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 13 00:05:08.891913 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 00:05:08.891985 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 13 00:05:08.892057 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Sep 13 00:05:08.892130 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Sep 13 00:05:08.892202 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 00:05:08.892280 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 13 00:05:08.892668 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Sep 13 00:05:08.892756 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 13 00:05:08.892832 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 00:05:08.892913 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:05:08.892981 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:05:08.893049 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:05:08.893117 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Sep 13 00:05:08.893183 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:05:08.893251 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 00:05:08.893390 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 13 00:05:08.893872 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 13 00:05:08.896380 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 13 00:05:08.896474 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 00:05:08.896600 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 13 00:05:08.896677 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 00:05:08.896762 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 13 00:05:08.896834 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 00:05:08.896910 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 13 00:05:08.896983 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 00:05:08.897058 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Sep 13 00:05:08.897131 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 00:05:08.897210 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Sep 13 00:05:08.897597 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Sep 13 00:05:08.897752 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 00:05:08.897841 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Sep 13 00:05:08.897912 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Sep 13 00:05:08.897981 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 00:05:08.898056 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Sep 13 00:05:08.898133 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 13 00:05:08.898201 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 00:05:08.898213 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:05:08.898221 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:05:08.898229 kernel: Initialise system trusted keyrings
Sep 13 00:05:08.898237 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:05:08.898245 kernel: Key type asymmetric registered
Sep 13 00:05:08.898252 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:05:08.898260 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:05:08.898271 kernel: io scheduler mq-deadline registered
Sep 13 00:05:08.898278 kernel: io scheduler kyber registered
Sep 13 00:05:08.898286 kernel: io scheduler bfq registered
Sep 13 00:05:08.899185 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Sep 13 00:05:08.899281 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Sep 13 00:05:08.899537 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Sep 13
00:05:08.899621 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 13 00:05:08.899703 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 13 00:05:08.899813 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 13 00:05:08.899927 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 13 00:05:08.900006 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 13 00:05:08.900084 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 13 00:05:08.900158 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 13 00:05:08.900235 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 13 00:05:08.900310 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 13 00:05:08.900459 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 13 00:05:08.900567 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 13 00:05:08.900649 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 13 00:05:08.900726 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 13 00:05:08.900738 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 13 00:05:08.900813 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Sep 13 00:05:08.900890 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Sep 13 00:05:08.900904 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:05:08.900912 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Sep 13 00:05:08.900920 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:05:08.900929 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:05:08.900937 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 13 00:05:08.900945 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 00:05:08.900952 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 00:05:08.900960 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:05:08.901042 kernel: rtc_cmos 
00:03: RTC can wake from S4 Sep 13 00:05:08.901114 kernel: rtc_cmos 00:03: registered as rtc0 Sep 13 00:05:08.901183 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T00:05:08 UTC (1757721908) Sep 13 00:05:08.901256 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 13 00:05:08.901267 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 13 00:05:08.901275 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:05:08.901282 kernel: Segment Routing with IPv6 Sep 13 00:05:08.901290 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:05:08.901300 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:05:08.901307 kernel: Key type dns_resolver registered Sep 13 00:05:08.901315 kernel: IPI shorthand broadcast: enabled Sep 13 00:05:08.903441 kernel: sched_clock: Marking stable (1079007237, 132960947)->(1219664032, -7695848) Sep 13 00:05:08.903459 kernel: registered taskstats version 1 Sep 13 00:05:08.903468 kernel: Loading compiled-in X.509 certificates Sep 13 00:05:08.903476 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6' Sep 13 00:05:08.903484 kernel: Key type .fscrypt registered Sep 13 00:05:08.903509 kernel: Key type fscrypt-provisioning registered Sep 13 00:05:08.903517 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 13 00:05:08.903525 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:05:08.903533 kernel: ima: No architecture policies found Sep 13 00:05:08.903543 kernel: clk: Disabling unused clocks Sep 13 00:05:08.903551 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 13 00:05:08.903559 kernel: Write protecting the kernel read-only data: 36864k Sep 13 00:05:08.903566 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 13 00:05:08.903574 kernel: Run /init as init process Sep 13 00:05:08.903582 kernel: with arguments: Sep 13 00:05:08.903590 kernel: /init Sep 13 00:05:08.903597 kernel: with environment: Sep 13 00:05:08.903605 kernel: HOME=/ Sep 13 00:05:08.903612 kernel: TERM=linux Sep 13 00:05:08.903622 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:05:08.903633 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:05:08.903644 systemd[1]: Detected virtualization kvm. Sep 13 00:05:08.903652 systemd[1]: Detected architecture x86-64. Sep 13 00:05:08.903660 systemd[1]: Running in initrd. Sep 13 00:05:08.903668 systemd[1]: No hostname configured, using default hostname. Sep 13 00:05:08.903676 systemd[1]: Hostname set to . Sep 13 00:05:08.903686 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:05:08.903695 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:05:08.903703 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:05:08.903711 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 13 00:05:08.903720 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:05:08.903729 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:05:08.903737 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:05:08.903745 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:05:08.903759 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:05:08.903774 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:05:08.903789 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:05:08.903803 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:05:08.903818 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:05:08.903833 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:05:08.903841 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:05:08.903852 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:05:08.903861 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:05:08.903869 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:05:08.903877 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:05:08.903885 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:05:08.903893 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:05:08.903901 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:05:08.903909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 13 00:05:08.903917 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:05:08.903927 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:05:08.903935 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:05:08.903943 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:05:08.903952 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:05:08.903960 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:05:08.903968 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:05:08.903999 systemd-journald[187]: Collecting audit messages is disabled. Sep 13 00:05:08.904023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:05:08.904031 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:05:08.904040 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:05:08.904048 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:05:08.904059 systemd-journald[187]: Journal started Sep 13 00:05:08.904078 systemd-journald[187]: Runtime Journal (/run/log/journal/9d934fa480794eb19836ea95adcc3283) is 4.8M, max 38.4M, 33.6M free. Sep 13 00:05:08.897693 systemd-modules-load[188]: Inserted module 'overlay' Sep 13 00:05:08.911337 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:05:08.921345 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:05:08.921923 systemd-modules-load[188]: Inserted module 'br_netfilter' Sep 13 00:05:08.949641 kernel: Bridge firewalling registered Sep 13 00:05:08.949662 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:05:08.954581 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 13 00:05:08.955194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:05:08.959002 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:05:08.963428 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:05:08.965471 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:05:08.967599 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:05:08.971741 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:05:08.976006 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:05:08.980850 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:05:08.983367 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:05:08.984552 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:05:08.985137 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:05:08.992609 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:05:08.999852 dracut-cmdline[218]: dracut-dracut-053 Sep 13 00:05:09.001272 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:05:09.013581 systemd-resolved[221]: Positive Trust Anchors: Sep 13 00:05:09.013592 systemd-resolved[221]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:05:09.013617 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:05:09.016841 systemd-resolved[221]: Defaulting to hostname 'linux'. Sep 13 00:05:09.022826 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:05:09.023542 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:05:09.053359 kernel: SCSI subsystem initialized Sep 13 00:05:09.060353 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:05:09.069354 kernel: iscsi: registered transport (tcp) Sep 13 00:05:09.089583 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:05:09.089641 kernel: QLogic iSCSI HBA Driver Sep 13 00:05:09.118465 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:05:09.125467 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:05:09.148045 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 13 00:05:09.148118 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:05:09.148131 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:05:09.184362 kernel: raid6: avx2x4 gen() 36262 MB/s Sep 13 00:05:09.201353 kernel: raid6: avx2x2 gen() 33429 MB/s Sep 13 00:05:09.218540 kernel: raid6: avx2x1 gen() 28048 MB/s Sep 13 00:05:09.218594 kernel: raid6: using algorithm avx2x4 gen() 36262 MB/s Sep 13 00:05:09.236550 kernel: raid6: .... xor() 4976 MB/s, rmw enabled Sep 13 00:05:09.236596 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:05:09.258354 kernel: xor: automatically using best checksumming function avx Sep 13 00:05:09.372356 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:05:09.379546 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:05:09.386455 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:05:09.397900 systemd-udevd[404]: Using default interface naming scheme 'v255'. Sep 13 00:05:09.401399 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:05:09.410445 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:05:09.419928 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Sep 13 00:05:09.441172 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:05:09.446438 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:05:09.480808 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:05:09.487474 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:05:09.503111 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:05:09.504909 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 13 00:05:09.506412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:05:09.507817 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:05:09.516570 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:05:09.527171 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:05:09.545363 kernel: scsi host0: Virtio SCSI HBA Sep 13 00:05:09.545422 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:05:09.557923 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 13 00:05:09.598256 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:05:09.598417 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:05:09.600183 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:05:09.606073 kernel: ACPI: bus type USB registered Sep 13 00:05:09.606093 kernel: usbcore: registered new interface driver usbfs Sep 13 00:05:09.606101 kernel: usbcore: registered new interface driver hub Sep 13 00:05:09.606109 kernel: usbcore: registered new device driver usb Sep 13 00:05:09.603776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:05:09.603872 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:05:09.607723 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:05:09.615346 kernel: libata version 3.00 loaded. Sep 13 00:05:09.615548 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:05:09.626396 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 13 00:05:09.631389 kernel: AES CTR mode by8 optimization enabled Sep 13 00:05:09.632335 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 13 00:05:09.632486 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 13 00:05:09.636371 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 13 00:05:09.637342 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 13 00:05:09.637453 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 13 00:05:09.637552 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 13 00:05:09.638470 kernel: hub 1-0:1.0: USB hub found Sep 13 00:05:09.638611 kernel: hub 1-0:1.0: 4 ports detected Sep 13 00:05:09.645469 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 13 00:05:09.647405 kernel: hub 2-0:1.0: USB hub found Sep 13 00:05:09.647541 kernel: hub 2-0:1.0: 4 ports detected Sep 13 00:05:09.655683 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:05:09.655819 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:05:09.655830 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 00:05:09.655914 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:05:09.660360 kernel: scsi host1: ahci Sep 13 00:05:09.660485 kernel: scsi host2: ahci Sep 13 00:05:09.660600 kernel: scsi host3: ahci Sep 13 00:05:09.661351 kernel: scsi host4: ahci Sep 13 00:05:09.662337 kernel: scsi host5: ahci Sep 13 00:05:09.664362 kernel: scsi host6: ahci Sep 13 00:05:09.664468 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 49 Sep 13 00:05:09.664478 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 49 Sep 13 00:05:09.664486 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 49 Sep 13 00:05:09.664511 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 49 Sep 13 00:05:09.664519 
kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 49 Sep 13 00:05:09.664526 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 49 Sep 13 00:05:09.710397 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:05:09.715456 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:05:09.728318 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:05:09.877354 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 13 00:05:09.981347 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:05:09.981424 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:05:09.981436 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 13 00:05:09.981445 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:05:09.983350 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:05:09.983390 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:05:09.985788 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:05:09.985812 kernel: ata1.00: applying bridge limits Sep 13 00:05:09.987944 kernel: ata1.00: configured for UDMA/100 Sep 13 00:05:09.988622 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:05:10.023369 kernel: sd 0:0:0:0: Power-on or device reset occurred Sep 13 00:05:10.028122 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 13 00:05:10.031352 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 13 00:05:10.031630 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Sep 13 00:05:10.039353 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 00:05:10.039761 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 00:05:10.047520 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Sep 13 00:05:10.047591 kernel: GPT:17805311 != 80003071 Sep 13 00:05:10.049387 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:05:10.051385 kernel: GPT:17805311 != 80003071 Sep 13 00:05:10.053852 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:05:10.057823 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:05:10.061365 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 00:05:10.088979 kernel: usbcore: registered new interface driver usbhid Sep 13 00:05:10.089041 kernel: usbhid: USB HID core driver Sep 13 00:05:10.108860 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:05:10.109142 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:05:10.121760 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Sep 13 00:05:10.121811 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (465) Sep 13 00:05:10.135130 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 13 00:05:10.141391 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (463) Sep 13 00:05:10.141426 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:05:10.152725 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 13 00:05:10.154753 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 13 00:05:10.169359 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 13 00:05:10.174743 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 13 00:05:10.176896 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Sep 13 00:05:10.189474 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:05:10.195416 disk-uuid[576]: Primary Header is updated. Sep 13 00:05:10.195416 disk-uuid[576]: Secondary Entries is updated. Sep 13 00:05:10.195416 disk-uuid[576]: Secondary Header is updated. Sep 13 00:05:10.202399 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:05:10.213355 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:05:10.222360 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:05:11.225031 disk-uuid[577]: The operation has completed successfully. Sep 13 00:05:11.226059 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:05:11.297750 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:05:11.297899 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:05:11.326539 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:05:11.331292 sh[597]: Success Sep 13 00:05:11.352621 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:05:11.431026 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:05:11.439946 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:05:11.442930 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 13 00:05:11.477828 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa Sep 13 00:05:11.477896 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:05:11.480877 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:05:11.484017 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:05:11.486389 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:05:11.501427 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 13 00:05:11.505606 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:05:11.507378 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:05:11.512586 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:05:11.515611 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:05:11.541945 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:05:11.542017 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:05:11.544949 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:05:11.556995 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:05:11.557061 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:05:11.574311 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:05:11.578202 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:05:11.586208 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:05:11.594633 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 13 00:05:11.652533 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:05:11.664647 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:05:11.684914 ignition[725]: Ignition 2.19.0 Sep 13 00:05:11.684924 ignition[725]: Stage: fetch-offline Sep 13 00:05:11.684965 ignition[725]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:05:11.684972 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:05:11.685045 ignition[725]: parsed url from cmdline: "" Sep 13 00:05:11.685047 ignition[725]: no config URL provided Sep 13 00:05:11.685051 ignition[725]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:05:11.689212 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:05:11.685056 ignition[725]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:05:11.685061 ignition[725]: failed to fetch config: resource requires networking Sep 13 00:05:11.686046 ignition[725]: Ignition finished successfully Sep 13 00:05:11.697525 systemd-networkd[778]: lo: Link UP Sep 13 00:05:11.697535 systemd-networkd[778]: lo: Gained carrier Sep 13 00:05:11.699612 systemd-networkd[778]: Enumeration completed Sep 13 00:05:11.699708 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:05:11.700403 systemd[1]: Reached target network.target - Network. Sep 13 00:05:11.700846 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:05:11.700851 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:05:11.701716 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 13 00:05:11.701720 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:05:11.703374 systemd-networkd[778]: eth0: Link UP Sep 13 00:05:11.703377 systemd-networkd[778]: eth0: Gained carrier Sep 13 00:05:11.703383 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:05:11.707609 systemd-networkd[778]: eth1: Link UP Sep 13 00:05:11.707613 systemd-networkd[778]: eth1: Gained carrier Sep 13 00:05:11.707621 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:05:11.710476 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 13 00:05:11.721883 ignition[785]: Ignition 2.19.0 Sep 13 00:05:11.721896 ignition[785]: Stage: fetch Sep 13 00:05:11.722097 ignition[785]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:05:11.722112 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:05:11.722186 ignition[785]: parsed url from cmdline: "" Sep 13 00:05:11.722188 ignition[785]: no config URL provided Sep 13 00:05:11.722193 ignition[785]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:05:11.722199 ignition[785]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:05:11.722215 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 13 00:05:11.723923 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 13 00:05:11.741373 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 13 00:05:11.765425 systemd-networkd[778]: eth0: DHCPv4 address 37.27.206.127/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 13 00:05:11.924014 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 13 00:05:11.930754 
ignition[785]: GET result: OK Sep 13 00:05:11.930815 ignition[785]: parsing config with SHA512: cbe02295356838ede34c3100663af17b0f9ae12750887fdd9aeed0636078d820aa166e40a1da077e9994ea519bf483097bc75f2007c53471d1e4ff5bc91602ef Sep 13 00:05:11.935784 unknown[785]: fetched base config from "system" Sep 13 00:05:11.935793 unknown[785]: fetched base config from "system" Sep 13 00:05:11.936128 ignition[785]: fetch: fetch complete Sep 13 00:05:11.935798 unknown[785]: fetched user config from "hetzner" Sep 13 00:05:11.936134 ignition[785]: fetch: fetch passed Sep 13 00:05:11.938499 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 13 00:05:11.936170 ignition[785]: Ignition finished successfully Sep 13 00:05:11.943445 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 00:05:11.956892 ignition[793]: Ignition 2.19.0 Sep 13 00:05:11.956906 ignition[793]: Stage: kargs Sep 13 00:05:11.957122 ignition[793]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:05:11.957134 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:05:11.958897 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:05:11.957968 ignition[793]: kargs: kargs passed Sep 13 00:05:11.958005 ignition[793]: Ignition finished successfully Sep 13 00:05:11.964550 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:05:11.975802 ignition[800]: Ignition 2.19.0 Sep 13 00:05:11.975814 ignition[800]: Stage: disks Sep 13 00:05:11.981680 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:05:11.976002 ignition[800]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:05:11.982683 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:05:11.976012 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:05:11.983254 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Sep 13 00:05:11.977081 ignition[800]: disks: disks passed
Sep 13 00:05:11.983904 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:05:11.977119 ignition[800]: Ignition finished successfully
Sep 13 00:05:11.985018 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:05:11.986157 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:05:11.996458 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:05:12.008640 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 13 00:05:12.010116 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:05:12.014450 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:05:12.085349 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:05:12.085408 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:05:12.086181 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:05:12.092377 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:05:12.094039 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:05:12.097455 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:05:12.098772 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:05:12.098800 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:05:12.100267 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:05:12.104568 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:05:12.111582 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (817)
Sep 13 00:05:12.116349 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:12.116377 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:05:12.117913 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:05:12.127300 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:05:12.127379 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:05:12.131978 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:05:12.158061 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:05:12.159042 coreos-metadata[819]: Sep 13 00:05:12.158 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Sep 13 00:05:12.160876 coreos-metadata[819]: Sep 13 00:05:12.159 INFO Fetch successful
Sep 13 00:05:12.161475 coreos-metadata[819]: Sep 13 00:05:12.161 INFO wrote hostname ci-4081-3-5-n-bd9936ab3a to /sysroot/etc/hostname
Sep 13 00:05:12.162111 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:05:12.166342 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:05:12.169463 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:05:12.172851 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:05:12.241340 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:05:12.247418 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:05:12.250768 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:05:12.259353 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:12.275760 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:05:12.281538 ignition[933]: INFO : Ignition 2.19.0
Sep 13 00:05:12.281538 ignition[933]: INFO : Stage: mount
Sep 13 00:05:12.282913 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:12.282913 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:05:12.282913 ignition[933]: INFO : mount: mount passed
Sep 13 00:05:12.282913 ignition[933]: INFO : Ignition finished successfully
Sep 13 00:05:12.283356 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:05:12.288424 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:05:12.474845 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:05:12.481573 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:05:12.516373 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945)
Sep 13 00:05:12.522289 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:12.522358 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:05:12.526780 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:05:12.535192 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:05:12.535238 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:05:12.539243 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:05:12.575777 ignition[961]: INFO : Ignition 2.19.0
Sep 13 00:05:12.577233 ignition[961]: INFO : Stage: files
Sep 13 00:05:12.579416 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:12.579416 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:05:12.582208 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:05:12.583663 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:05:12.583663 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:05:12.586746 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:05:12.588207 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:05:12.588207 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:05:12.588081 unknown[961]: wrote ssh authorized keys file for user: core
Sep 13 00:05:12.592546 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:05:12.592546 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:05:12.592546 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:05:12.592546 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:05:12.830193 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:05:13.065654 systemd-networkd[778]: eth0: Gained IPv6LL
Sep 13 00:05:13.129461 systemd-networkd[778]: eth1: Gained IPv6LL
Sep 13 00:05:13.253308 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:05:13.253308 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:05:13.255711 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 00:05:13.515883 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Sep 13 00:05:13.552419 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:05:13.552419 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:05:13.554578 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:05:13.554578 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:05:13.554578 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:05:13.554578 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:05:13.554578 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:05:13.554578 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:05:13.554578 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:05:13.554578 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:05:13.554578 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:05:13.554578 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:05:13.554578 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:05:13.554578 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:05:13.554578 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:05:14.105432 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Sep 13 00:05:15.516117 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:05:15.516117 ignition[961]: INFO : files: op(d): [started] processing unit "containerd.service"
Sep 13 00:05:15.518419 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:05:15.519635 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:05:15.519635 ignition[961]: INFO : files: op(d): [finished] processing unit "containerd.service"
Sep 13 00:05:15.519635 ignition[961]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Sep 13 00:05:15.519635 ignition[961]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:05:15.519635 ignition[961]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:05:15.519635 ignition[961]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Sep 13 00:05:15.519635 ignition[961]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Sep 13 00:05:15.519635 ignition[961]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 13 00:05:15.519635 ignition[961]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 13 00:05:15.519635 ignition[961]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Sep 13 00:05:15.519635 ignition[961]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:05:15.519635 ignition[961]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:05:15.519635 ignition[961]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:05:15.543369 ignition[961]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:05:15.543369 ignition[961]: INFO : files: files passed
Sep 13 00:05:15.543369 ignition[961]: INFO : Ignition finished successfully
Sep 13 00:05:15.521797 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:05:15.532551 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:05:15.536568 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:05:15.555784 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:05:15.555784 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:05:15.545208 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:05:15.560510 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:05:15.545367 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:05:15.552800 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:05:15.554911 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:05:15.566496 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:05:15.590084 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:05:15.590259 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:05:15.592050 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:05:15.593922 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:05:15.595637 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:05:15.600487 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:05:15.620210 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:05:15.628475 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:05:15.642052 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:05:15.646087 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:05:15.646955 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:05:15.647947 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:05:15.648118 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:05:15.658812 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:05:15.660209 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:05:15.663791 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:05:15.665386 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:05:15.667208 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:05:15.669379 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:05:15.671432 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:05:15.673481 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:05:15.675742 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:05:15.677758 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:05:15.679456 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:05:15.679617 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:05:15.681858 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:05:15.683114 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:05:15.685116 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:05:15.687587 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:05:15.688459 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:05:15.688615 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:05:15.691048 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:05:15.691158 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:05:15.692112 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:05:15.692207 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:05:15.693688 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:05:15.693780 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:05:15.706986 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:05:15.711290 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:05:15.721295 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:05:15.727084 ignition[1015]: INFO : Ignition 2.19.0
Sep 13 00:05:15.727084 ignition[1015]: INFO : Stage: umount
Sep 13 00:05:15.727084 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:15.727084 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:05:15.727084 ignition[1015]: INFO : umount: umount passed
Sep 13 00:05:15.727084 ignition[1015]: INFO : Ignition finished successfully
Sep 13 00:05:15.721559 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:05:15.722752 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:05:15.722915 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:05:15.727946 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:05:15.728036 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:05:15.731388 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:05:15.731493 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:05:15.733907 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:05:15.733951 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:05:15.738505 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:05:15.738560 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:05:15.740880 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:05:15.740920 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 00:05:15.741988 systemd[1]: Stopped target network.target - Network.
Sep 13 00:05:15.744641 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:05:15.744686 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:05:15.745863 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:05:15.748187 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:05:15.753381 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:05:15.754599 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:05:15.755643 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:05:15.756794 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:05:15.756833 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:05:15.758052 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:05:15.758085 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:05:15.759057 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:05:15.759091 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:05:15.760074 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:05:15.760108 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:05:15.761216 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:05:15.762412 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:05:15.764191 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:05:15.764761 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:05:15.764848 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:05:15.765374 systemd-networkd[778]: eth1: DHCPv6 lease lost
Sep 13 00:05:15.767057 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:05:15.767114 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:05:15.768019 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:05:15.768099 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:05:15.769380 systemd-networkd[778]: eth0: DHCPv6 lease lost
Sep 13 00:05:15.770786 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:05:15.770883 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:05:15.772504 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:05:15.772552 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:05:15.777435 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:05:15.778451 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:05:15.778506 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:05:15.779696 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:05:15.779748 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:05:15.780892 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:05:15.780930 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:05:15.781985 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:05:15.782018 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:05:15.783306 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:05:15.796015 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:05:15.796129 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:05:15.802821 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:05:15.802929 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:05:15.804181 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:05:15.804213 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:05:15.805305 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:05:15.805345 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:05:15.806503 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:05:15.806546 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:05:15.808219 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:05:15.808253 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:05:15.809512 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:05:15.809560 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:05:15.817455 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:05:15.818069 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:05:15.818151 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:05:15.818816 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 00:05:15.818856 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:05:15.821415 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:05:15.821452 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:05:15.822209 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:05:15.822241 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:15.823640 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:05:15.823699 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:05:15.825101 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:05:15.839488 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:05:15.847630 systemd[1]: Switching root.
Sep 13 00:05:15.883103 systemd-journald[187]: Journal stopped
Sep 13 00:05:16.805647 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:05:16.805719 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:05:16.805730 kernel: SELinux: policy capability open_perms=1
Sep 13 00:05:16.805738 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:05:16.805748 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:05:16.805755 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:05:16.805766 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:05:16.805773 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:05:16.805784 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:05:16.805795 kernel: audit: type=1403 audit(1757721916.044:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:05:16.805804 systemd[1]: Successfully loaded SELinux policy in 51.612ms.
Sep 13 00:05:16.805819 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.681ms.
Sep 13 00:05:16.805828 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:05:16.805838 systemd[1]: Detected virtualization kvm.
Sep 13 00:05:16.805849 systemd[1]: Detected architecture x86-64.
Sep 13 00:05:16.805857 systemd[1]: Detected first boot.
Sep 13 00:05:16.805866 systemd[1]: Hostname set to .
Sep 13 00:05:16.805874 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:05:16.805882 zram_generator::config[1076]: No configuration found.
Sep 13 00:05:16.805891 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:05:16.805900 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:05:16.805909 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 13 00:05:16.805919 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:05:16.805927 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:05:16.805935 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:05:16.805943 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:05:16.805951 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:05:16.805959 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:05:16.805968 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:05:16.805977 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:05:16.805985 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:05:16.805993 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:05:16.806001 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:05:16.806009 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:05:16.806017 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:05:16.806026 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:05:16.806034 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:05:16.806042 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:05:16.806052 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:05:16.806060 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:05:16.806068 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:05:16.806076 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:05:16.806084 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:05:16.806092 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:05:16.806101 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:05:16.806110 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:05:16.806119 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:05:16.806127 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:05:16.806138 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:05:16.806149 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:05:16.806157 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:05:16.806166 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:05:16.806174 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:05:16.806183 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:05:16.806191 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:16.806199 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:05:16.806207 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:05:16.806215 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:05:16.806223 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:05:16.806233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:05:16.806241 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:05:16.806249 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:05:16.806257 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:05:16.806265 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:05:16.806273 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:05:16.806282 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:05:16.806290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:05:16.806298 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:05:16.806308 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 13 00:05:16.806317 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 13 00:05:16.806340 kernel: fuse: init (API version 7.39)
Sep 13 00:05:16.806348 kernel: ACPI: bus type drm_connector registered
Sep 13 00:05:16.806356 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:05:16.806364 kernel: loop: module loaded
Sep 13 00:05:16.806371 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:05:16.806380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:05:16.806390 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:05:16.806398 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:05:16.806420 systemd-journald[1185]: Collecting audit messages is disabled.
Sep 13 00:05:16.806438 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:16.806447 systemd-journald[1185]: Journal started
Sep 13 00:05:16.806464 systemd-journald[1185]: Runtime Journal (/run/log/journal/9d934fa480794eb19836ea95adcc3283) is 4.8M, max 38.4M, 33.6M free.
Sep 13 00:05:16.812346 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:05:16.814586 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:05:16.815281 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:05:16.815889 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:05:16.816525 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:05:16.817103 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:05:16.817711 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:05:16.818581 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:05:16.819697 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:05:16.820521 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:05:16.820735 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:05:16.821593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:05:16.821783 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:05:16.822482 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:05:16.822615 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:05:16.823467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:05:16.823645 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:05:16.824358 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:05:16.824523 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:05:16.825208 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:05:16.825587 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:05:16.826297 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:05:16.827036 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:05:16.827965 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:05:16.837377 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:05:16.843426 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:05:16.847209 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:05:16.847894 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:05:16.856506 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:05:16.861361 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:05:16.861870 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:05:16.869434 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:05:16.870634 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:05:16.875466 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:05:16.879773 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:05:16.883568 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:05:16.884240 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:05:16.890465 systemd-journald[1185]: Time spent on flushing to /var/log/journal/9d934fa480794eb19836ea95adcc3283 is 23.793ms for 1124 entries.
Sep 13 00:05:16.890465 systemd-journald[1185]: System Journal (/var/log/journal/9d934fa480794eb19836ea95adcc3283) is 8.0M, max 584.8M, 576.8M free.
Sep 13 00:05:16.926592 systemd-journald[1185]: Received client request to flush runtime journal.
Sep 13 00:05:16.898863 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:05:16.900208 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:05:16.909027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:05:16.912268 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:05:16.920460 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:05:16.928001 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:05:16.932316 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 13 00:05:16.932350 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 13 00:05:16.935062 udevadm[1232]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:05:16.938026 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:05:16.945493 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:05:16.967812 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:05:16.973511 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:05:16.983030 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Sep 13 00:05:16.983287 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Sep 13 00:05:16.987384 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:05:17.290774 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:05:17.296460 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:05:17.316792 systemd-udevd[1248]: Using default interface naming scheme 'v255'.
Sep 13 00:05:17.339109 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:05:17.350738 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:05:17.364279 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:05:17.407869 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Sep 13 00:05:17.415390 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:05:17.451355 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 13 00:05:17.471353 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1250)
Sep 13 00:05:17.473351 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:05:17.482296 systemd-networkd[1257]: lo: Link UP
Sep 13 00:05:17.482604 systemd-networkd[1257]: lo: Gained carrier
Sep 13 00:05:17.484706 systemd-networkd[1257]: Enumeration completed
Sep 13 00:05:17.487101 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:05:17.485433 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:05:17.487261 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:17.487316 systemd-networkd[1257]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:05:17.489928 systemd-networkd[1257]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:17.489977 systemd-networkd[1257]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:05:17.492926 systemd-networkd[1257]: eth0: Link UP
Sep 13 00:05:17.493015 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:05:17.495297 systemd-networkd[1257]: eth0: Gained carrier
Sep 13 00:05:17.495366 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:17.498552 systemd-networkd[1257]: eth1: Link UP
Sep 13 00:05:17.500501 systemd-networkd[1257]: eth1: Gained carrier
Sep 13 00:05:17.500516 systemd-networkd[1257]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:17.505855 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:17.514901 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Sep 13 00:05:17.514982 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped.
Sep 13 00:05:17.522420 systemd-networkd[1257]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 13 00:05:17.536235 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:17.536738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:05:17.543145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:05:17.552418 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 13 00:05:17.554222 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:05:17.558439 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:05:17.558971 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:05:17.559011 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:05:17.559047 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:17.559307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:05:17.560678 systemd-networkd[1257]: eth0: DHCPv4 address 37.27.206.127/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 13 00:05:17.561032 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:05:17.572107 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:05:17.572250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:05:17.572940 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:05:17.576951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:05:17.577100 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:05:17.580805 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:05:17.582370 kernel: EDAC MC: Ver: 3.0.0
Sep 13 00:05:17.588879 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 00:05:17.589062 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 13 00:05:17.589164 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 00:05:17.599135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 13 00:05:17.605585 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:17.608818 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Sep 13 00:05:17.608849 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Sep 13 00:05:17.612553 kernel: Console: switching to colour dummy device 80x25
Sep 13 00:05:17.613875 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 13 00:05:17.613921 kernel: [drm] features: -context_init
Sep 13 00:05:17.616377 kernel: [drm] number of scanouts: 1
Sep 13 00:05:17.616415 kernel: [drm] number of cap sets: 0
Sep 13 00:05:17.618344 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Sep 13 00:05:17.628042 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 13 00:05:17.628088 kernel: Console: switching to colour frame buffer device 160x50
Sep 13 00:05:17.636351 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 13 00:05:17.639275 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:05:17.639577 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:17.649561 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:17.705474 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:17.749462 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:05:17.756594 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:05:17.773041 lvm[1316]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:05:17.807873 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:05:17.808846 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:05:17.816533 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:05:17.821462 lvm[1319]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:05:17.852722 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:05:17.859141 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:05:17.860509 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:05:17.860556 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:05:17.860662 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:05:17.862010 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:05:17.868506 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:05:17.870466 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:05:17.873896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:05:17.879570 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:05:17.884852 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:05:17.895798 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:05:17.898595 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:05:17.908423 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:05:17.922177 kernel: loop0: detected capacity change from 0 to 8
Sep 13 00:05:17.923760 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:05:17.927699 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:05:17.937357 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:05:17.958391 kernel: loop1: detected capacity change from 0 to 142488
Sep 13 00:05:17.999361 kernel: loop2: detected capacity change from 0 to 140768
Sep 13 00:05:18.038380 kernel: loop3: detected capacity change from 0 to 221472
Sep 13 00:05:18.086724 kernel: loop4: detected capacity change from 0 to 8
Sep 13 00:05:18.090625 kernel: loop5: detected capacity change from 0 to 142488
Sep 13 00:05:18.117354 kernel: loop6: detected capacity change from 0 to 140768
Sep 13 00:05:18.138362 kernel: loop7: detected capacity change from 0 to 221472
Sep 13 00:05:18.156960 (sd-merge)[1340]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Sep 13 00:05:18.157524 (sd-merge)[1340]: Merged extensions into '/usr'.
Sep 13 00:05:18.164867 systemd[1]: Reloading requested from client PID 1327 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:05:18.165017 systemd[1]: Reloading...
Sep 13 00:05:18.229373 zram_generator::config[1371]: No configuration found.
Sep 13 00:05:18.303715 ldconfig[1323]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:05:18.347151 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:05:18.397449 systemd[1]: Reloading finished in 231 ms.
Sep 13 00:05:18.412362 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:05:18.416952 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:05:18.425491 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:05:18.430708 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:05:18.440467 systemd[1]: Reloading requested from client PID 1418 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:05:18.440495 systemd[1]: Reloading...
Sep 13 00:05:18.455224 systemd-tmpfiles[1419]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:05:18.455917 systemd-tmpfiles[1419]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:05:18.456778 systemd-tmpfiles[1419]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:05:18.457588 systemd-tmpfiles[1419]: ACLs are not supported, ignoring.
Sep 13 00:05:18.457716 systemd-tmpfiles[1419]: ACLs are not supported, ignoring.
Sep 13 00:05:18.461889 systemd-tmpfiles[1419]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:05:18.462000 systemd-tmpfiles[1419]: Skipping /boot
Sep 13 00:05:18.471178 systemd-tmpfiles[1419]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:05:18.471375 systemd-tmpfiles[1419]: Skipping /boot
Sep 13 00:05:18.503727 zram_generator::config[1454]: No configuration found.
Sep 13 00:05:18.618608 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:05:18.676650 systemd[1]: Reloading finished in 235 ms.
Sep 13 00:05:18.692460 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:05:18.710522 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:05:18.730780 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:05:18.733976 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:05:18.748431 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:05:18.755464 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:05:18.763149 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:18.763307 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:05:18.767725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:05:18.778483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:05:18.785632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:05:18.786132 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:05:18.786231 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:18.793840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:05:18.793983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:05:18.810229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:05:18.810841 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:05:18.811691 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:05:18.819226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:05:18.819395 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:05:18.820238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:18.823587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:05:18.832024 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:05:18.837345 systemd-resolved[1508]: Positive Trust Anchors:
Sep 13 00:05:18.837358 systemd-resolved[1508]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:05:18.837383 systemd-resolved[1508]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:05:18.840167 augenrules[1534]: No rules
Sep 13 00:05:18.844616 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:05:18.850984 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:05:18.851704 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:05:18.851813 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:18.852824 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:05:18.855959 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:05:18.858343 systemd-resolved[1508]: Using system hostname 'ci-4081-3-5-n-bd9936ab3a'.
Sep 13 00:05:18.859835 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:05:18.865609 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:05:18.869040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:05:18.869235 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:05:18.874528 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:05:18.874716 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:05:18.877501 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:05:18.877770 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:05:18.885611 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:05:18.893472 systemd[1]: Reached target network.target - Network.
Sep 13 00:05:18.895025 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:05:18.895684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:05:18.895771 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:05:18.902478 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:05:18.904309 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:05:18.918481 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:05:18.946698 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:05:18.947296 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:05:18.951822 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:05:18.952904 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:05:18.953314 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:05:18.953653 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:05:18.953950 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:05:18.954239 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:05:18.954256 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:05:18.957285 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:05:18.958101 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:05:18.958541 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:05:18.958900 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:05:18.964996 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:05:18.971557 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:05:18.977292 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:05:18.978340 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:05:18.978742 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:05:18.979059 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:05:18.981810 systemd[1]: System is tainted: cgroupsv1
Sep 13 00:05:18.981846 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:05:18.981866 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:05:18.983474 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:05:18.987139 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 13 00:05:18.990988 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:05:18.994691 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:05:19.004476 jq[1567]: false
Sep 13 00:05:19.005459 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:05:19.006059 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:05:19.008591 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:05:19.019401 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:05:19.024586 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Sep 13 00:05:19.029225 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:05:19.038745 coreos-metadata[1564]: Sep 13 00:05:19.038 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Sep 13 00:05:19.045446 coreos-metadata[1564]: Sep 13 00:05:19.040 INFO Fetch successful
Sep 13 00:05:19.045446 coreos-metadata[1564]: Sep 13 00:05:19.040 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Sep 13 00:05:19.045446 coreos-metadata[1564]: Sep 13 00:05:19.040 INFO Fetch successful
Sep 13 00:05:19.043272 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:05:19.050352 extend-filesystems[1570]: Found loop4
Sep 13 00:05:19.051111 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:05:19.055738 extend-filesystems[1570]: Found loop5
Sep 13 00:05:19.055738 extend-filesystems[1570]: Found loop6
Sep 13 00:05:19.055738 extend-filesystems[1570]: Found loop7
Sep 13 00:05:19.055738 extend-filesystems[1570]: Found sda
Sep 13 00:05:19.055738 extend-filesystems[1570]: Found sda1
Sep 13 00:05:19.055738 extend-filesystems[1570]: Found sda2
Sep 13 00:05:19.055738 extend-filesystems[1570]: Found sda3
Sep 13 00:05:19.055738 extend-filesystems[1570]: Found usr
Sep 13 00:05:19.055738 extend-filesystems[1570]: Found sda4
Sep 13 00:05:19.055738 extend-filesystems[1570]: Found sda6
Sep 13 00:05:19.055738 extend-filesystems[1570]: Found sda7
Sep 13 00:05:19.055738 extend-filesystems[1570]: Found sda9
Sep 13 00:05:19.055738 extend-filesystems[1570]: Checking size of /dev/sda9
Sep 13 00:05:19.051988 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:05:19.069461 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:05:19.086397 systemd-networkd[1257]: eth1: Gained IPv6LL
Sep 13 00:05:19.100723 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:05:19.103039 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 13 00:05:19.106776 dbus-daemon[1566]: [system] SELinux support is enabled
Sep 13 00:05:19.110901 update_engine[1589]: I20250913 00:05:19.110644 1589 main.cc:92] Flatcar Update Engine starting
Sep 13 00:05:19.115067 extend-filesystems[1570]: Resized partition /dev/sda9
Sep 13 00:05:19.122276 update_engine[1589]: I20250913 00:05:19.122201 1589 update_check_scheduler.cc:74] Next update check in 9m25s
Sep 13 00:05:19.122819 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:05:19.130336 extend-filesystems[1602]: resize2fs 1.47.1 (20-May-2024)
Sep 13 00:05:19.131584 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:05:19.131774 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:05:19.131969 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:05:19.132132 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:05:19.135929 jq[1599]: true
Sep 13 00:05:19.139676 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:05:19.139864 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:05:19.146187 sshd_keygen[1593]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:05:19.155344 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Sep 13 00:05:19.166995 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 00:05:19.184311 jq[1611]: true
Sep 13 00:05:19.186717 (ntainerd)[1617]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:05:19.193666 systemd-timesyncd[1554]: Contacted time server 85.220.190.246:123 (0.flatcar.pool.ntp.org).
Sep 13 00:05:19.193915 systemd-timesyncd[1554]: Initial clock synchronization to Sat 2025-09-13 00:05:19.296049 UTC.
Sep 13 00:05:19.205620 systemd[1]: Reached target network-online.target - Network is Online.
Sep 13 00:05:19.209262 tar[1604]: linux-amd64/helm
Sep 13 00:05:19.219021 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1263)
Sep 13 00:05:19.216468 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 00:05:19.223065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:05:19.241509 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 13 00:05:19.249420 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:05:19.249458 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:05:19.249856 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:05:19.249868 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:05:19.254542 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:05:19.254749 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 00:05:19.280624 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:05:19.298453 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 00:05:19.299053 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:05:19.302482 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 00:05:19.305185 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 13 00:05:19.306964 systemd-logind[1583]: New seat seat0.
Sep 13 00:05:19.308410 systemd-logind[1583]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 13 00:05:19.308429 systemd-logind[1583]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:05:19.316924 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:05:19.328239 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 13 00:05:19.337461 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 13 00:05:19.345927 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 00:05:19.359674 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 00:05:19.364499 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 13 00:05:19.368226 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 00:05:19.387979 locksmithd[1672]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:05:19.401629 systemd-networkd[1257]: eth0: Gained IPv6LL
Sep 13 00:05:19.407947 bash[1668]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:05:19.409928 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 00:05:19.424669 systemd[1]: Starting sshkeys.service...
Sep 13 00:05:19.442176 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 13 00:05:19.449892 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 13 00:05:19.492293 coreos-metadata[1694]: Sep 13 00:05:19.492 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Sep 13 00:05:19.493353 coreos-metadata[1694]: Sep 13 00:05:19.493 INFO Fetch successful
Sep 13 00:05:19.503787 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Sep 13 00:05:19.523574 unknown[1694]: wrote ssh authorized keys file for user: core
Sep 13 00:05:19.525792 extend-filesystems[1602]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 13 00:05:19.525792 extend-filesystems[1602]: old_desc_blocks = 1, new_desc_blocks = 5
Sep 13 00:05:19.525792 extend-filesystems[1602]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Sep 13 00:05:19.533788 extend-filesystems[1570]: Resized filesystem in /dev/sda9
Sep 13 00:05:19.533788 extend-filesystems[1570]: Found sr0
Sep 13 00:05:19.528068 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:05:19.528276 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:05:19.551364 containerd[1617]: time="2025-09-13T00:05:19.551290828Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 13 00:05:19.560353 update-ssh-keys[1703]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:05:19.560671 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 13 00:05:19.566819 systemd[1]: Finished sshkeys.service.
Sep 13 00:05:19.593049 containerd[1617]: time="2025-09-13T00:05:19.590919081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:05:19.593313 containerd[1617]: time="2025-09-13T00:05:19.593290519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:05:19.593377 containerd[1617]: time="2025-09-13T00:05:19.593365840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:05:19.593436 containerd[1617]: time="2025-09-13T00:05:19.593424891Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:05:19.593597 containerd[1617]: time="2025-09-13T00:05:19.593582096Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 13 00:05:19.593647 containerd[1617]: time="2025-09-13T00:05:19.593637129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 13 00:05:19.593778 containerd[1617]: time="2025-09-13T00:05:19.593761643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:05:19.593827 containerd[1617]: time="2025-09-13T00:05:19.593817468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:05:19.594775 containerd[1617]: time="2025-09-13T00:05:19.594758152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:05:19.594824 containerd[1617]: time="2025-09-13T00:05:19.594814267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:05:19.594866 containerd[1617]: time="2025-09-13T00:05:19.594856356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:05:19.594901 containerd[1617]: time="2025-09-13T00:05:19.594892965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:05:19.595452 containerd[1617]: time="2025-09-13T00:05:19.595013481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:05:19.595452 containerd[1617]: time="2025-09-13T00:05:19.595180313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:05:19.595452 containerd[1617]: time="2025-09-13T00:05:19.595288035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:05:19.595452 containerd[1617]: time="2025-09-13T00:05:19.595299476Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:05:19.595452 containerd[1617]: time="2025-09-13T00:05:19.595391689Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:05:19.595452 containerd[1617]: time="2025-09-13T00:05:19.595428749Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:05:19.603875 containerd[1617]: time="2025-09-13T00:05:19.603857598Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:05:19.603970 containerd[1617]: time="2025-09-13T00:05:19.603946946Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:05:19.604059 containerd[1617]: time="2025-09-13T00:05:19.604048345Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 13 00:05:19.604148 containerd[1617]: time="2025-09-13T00:05:19.604136801Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 13 00:05:19.604213 containerd[1617]: time="2025-09-13T00:05:19.604187126Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:05:19.604379 containerd[1617]: time="2025-09-13T00:05:19.604365760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:05:19.605831 containerd[1617]: time="2025-09-13T00:05:19.605587682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:05:19.605831 containerd[1617]: time="2025-09-13T00:05:19.605694022Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 13 00:05:19.605831 containerd[1617]: time="2025-09-13T00:05:19.605707998Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 13 00:05:19.605831 containerd[1617]: time="2025-09-13T00:05:19.605718277Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 13 00:05:19.605831 containerd[1617]: time="2025-09-13T00:05:19.605728767Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:05:19.605831 containerd[1617]: time="2025-09-13T00:05:19.605754996Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:05:19.605831 containerd[1617]: time="2025-09-13T00:05:19.605765496Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:05:19.605831 containerd[1617]: time="2025-09-13T00:05:19.605776076Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:05:19.605831 containerd[1617]: time="2025-09-13T00:05:19.605786926Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:05:19.605831 containerd[1617]: time="2025-09-13T00:05:19.605797226Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:05:19.605831 containerd[1617]: time="2025-09-13T00:05:19.605806623Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:05:19.605831 containerd[1617]: time="2025-09-13T00:05:19.605814788Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:05:19.606102 containerd[1617]: time="2025-09-13T00:05:19.606038698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.606102 containerd[1617]: time="2025-09-13T00:05:19.606058365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.606102 containerd[1617]: time="2025-09-13T00:05:19.606068754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.606102 containerd[1617]: time="2025-09-13T00:05:19.606078412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.606206 containerd[1617]: time="2025-09-13T00:05:19.606194220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.606263 containerd[1617]: time="2025-09-13T00:05:19.606243503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.606337 containerd[1617]: time="2025-09-13T00:05:19.606308535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.606390 containerd[1617]: time="2025-09-13T00:05:19.606379888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.606451 containerd[1617]: time="2025-09-13T00:05:19.606427488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.606518 containerd[1617]: time="2025-09-13T00:05:19.606505734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.607277 containerd[1617]: time="2025-09-13T00:05:19.607264658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.607350 containerd[1617]: time="2025-09-13T00:05:19.607339087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.607427 containerd[1617]: time="2025-09-13T00:05:19.607398900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.607503 containerd[1617]: time="2025-09-13T00:05:19.607465785Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 13 00:05:19.607826 containerd[1617]: time="2025-09-13T00:05:19.607487395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.607826 containerd[1617]: time="2025-09-13T00:05:19.607545895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.607826 containerd[1617]: time="2025-09-13T00:05:19.607566393Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:05:19.607826 containerd[1617]: time="2025-09-13T00:05:19.607637777Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:05:19.607826 containerd[1617]: time="2025-09-13T00:05:19.607652585Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 13 00:05:19.607826 containerd[1617]: time="2025-09-13T00:05:19.607660600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:05:19.607826 containerd[1617]: time="2025-09-13T00:05:19.607669888Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 13 00:05:19.607826 containerd[1617]: time="2025-09-13T00:05:19.607678434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.607826 containerd[1617]: time="2025-09-13T00:05:19.607747102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 13 00:05:19.607826 containerd[1617]: time="2025-09-13T00:05:19.607762101Z" level=info msg="NRI interface is disabled by configuration."
Sep 13 00:05:19.607826 containerd[1617]: time="2025-09-13T00:05:19.607770456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:05:19.608274 containerd[1617]: time="2025-09-13T00:05:19.608218567Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:05:19.609105 containerd[1617]: time="2025-09-13T00:05:19.609092335Z" level=info msg="Connect containerd service"
Sep 13 00:05:19.609191 containerd[1617]: time="2025-09-13T00:05:19.609179519Z" level=info msg="using legacy CRI server"
Sep 13 00:05:19.609254 containerd[1617]: time="2025-09-13T00:05:19.609244441Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 13 00:05:19.609386 containerd[1617]: time="2025-09-13T00:05:19.609375327Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:05:19.609967 containerd[1617]: time="2025-09-13T00:05:19.609949884Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:05:19.611189 containerd[1617]: time="2025-09-13T00:05:19.611162909Z" level=info msg="Start subscribing containerd event"
Sep 13 00:05:19.611250 containerd[1617]: time="2025-09-13T00:05:19.611240675Z" level=info msg="Start recovering state"
Sep 13 00:05:19.611353 containerd[1617]: time="2025-09-13T00:05:19.611341464Z" level=info msg="Start event monitor"
Sep 13 00:05:19.613629 containerd[1617]: time="2025-09-13T00:05:19.611394623Z" level=info msg="Start snapshots syncer"
Sep 13 00:05:19.613629 containerd[1617]: time="2025-09-13T00:05:19.611418078Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:05:19.613629 containerd[1617]: time="2025-09-13T00:05:19.611425120Z" level=info msg="Start streaming server"
Sep 13 00:05:19.613629 containerd[1617]: time="2025-09-13T00:05:19.611885114Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:05:19.613629 containerd[1617]: time="2025-09-13T00:05:19.611923525Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:05:19.613629 containerd[1617]: time="2025-09-13T00:05:19.613359158Z" level=info msg="containerd successfully booted in 0.062686s"
Sep 13 00:05:19.612108 systemd[1]: Started containerd.service - containerd container runtime.
Sep 13 00:05:19.830826 tar[1604]: linux-amd64/LICENSE
Sep 13 00:05:19.830826 tar[1604]: linux-amd64/README.md
Sep 13 00:05:19.845177 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 13 00:05:20.381522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:05:20.383866 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 13 00:05:20.394046 systemd[1]: Startup finished in 8.551s (kernel) + 4.399s (userspace) = 12.951s.
Sep 13 00:05:20.394584 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:05:21.066635 kubelet[1726]: E0913 00:05:21.066567 1726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:05:21.068961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:05:21.069157 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:05:23.644270 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 13 00:05:23.654072 systemd[1]: Started sshd@0-37.27.206.127:22-147.75.109.163:41278.service - OpenSSH per-connection server daemon (147.75.109.163:41278).
Sep 13 00:05:24.640106 sshd[1738]: Accepted publickey for core from 147.75.109.163 port 41278 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:05:24.640808 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:24.648348 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 13 00:05:24.656511 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 13 00:05:24.660491 systemd-logind[1583]: New session 1 of user core.
Sep 13 00:05:24.668027 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 13 00:05:24.673777 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 13 00:05:24.677346 (systemd)[1744]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:05:24.771549 systemd[1744]: Queued start job for default target default.target.
Sep 13 00:05:24.771888 systemd[1744]: Created slice app.slice - User Application Slice.
Sep 13 00:05:24.771910 systemd[1744]: Reached target paths.target - Paths.
Sep 13 00:05:24.771920 systemd[1744]: Reached target timers.target - Timers.
Sep 13 00:05:24.776411 systemd[1744]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 13 00:05:24.781935 systemd[1744]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 13 00:05:24.781986 systemd[1744]: Reached target sockets.target - Sockets.
Sep 13 00:05:24.781999 systemd[1744]: Reached target basic.target - Basic System.
Sep 13 00:05:24.782038 systemd[1744]: Reached target default.target - Main User Target.
Sep 13 00:05:24.782066 systemd[1744]: Startup finished in 99ms.
Sep 13 00:05:24.782419 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 13 00:05:24.783641 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 13 00:05:25.474540 systemd[1]: Started sshd@1-37.27.206.127:22-147.75.109.163:41286.service - OpenSSH per-connection server daemon (147.75.109.163:41286).
Sep 13 00:05:26.441361 sshd[1756]: Accepted publickey for core from 147.75.109.163 port 41286 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:05:26.443132 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:26.448683 systemd-logind[1583]: New session 2 of user core.
Sep 13 00:05:26.461775 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 13 00:05:27.118946 sshd[1756]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:27.122494 systemd-logind[1583]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:05:27.123584 systemd[1]: sshd@1-37.27.206.127:22-147.75.109.163:41286.service: Deactivated successfully.
Sep 13 00:05:27.126647 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:05:27.127749 systemd-logind[1583]: Removed session 2.
Sep 13 00:05:27.283628 systemd[1]: Started sshd@2-37.27.206.127:22-147.75.109.163:41294.service - OpenSSH per-connection server daemon (147.75.109.163:41294).
Sep 13 00:05:28.255097 sshd[1764]: Accepted publickey for core from 147.75.109.163 port 41294 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:05:28.256253 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:28.260277 systemd-logind[1583]: New session 3 of user core.
Sep 13 00:05:28.269591 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 13 00:05:28.929147 sshd[1764]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:28.931891 systemd[1]: sshd@2-37.27.206.127:22-147.75.109.163:41294.service: Deactivated successfully.
Sep 13 00:05:28.935084 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:05:28.935945 systemd-logind[1583]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:05:28.937136 systemd-logind[1583]: Removed session 3.
Sep 13 00:05:29.126557 systemd[1]: Started sshd@3-37.27.206.127:22-147.75.109.163:41308.service - OpenSSH per-connection server daemon (147.75.109.163:41308).
Sep 13 00:05:30.198751 sshd[1772]: Accepted publickey for core from 147.75.109.163 port 41308 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:05:30.200013 sshd[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:30.204416 systemd-logind[1583]: New session 4 of user core.
Sep 13 00:05:30.213719 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 13 00:05:30.945437 sshd[1772]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:30.950812 systemd-logind[1583]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:05:30.951221 systemd[1]: sshd@3-37.27.206.127:22-147.75.109.163:41308.service: Deactivated successfully.
Sep 13 00:05:30.956114 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:05:30.957685 systemd-logind[1583]: Removed session 4.
Sep 13 00:05:31.087091 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:05:31.092525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:05:31.094675 systemd[1]: Started sshd@4-37.27.206.127:22-147.75.109.163:41494.service - OpenSSH per-connection server daemon (147.75.109.163:41494).
Sep 13 00:05:31.208480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:05:31.211470 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:05:31.245386 kubelet[1794]: E0913 00:05:31.245333 1794 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:05:31.248529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:05:31.248699 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:05:32.070261 sshd[1781]: Accepted publickey for core from 147.75.109.163 port 41494 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:05:32.071679 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:32.077452 systemd-logind[1583]: New session 5 of user core.
Sep 13 00:05:32.086643 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 13 00:05:32.604995 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 13 00:05:32.605616 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:05:32.620421 sudo[1804]: pam_unix(sudo:session): session closed for user root
Sep 13 00:05:32.779230 sshd[1781]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:32.784300 systemd[1]: sshd@4-37.27.206.127:22-147.75.109.163:41494.service: Deactivated successfully.
Sep 13 00:05:32.790646 systemd-logind[1583]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:05:32.791181 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:05:32.794440 systemd-logind[1583]: Removed session 5.
Sep 13 00:05:32.978714 systemd[1]: Started sshd@5-37.27.206.127:22-147.75.109.163:41510.service - OpenSSH per-connection server daemon (147.75.109.163:41510).
Sep 13 00:05:34.066507 sshd[1809]: Accepted publickey for core from 147.75.109.163 port 41510 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:05:34.068108 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:34.073537 systemd-logind[1583]: New session 6 of user core.
Sep 13 00:05:34.079559 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 13 00:05:34.639861 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 13 00:05:34.640179 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:05:34.643845 sudo[1814]: pam_unix(sudo:session): session closed for user root
Sep 13 00:05:34.648566 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 13 00:05:34.648811 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:05:34.667590 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 13 00:05:34.669279 auditctl[1817]: No rules
Sep 13 00:05:34.670345 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:05:34.670579 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 13 00:05:34.673766 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:05:34.705964 augenrules[1836]: No rules
Sep 13 00:05:34.707235 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:05:34.709848 sudo[1813]: pam_unix(sudo:session): session closed for user root
Sep 13 00:05:34.885728 sshd[1809]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:34.888900 systemd[1]: sshd@5-37.27.206.127:22-147.75.109.163:41510.service: Deactivated successfully.
Sep 13 00:05:34.892443 systemd-logind[1583]: Session 6 logged out. Waiting for processes to exit.
Sep 13 00:05:34.892524 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 00:05:34.895741 systemd-logind[1583]: Removed session 6.
Sep 13 00:05:35.031806 systemd[1]: Started sshd@6-37.27.206.127:22-147.75.109.163:41512.service - OpenSSH per-connection server daemon (147.75.109.163:41512).
Sep 13 00:05:35.997360 sshd[1845]: Accepted publickey for core from 147.75.109.163 port 41512 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:05:35.998660 sshd[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:36.003431 systemd-logind[1583]: New session 7 of user core.
Sep 13 00:05:36.009626 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 13 00:05:36.518263 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:05:36.518828 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:05:36.958640 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 13 00:05:36.968731 (dockerd)[1866]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 13 00:05:37.210413 dockerd[1866]: time="2025-09-13T00:05:37.210249993Z" level=info msg="Starting up"
Sep 13 00:05:37.295229 systemd[1]: var-lib-docker-metacopy\x2dcheck846538950-merged.mount: Deactivated successfully.
Sep 13 00:05:37.313507 dockerd[1866]: time="2025-09-13T00:05:37.313451579Z" level=info msg="Loading containers: start."
Sep 13 00:05:37.413362 kernel: Initializing XFRM netlink socket
Sep 13 00:05:37.490378 systemd-networkd[1257]: docker0: Link UP
Sep 13 00:05:37.504750 dockerd[1866]: time="2025-09-13T00:05:37.504686753Z" level=info msg="Loading containers: done."
Sep 13 00:05:37.518656 dockerd[1866]: time="2025-09-13T00:05:37.518611029Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:05:37.518826 dockerd[1866]: time="2025-09-13T00:05:37.518749189Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 13 00:05:37.518894 dockerd[1866]: time="2025-09-13T00:05:37.518865220Z" level=info msg="Daemon has completed initialization"
Sep 13 00:05:37.547345 dockerd[1866]: time="2025-09-13T00:05:37.546741208Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:05:37.547037 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 13 00:05:38.812535 containerd[1617]: time="2025-09-13T00:05:38.812452768Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:05:39.395146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544947746.mount: Deactivated successfully.
Sep 13 00:05:40.796070 containerd[1617]: time="2025-09-13T00:05:40.796012242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:40.797006 containerd[1617]: time="2025-09-13T00:05:40.796962826Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117224"
Sep 13 00:05:40.798732 containerd[1617]: time="2025-09-13T00:05:40.797759535Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:40.799914 containerd[1617]: time="2025-09-13T00:05:40.799877469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:40.800801 containerd[1617]: time="2025-09-13T00:05:40.800765731Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.988239774s"
Sep 13 00:05:40.800845 containerd[1617]: time="2025-09-13T00:05:40.800801470Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 13 00:05:40.801506 containerd[1617]: time="2025-09-13T00:05:40.801478820Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:05:41.499262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 00:05:41.511731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:05:41.650267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:05:41.651866 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:05:41.697141 kubelet[2073]: E0913 00:05:41.697090 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:05:41.699300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:05:41.699483 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:05:42.276754 containerd[1617]: time="2025-09-13T00:05:42.276691109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:42.278332 containerd[1617]: time="2025-09-13T00:05:42.278265087Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716654"
Sep 13 00:05:42.280500 containerd[1617]: time="2025-09-13T00:05:42.279482234Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:42.282154 containerd[1617]: time="2025-09-13T00:05:42.282130714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:42.282961 containerd[1617]: time="2025-09-13T00:05:42.282930457Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.481423432s"
Sep 13 00:05:42.283002 containerd[1617]: time="2025-09-13T00:05:42.282963831Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 13 00:05:42.283424 containerd[1617]: time="2025-09-13T00:05:42.283390781Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:05:43.885552 containerd[1617]: time="2025-09-13T00:05:43.885493963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:43.886662 containerd[1617]: time="2025-09-13T00:05:43.886476769Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787720"
Sep 13 00:05:43.887845 containerd[1617]: time="2025-09-13T00:05:43.887543371Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:43.890080 containerd[1617]: time="2025-09-13T00:05:43.890043020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:43.890975 containerd[1617]: time="2025-09-13T00:05:43.890944515Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.607515447s"
Sep 13 00:05:43.891023 containerd[1617]: time="2025-09-13T00:05:43.890976143Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 13 00:05:43.892061 containerd[1617]: time="2025-09-13T00:05:43.892024901Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:05:44.883678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2739099670.mount: Deactivated successfully.
Sep 13 00:05:45.203150 containerd[1617]: time="2025-09-13T00:05:45.203012562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:45.205051 containerd[1617]: time="2025-09-13T00:05:45.204981117Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410280"
Sep 13 00:05:45.205950 containerd[1617]: time="2025-09-13T00:05:45.205881885Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:45.208509 containerd[1617]: time="2025-09-13T00:05:45.208467406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:45.209275 containerd[1617]: time="2025-09-13T00:05:45.209244025Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.317189503s"
Sep 13 00:05:45.209275 containerd[1617]: time="2025-09-13T00:05:45.209275159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 13 00:05:45.212623 containerd[1617]: time="2025-09-13T00:05:45.210087000Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:05:45.757506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1810415683.mount: Deactivated successfully.
Sep 13 00:05:46.459700 containerd[1617]: time="2025-09-13T00:05:46.459624771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:46.460735 containerd[1617]: time="2025-09-13T00:05:46.460578027Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335"
Sep 13 00:05:46.461735 containerd[1617]: time="2025-09-13T00:05:46.461369646Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:46.463625 containerd[1617]: time="2025-09-13T00:05:46.463590583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:46.465139 containerd[1617]: time="2025-09-13T00:05:46.464870472Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.254733177s"
Sep 13 00:05:46.465139 containerd[1617]: time="2025-09-13T00:05:46.464898647Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 13 00:05:46.465448 containerd[1617]: time="2025-09-13T00:05:46.465426916Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:05:46.920614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2379419706.mount: Deactivated successfully.
Sep 13 00:05:46.928574 containerd[1617]: time="2025-09-13T00:05:46.928516298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:46.929577 containerd[1617]: time="2025-09-13T00:05:46.929523888Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Sep 13 00:05:46.932357 containerd[1617]: time="2025-09-13T00:05:46.930699650Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:46.935031 containerd[1617]: time="2025-09-13T00:05:46.934995031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:46.935874 containerd[1617]: time="2025-09-13T00:05:46.935804431Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 470.346795ms"
Sep 13 00:05:46.936009 containerd[1617]: time="2025-09-13T00:05:46.935884192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:05:46.937254 containerd[1617]: time="2025-09-13T00:05:46.937190400Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:05:47.486743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1695690222.mount: Deactivated successfully.
Sep 13 00:05:49.349375 containerd[1617]: time="2025-09-13T00:05:49.349098757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:49.350662 containerd[1617]: time="2025-09-13T00:05:49.350455011Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910785"
Sep 13 00:05:49.352286 containerd[1617]: time="2025-09-13T00:05:49.351829254Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:49.354854 containerd[1617]: time="2025-09-13T00:05:49.354828187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:49.355992 containerd[1617]: time="2025-09-13T00:05:49.355954018Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.41871575s"
Sep 13 00:05:49.356047 containerd[1617]: time="2025-09-13T00:05:49.355996659Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 13 00:05:51.677876 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:05:51.683624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:05:51.715368 systemd[1]: Reloading requested from client PID 2236 ('systemctl') (unit session-7.scope)...
Sep 13 00:05:51.715380 systemd[1]: Reloading...
Sep 13 00:05:51.800409 zram_generator::config[2274]: No configuration found.
Sep 13 00:05:51.902370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:05:51.963033 systemd[1]: Reloading finished in 247 ms.
Sep 13 00:05:51.998944 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 00:05:51.999077 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 00:05:51.999548 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:05:52.003846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:05:52.126500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:05:52.135705 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:05:52.174907 kubelet[2339]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:05:52.174907 kubelet[2339]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:05:52.174907 kubelet[2339]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:05:52.175448 kubelet[2339]: I0913 00:05:52.175002 2339 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:05:52.549142 kubelet[2339]: I0913 00:05:52.548962 2339 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:05:52.549142 kubelet[2339]: I0913 00:05:52.548994 2339 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:05:52.550896 kubelet[2339]: I0913 00:05:52.549846 2339 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:05:52.573777 kubelet[2339]: I0913 00:05:52.573720 2339 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:05:52.578646 kubelet[2339]: E0913 00:05:52.578597 2339 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://37.27.206.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 37.27.206.127:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:05:52.583856 kubelet[2339]: E0913 00:05:52.583797 2339 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:05:52.583856 kubelet[2339]: I0913 00:05:52.583831 2339 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:05:52.590038 kubelet[2339]: I0913 00:05:52.589998 2339 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:05:52.591674 kubelet[2339]: I0913 00:05:52.591638 2339 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:05:52.591800 kubelet[2339]: I0913 00:05:52.591755 2339 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:05:52.591944 kubelet[2339]: I0913 00:05:52.591785 2339 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-bd9936ab3a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 00:05:52.591944 kubelet[2339]: I0913 00:05:52.591940 2339 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:05:52.592049 kubelet[2339]: I0913 00:05:52.591948 2339 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:05:52.592049 kubelet[2339]: I0913 00:05:52.592043 2339 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:05:52.596977 kubelet[2339]: I0913 00:05:52.596557 2339 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:05:52.596977 kubelet[2339]: I0913 00:05:52.596586 2339 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:05:52.596977 kubelet[2339]: I0913 00:05:52.596616 2339 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:05:52.596977 kubelet[2339]: I0913 00:05:52.596629 2339 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:05:52.600745 kubelet[2339]: W0913 00:05:52.600701 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.206.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-bd9936ab3a&limit=500&resourceVersion=0": dial tcp 37.27.206.127:6443: connect: connection refused
Sep 13 00:05:52.601096 kubelet[2339]: E0913 00:05:52.600869 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.206.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-bd9936ab3a&limit=500&resourceVersion=0\": dial tcp 37.27.206.127:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:05:52.602841 kubelet[2339]: W0913 00:05:52.602609 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.206.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.206.127:6443: connect: connection refused
Sep 13 00:05:52.602841 kubelet[2339]: E0913 00:05:52.602656 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.206.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.206.127:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:05:52.602841 kubelet[2339]: I0913 00:05:52.602742 2339 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:05:52.605675 kubelet[2339]: I0913 00:05:52.605567 2339 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:05:52.608047 kubelet[2339]: W0913 00:05:52.606548 2339 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:05:52.608047 kubelet[2339]: I0913 00:05:52.607437 2339 server.go:1274] "Started kubelet" Sep 13 00:05:52.610357 kubelet[2339]: I0913 00:05:52.609279 2339 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:05:52.610357 kubelet[2339]: I0913 00:05:52.609846 2339 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:05:52.611080 kubelet[2339]: I0913 00:05:52.611066 2339 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:05:52.615062 kubelet[2339]: I0913 00:05:52.614990 2339 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:05:52.615295 kubelet[2339]: I0913 00:05:52.615281 2339 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:05:52.617565 kubelet[2339]: I0913 00:05:52.617264 2339 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:05:52.618674 kubelet[2339]: I0913 00:05:52.618399 2339 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:05:52.618674 kubelet[2339]: E0913 00:05:52.618547 2339 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-bd9936ab3a\" not found" Sep 13 00:05:52.618996 kubelet[2339]: I0913 00:05:52.618959 2339 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:05:52.619053 kubelet[2339]: I0913 00:05:52.619043 2339 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:05:52.619797 kubelet[2339]: W0913 00:05:52.619744 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.206.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.206.127:6443: connect: connection refused Sep 13 00:05:52.619860 kubelet[2339]: E0913 00:05:52.619807 2339 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.206.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.206.127:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:05:52.622097 kubelet[2339]: E0913 00:05:52.619874 2339 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.206.127:6443/api/v1/namespaces/default/events\": dial tcp 37.27.206.127:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-bd9936ab3a.1864aecd9711e744 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-bd9936ab3a,UID:ci-4081-3-5-n-bd9936ab3a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-bd9936ab3a,},FirstTimestamp:2025-09-13 00:05:52.6073977 +0000 UTC m=+0.468048722,LastTimestamp:2025-09-13 00:05:52.6073977 +0000 UTC m=+0.468048722,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-bd9936ab3a,}"
Sep 13 00:05:52.622201 kubelet[2339]: E0913 00:05:52.622157 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.206.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-bd9936ab3a?timeout=10s\": dial tcp 37.27.206.127:6443: connect: connection refused" interval="200ms"
Sep 13 00:05:52.623288 kubelet[2339]: I0913 00:05:52.622338 2339 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:05:52.623545 kubelet[2339]: I0913 00:05:52.623379 2339 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:05:52.627403 kubelet[2339]: I0913 00:05:52.626832 2339 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:05:52.634365 kubelet[2339]: I0913 00:05:52.632472 2339 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:05:52.636268 kubelet[2339]: I0913 00:05:52.636246 2339 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:05:52.636408 kubelet[2339]: I0913 00:05:52.636398 2339 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:05:52.636501 kubelet[2339]: I0913 00:05:52.636492 2339 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:05:52.636591 kubelet[2339]: E0913 00:05:52.636572 2339 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:05:52.642893 kubelet[2339]: W0913 00:05:52.642856 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.206.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.206.127:6443: connect: connection refused
Sep 13 00:05:52.643018 kubelet[2339]: E0913 00:05:52.642998 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.206.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.206.127:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:05:52.660558 kubelet[2339]: E0913 00:05:52.660524 2339 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:05:52.666259 kubelet[2339]: I0913 00:05:52.666228 2339 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:05:52.666259 kubelet[2339]: I0913 00:05:52.666244 2339 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:05:52.666259 kubelet[2339]: I0913 00:05:52.666257 2339 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:05:52.667670 kubelet[2339]: I0913 00:05:52.667648 2339 policy_none.go:49] "None policy: Start"
Sep 13 00:05:52.668190 kubelet[2339]: I0913 00:05:52.668170 2339 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:05:52.668259 kubelet[2339]: I0913 00:05:52.668188 2339 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:05:52.671695 kubelet[2339]: I0913 00:05:52.671666 2339 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:05:52.671835 kubelet[2339]: I0913 00:05:52.671812 2339 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:05:52.671890 kubelet[2339]: I0913 00:05:52.671827 2339 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:05:52.674171 kubelet[2339]: I0913 00:05:52.674020 2339 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:05:52.675435 kubelet[2339]: E0913 00:05:52.675410 2339 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-bd9936ab3a\" not found"
Sep 13 00:05:52.774242 kubelet[2339]: I0913 00:05:52.774190 2339 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:52.774636 kubelet[2339]: E0913 00:05:52.774600 2339 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://37.27.206.127:6443/api/v1/nodes\": dial tcp
37.27.206.127:6443: connect: connection refused" node="ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:52.823481 kubelet[2339]: E0913 00:05:52.823301 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.206.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-bd9936ab3a?timeout=10s\": dial tcp 37.27.206.127:6443: connect: connection refused" interval="400ms"
Sep 13 00:05:52.920965 kubelet[2339]: I0913 00:05:52.920910 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b24674b9d9c952183dfb0dd3ad98ccd6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-bd9936ab3a\" (UID: \"b24674b9d9c952183dfb0dd3ad98ccd6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:52.920965 kubelet[2339]: I0913 00:05:52.920964 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b24674b9d9c952183dfb0dd3ad98ccd6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-bd9936ab3a\" (UID: \"b24674b9d9c952183dfb0dd3ad98ccd6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:52.920965 kubelet[2339]: I0913 00:05:52.920988 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f65af354ae7f60e5a243e0999f4f1671-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-bd9936ab3a\" (UID: \"f65af354ae7f60e5a243e0999f4f1671\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:52.921370 kubelet[2339]: I0913 00:05:52.921010 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f65af354ae7f60e5a243e0999f4f1671-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-bd9936ab3a\" (UID: \"f65af354ae7f60e5a243e0999f4f1671\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:52.921370 kubelet[2339]: I0913 00:05:52.921029 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b24674b9d9c952183dfb0dd3ad98ccd6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-bd9936ab3a\" (UID: \"b24674b9d9c952183dfb0dd3ad98ccd6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:52.921370 kubelet[2339]: I0913 00:05:52.921045 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b24674b9d9c952183dfb0dd3ad98ccd6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-bd9936ab3a\" (UID: \"b24674b9d9c952183dfb0dd3ad98ccd6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:52.921370 kubelet[2339]: I0913 00:05:52.921060 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d970de2b5c35bd70b56303a58ea4dc38-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-bd9936ab3a\" (UID: \"d970de2b5c35bd70b56303a58ea4dc38\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:52.921370 kubelet[2339]: I0913 00:05:52.921117 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f65af354ae7f60e5a243e0999f4f1671-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-bd9936ab3a\" (UID: \"f65af354ae7f60e5a243e0999f4f1671\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:52.921564 kubelet[2339]: I0913 00:05:52.921140 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b24674b9d9c952183dfb0dd3ad98ccd6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-bd9936ab3a\" (UID: \"b24674b9d9c952183dfb0dd3ad98ccd6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:52.976671 kubelet[2339]: I0913 00:05:52.976632 2339 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:52.976963 kubelet[2339]: E0913 00:05:52.976931 2339 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://37.27.206.127:6443/api/v1/nodes\": dial tcp 37.27.206.127:6443: connect: connection refused" node="ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:53.044276 containerd[1617]: time="2025-09-13T00:05:53.044227926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-bd9936ab3a,Uid:f65af354ae7f60e5a243e0999f4f1671,Namespace:kube-system,Attempt:0,}"
Sep 13 00:05:53.054339 containerd[1617]: time="2025-09-13T00:05:53.052879833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-bd9936ab3a,Uid:b24674b9d9c952183dfb0dd3ad98ccd6,Namespace:kube-system,Attempt:0,}"
Sep 13 00:05:53.058278 containerd[1617]: time="2025-09-13T00:05:53.052883690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-bd9936ab3a,Uid:d970de2b5c35bd70b56303a58ea4dc38,Namespace:kube-system,Attempt:0,}"
Sep 13 00:05:53.225279 kubelet[2339]: E0913 00:05:53.225042 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.206.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-bd9936ab3a?timeout=10s\": dial tcp 37.27.206.127:6443: connect: connection refused" interval="800ms"
Sep 13 00:05:53.379875 kubelet[2339]: I0913 00:05:53.379821 2339 kubelet_node_status.go:72] "Attempting to register node"
node="ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:53.380385 kubelet[2339]: E0913 00:05:53.380289 2339 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://37.27.206.127:6443/api/v1/nodes\": dial tcp 37.27.206.127:6443: connect: connection refused" node="ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:53.491617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321381758.mount: Deactivated successfully.
Sep 13 00:05:53.501081 containerd[1617]: time="2025-09-13T00:05:53.501007238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:05:53.502409 containerd[1617]: time="2025-09-13T00:05:53.502316230Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:05:53.503416 containerd[1617]: time="2025-09-13T00:05:53.503353088Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:05:53.504077 containerd[1617]: time="2025-09-13T00:05:53.504013551Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:05:53.504576 containerd[1617]: time="2025-09-13T00:05:53.504536418Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:05:53.505828 containerd[1617]: time="2025-09-13T00:05:53.505782502Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:05:53.506433 containerd[1617]: time="2025-09-13T00:05:53.506374923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078"
Sep 13 00:05:53.510780 containerd[1617]: time="2025-09-13T00:05:53.510746125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:05:53.512400 containerd[1617]: time="2025-09-13T00:05:53.511870113Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 457.376765ms"
Sep 13 00:05:53.513987 containerd[1617]: time="2025-09-13T00:05:53.513956904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 469.647999ms"
Sep 13 00:05:53.516938 containerd[1617]: time="2025-09-13T00:05:53.516889463Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 461.607084ms"
Sep 13 00:05:53.682956 containerd[1617]: time="2025-09-13T00:05:53.679608209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:05:53.682956 containerd[1617]: time="2025-09-13T00:05:53.679693506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:05:53.682956 containerd[1617]: time="2025-09-13T00:05:53.679722366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:05:53.682956 containerd[1617]: time="2025-09-13T00:05:53.679797332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:05:53.685704 containerd[1617]: time="2025-09-13T00:05:53.685402982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:05:53.685704 containerd[1617]: time="2025-09-13T00:05:53.685462245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:05:53.685704 containerd[1617]: time="2025-09-13T00:05:53.685489753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:05:53.685704 containerd[1617]: time="2025-09-13T00:05:53.685576624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:05:53.687397 containerd[1617]: time="2025-09-13T00:05:53.687187073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:05:53.687397 containerd[1617]: time="2025-09-13T00:05:53.687305159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:05:53.687397 containerd[1617]: time="2025-09-13T00:05:53.687375404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:05:53.688164 containerd[1617]: time="2025-09-13T00:05:53.688120865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:05:53.750120 containerd[1617]: time="2025-09-13T00:05:53.749948047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-bd9936ab3a,Uid:f65af354ae7f60e5a243e0999f4f1671,Namespace:kube-system,Attempt:0,} returns sandbox id \"c57bd94fe1d91c9e083941c7d85a4d716d5104166ab65aa7abd476802011328c\""
Sep 13 00:05:53.762340 containerd[1617]: time="2025-09-13T00:05:53.762240724Z" level=info msg="CreateContainer within sandbox \"c57bd94fe1d91c9e083941c7d85a4d716d5104166ab65aa7abd476802011328c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 00:05:53.776898 containerd[1617]: time="2025-09-13T00:05:53.775681813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-bd9936ab3a,Uid:b24674b9d9c952183dfb0dd3ad98ccd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cff485006ec7b20e041ae65bb72d4d7fe9c2c1613221dbebfbd58e05a9f0e87\""
Sep 13 00:05:53.781552 containerd[1617]: time="2025-09-13T00:05:53.781527502Z" level=info msg="CreateContainer within sandbox \"9cff485006ec7b20e041ae65bb72d4d7fe9c2c1613221dbebfbd58e05a9f0e87\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 00:05:53.786235 containerd[1617]: time="2025-09-13T00:05:53.786208698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-bd9936ab3a,Uid:d970de2b5c35bd70b56303a58ea4dc38,Namespace:kube-system,Attempt:0,} returns sandbox id \"1921175748620eeb6bd961adc50611a469f41ac9b596af49d3151f809b9ef8ec\""
Sep 13 00:05:53.788757 containerd[1617]: time="2025-09-13T00:05:53.788717217Z" level=info msg="CreateContainer within sandbox \"1921175748620eeb6bd961adc50611a469f41ac9b596af49d3151f809b9ef8ec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 00:05:53.792385 containerd[1617]: time="2025-09-13T00:05:53.792255697Z" level=info msg="CreateContainer within sandbox \"c57bd94fe1d91c9e083941c7d85a4d716d5104166ab65aa7abd476802011328c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3ad607286eb3bc0e227e6f6a3e714af8dd64cda505b09fcfe04f7786f0a73651\""
Sep 13 00:05:53.792988 containerd[1617]: time="2025-09-13T00:05:53.792946543Z" level=info msg="StartContainer for \"3ad607286eb3bc0e227e6f6a3e714af8dd64cda505b09fcfe04f7786f0a73651\""
Sep 13 00:05:53.818025 containerd[1617]: time="2025-09-13T00:05:53.817897571Z" level=info msg="CreateContainer within sandbox \"9cff485006ec7b20e041ae65bb72d4d7fe9c2c1613221dbebfbd58e05a9f0e87\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0974bdb7a2b9c25be73b6da00688662ffb42e1add2c15394db5a34b89030c4ec\""
Sep 13 00:05:53.819156 containerd[1617]: time="2025-09-13T00:05:53.818630646Z" level=info msg="StartContainer for \"0974bdb7a2b9c25be73b6da00688662ffb42e1add2c15394db5a34b89030c4ec\""
Sep 13 00:05:53.835220 containerd[1617]: time="2025-09-13T00:05:53.835169377Z" level=info msg="CreateContainer within sandbox \"1921175748620eeb6bd961adc50611a469f41ac9b596af49d3151f809b9ef8ec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1bcdcd8164e4a51ac2ff1fd93ea317129454ee22cba0eb149987242bdcca2f32\""
Sep 13 00:05:53.837263 containerd[1617]: time="2025-09-13T00:05:53.836419968Z" level=info msg="StartContainer for \"1bcdcd8164e4a51ac2ff1fd93ea317129454ee22cba0eb149987242bdcca2f32\""
Sep 13 00:05:53.879296 containerd[1617]: time="2025-09-13T00:05:53.878786792Z" level=info msg="StartContainer for \"3ad607286eb3bc0e227e6f6a3e714af8dd64cda505b09fcfe04f7786f0a73651\" returns successfully"
Sep 13 00:05:53.909858 containerd[1617]: time="2025-09-13T00:05:53.909670921Z" level=info msg="StartContainer for \"1bcdcd8164e4a51ac2ff1fd93ea317129454ee22cba0eb149987242bdcca2f32\" returns successfully"
Sep 13 00:05:53.911022 containerd[1617]: time="2025-09-13T00:05:53.909741597Z" level=info msg="StartContainer for \"0974bdb7a2b9c25be73b6da00688662ffb42e1add2c15394db5a34b89030c4ec\" returns successfully"
Sep 13 00:05:53.931883 kubelet[2339]: W0913 00:05:53.931686 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.206.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-bd9936ab3a&limit=500&resourceVersion=0": dial tcp 37.27.206.127:6443: connect: connection refused
Sep 13 00:05:53.931883 kubelet[2339]: E0913 00:05:53.931769 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.206.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-bd9936ab3a&limit=500&resourceVersion=0\": dial tcp 37.27.206.127:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:05:53.939492 kubelet[2339]: W0913 00:05:53.938634 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.206.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.206.127:6443: connect: connection refused
Sep 13 00:05:53.939492 kubelet[2339]: E0913 00:05:53.938685 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.206.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.206.127:6443: connect: connection refused"
logger="UnhandledError"
Sep 13 00:05:54.017730 kubelet[2339]: W0913 00:05:54.017587 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.206.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.206.127:6443: connect: connection refused
Sep 13 00:05:54.018268 kubelet[2339]: E0913 00:05:54.018246 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.206.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.206.127:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:05:54.022976 kubelet[2339]: W0913 00:05:54.022925 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.206.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.206.127:6443: connect: connection refused
Sep 13 00:05:54.022976 kubelet[2339]: E0913 00:05:54.022962 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.206.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.206.127:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:05:54.026362 kubelet[2339]: E0913 00:05:54.025354 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.206.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-bd9936ab3a?timeout=10s\": dial tcp 37.27.206.127:6443: connect: connection refused" interval="1.6s"
Sep 13 00:05:54.056949 kubelet[2339]: E0913 00:05:54.055823 2339 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.206.127:6443/api/v1/namespaces/default/events\": dial tcp 37.27.206.127:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-bd9936ab3a.1864aecd9711e744 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-bd9936ab3a,UID:ci-4081-3-5-n-bd9936ab3a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-bd9936ab3a,},FirstTimestamp:2025-09-13 00:05:52.6073977 +0000 UTC m=+0.468048722,LastTimestamp:2025-09-13 00:05:52.6073977 +0000 UTC m=+0.468048722,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-bd9936ab3a,}"
Sep 13 00:05:54.183843 kubelet[2339]: I0913 00:05:54.183486 2339 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:55.525453 kubelet[2339]: I0913 00:05:55.525346 2339 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:55.525453 kubelet[2339]: E0913 00:05:55.525384 2339 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-n-bd9936ab3a\": node \"ci-4081-3-5-n-bd9936ab3a\" not found"
Sep 13 00:05:55.540300 kubelet[2339]: E0913 00:05:55.540115 2339 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-bd9936ab3a\" not found"
Sep 13 00:05:55.641015 kubelet[2339]: E0913 00:05:55.640961 2339 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-bd9936ab3a\" not found"
Sep 13 00:05:55.742093 kubelet[2339]: E0913 00:05:55.742039 2339 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-bd9936ab3a\" not found"
Sep 13 00:05:55.842785 kubelet[2339]: E0913 00:05:55.842664 2339 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-bd9936ab3a\" not found"
Sep 13 00:05:56.297873 kubelet[2339]: E0913 00:05:56.297825 2339 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-5-n-bd9936ab3a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:56.602636 kubelet[2339]: I0913 00:05:56.602592 2339 apiserver.go:52] "Watching apiserver"
Sep 13 00:05:56.619148 kubelet[2339]: I0913 00:05:56.619118 2339 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:05:57.584764 systemd[1]: Reloading requested from client PID 2618 ('systemctl') (unit session-7.scope)...
Sep 13 00:05:57.584791 systemd[1]: Reloading...
Sep 13 00:05:57.676619 zram_generator::config[2654]: No configuration found.
Sep 13 00:05:57.771678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:05:57.834461 systemd[1]: Reloading finished in 249 ms.
Sep 13 00:05:57.861446 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:05:57.880222 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:05:57.880535 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:05:57.885801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:05:57.972476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:05:57.977116 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:05:58.039073 kubelet[2719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:05:58.039073 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:05:58.039073 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:05:58.039535 kubelet[2719]: I0913 00:05:58.039168 2719 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:05:58.044933 kubelet[2719]: I0913 00:05:58.044903 2719 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:05:58.044933 kubelet[2719]: I0913 00:05:58.044923 2719 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:05:58.045178 kubelet[2719]: I0913 00:05:58.045153 2719 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:05:58.046191 kubelet[2719]: I0913 00:05:58.046166 2719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 13 00:05:58.049344 kubelet[2719]: I0913 00:05:58.049069 2719 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:05:58.053767 kubelet[2719]: E0913 00:05:58.053720 2719 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:05:58.053767 kubelet[2719]: I0913 00:05:58.053757 2719 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:05:58.059363 kubelet[2719]: I0913 00:05:58.057475 2719 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:05:58.059363 kubelet[2719]: I0913 00:05:58.057899 2719 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:05:58.059363 kubelet[2719]: I0913 00:05:58.057998 2719 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:05:58.059363 kubelet[2719]: I0913 00:05:58.058030 2719 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-bd9936ab3a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 00:05:58.059666 kubelet[2719]: I0913 00:05:58.058371 2719 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:05:58.059666 kubelet[2719]: I0913 00:05:58.058383 2719 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:05:58.059666 kubelet[2719]: I0913 00:05:58.058423 2719 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:05:58.059666 kubelet[2719]: I0913 00:05:58.058545 2719 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:05:58.059666 kubelet[2719]: I0913 00:05:58.058564 2719 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:05:58.059666 kubelet[2719]: I0913 00:05:58.058599 2719 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:05:58.059666 kubelet[2719]: I0913 00:05:58.058612 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:05:58.063515 kubelet[2719]: I0913 00:05:58.063501 2719 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:05:58.064174 kubelet[2719]: I0913 00:05:58.064146 2719 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:05:58.064754 kubelet[2719]: I0913 00:05:58.064733 2719 server.go:1274] "Started kubelet"
Sep 13 00:05:58.068705 kubelet[2719]: I0913 00:05:58.068693 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:05:58.079049 kubelet[2719]: I0913 00:05:58.079021 2719 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:05:58.080040 kubelet[2719]: I0913 00:05:58.080023 2719 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:05:58.080984
kubelet[2719]: I0913 00:05:58.080956 2719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:05:58.081169 kubelet[2719]: I0913 00:05:58.081151 2719 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:05:58.081392 kubelet[2719]: I0913 00:05:58.081373 2719 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:05:58.084126 kubelet[2719]: I0913 00:05:58.084109 2719 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:05:58.084382 kubelet[2719]: E0913 00:05:58.084363 2719 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-bd9936ab3a\" not found"
Sep 13 00:05:58.088505 kubelet[2719]: I0913 00:05:58.088491 2719 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:05:58.089507 kubelet[2719]: I0913 00:05:58.089494 2719 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:05:58.091359 kubelet[2719]: I0913 00:05:58.091317 2719 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:05:58.091531 kubelet[2719]: I0913 00:05:58.091513 2719 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:05:58.098101 kubelet[2719]: I0913 00:05:58.098065 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:05:58.098349 kubelet[2719]: E0913 00:05:58.098284 2719 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:05:58.098981 kubelet[2719]: I0913 00:05:58.098963 2719 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:05:58.099543 kubelet[2719]: I0913 00:05:58.099510 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:05:58.099543 kubelet[2719]: I0913 00:05:58.099532 2719 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:05:58.099662 kubelet[2719]: I0913 00:05:58.099551 2719 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:05:58.099662 kubelet[2719]: E0913 00:05:58.099593 2719 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:05:58.145080 kubelet[2719]: I0913 00:05:58.144864 2719 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:05:58.145080 kubelet[2719]: I0913 00:05:58.144882 2719 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:05:58.145080 kubelet[2719]: I0913 00:05:58.144897 2719 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:05:58.145509 kubelet[2719]: I0913 00:05:58.145424 2719 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:05:58.145509 kubelet[2719]: I0913 00:05:58.145438 2719 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:05:58.145509 kubelet[2719]: I0913 00:05:58.145454 2719 policy_none.go:49] "None policy: Start"
Sep 13 00:05:58.146157 kubelet[2719]: I0913 00:05:58.146135 2719 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:05:58.146355 kubelet[2719]: I0913 00:05:58.146246 2719 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:05:58.146521 kubelet[2719]: I0913 00:05:58.146511 2719 state_mem.go:75] "Updated machine memory state"
Sep 13 00:05:58.148154 kubelet[2719]: I0913 00:05:58.147364 2719 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:05:58.148154 kubelet[2719]: I0913 00:05:58.147502 2719 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:05:58.148154 kubelet[2719]: I0913 00:05:58.147511 2719 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:05:58.148154 kubelet[2719]: I0913 00:05:58.148088 2719 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:05:58.252064 kubelet[2719]: I0913 00:05:58.252016 2719 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:58.263111 kubelet[2719]: I0913 00:05:58.261975 2719 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:58.263111 kubelet[2719]: I0913 00:05:58.262067 2719 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:58.290282 kubelet[2719]: I0913 00:05:58.290217 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f65af354ae7f60e5a243e0999f4f1671-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-bd9936ab3a\" (UID: \"f65af354ae7f60e5a243e0999f4f1671\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:58.290282 kubelet[2719]: I0913 00:05:58.290267 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f65af354ae7f60e5a243e0999f4f1671-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-bd9936ab3a\" (UID: \"f65af354ae7f60e5a243e0999f4f1671\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-bd9936ab3a"
Sep 13 00:05:58.290282 kubelet[2719]: I0913 00:05:58.290288 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b24674b9d9c952183dfb0dd3ad98ccd6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-bd9936ab3a\" (UID: \"b24674b9d9c952183dfb0dd3ad98ccd6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-bd9936ab3a" Sep 13 00:05:58.290580 kubelet[2719]: I0913 00:05:58.290307 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b24674b9d9c952183dfb0dd3ad98ccd6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-bd9936ab3a\" (UID: \"b24674b9d9c952183dfb0dd3ad98ccd6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-bd9936ab3a" Sep 13 00:05:58.290580 kubelet[2719]: I0913 00:05:58.290350 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d970de2b5c35bd70b56303a58ea4dc38-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-bd9936ab3a\" (UID: \"d970de2b5c35bd70b56303a58ea4dc38\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-bd9936ab3a" Sep 13 00:05:58.290580 kubelet[2719]: I0913 00:05:58.290366 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f65af354ae7f60e5a243e0999f4f1671-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-bd9936ab3a\" (UID: \"f65af354ae7f60e5a243e0999f4f1671\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-bd9936ab3a" Sep 13 00:05:58.290580 kubelet[2719]: I0913 00:05:58.290382 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b24674b9d9c952183dfb0dd3ad98ccd6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-bd9936ab3a\" (UID: \"b24674b9d9c952183dfb0dd3ad98ccd6\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-5-n-bd9936ab3a" Sep 13 00:05:58.290580 kubelet[2719]: I0913 00:05:58.290397 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b24674b9d9c952183dfb0dd3ad98ccd6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-bd9936ab3a\" (UID: \"b24674b9d9c952183dfb0dd3ad98ccd6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-bd9936ab3a" Sep 13 00:05:58.290761 kubelet[2719]: I0913 00:05:58.290416 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b24674b9d9c952183dfb0dd3ad98ccd6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-bd9936ab3a\" (UID: \"b24674b9d9c952183dfb0dd3ad98ccd6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-bd9936ab3a" Sep 13 00:05:58.597515 sudo[2750]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:05:58.598210 sudo[2750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 13 00:05:59.062693 kubelet[2719]: I0913 00:05:59.062582 2719 apiserver.go:52] "Watching apiserver" Sep 13 00:05:59.089659 kubelet[2719]: I0913 00:05:59.089608 2719 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:05:59.092297 sudo[2750]: pam_unix(sudo:session): session closed for user root Sep 13 00:05:59.162848 kubelet[2719]: I0913 00:05:59.162703 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-bd9936ab3a" podStartSLOduration=1.162686241 podStartE2EDuration="1.162686241s" podCreationTimestamp="2025-09-13 00:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:59.152045314 +0000 
UTC m=+1.168851195" watchObservedRunningTime="2025-09-13 00:05:59.162686241 +0000 UTC m=+1.179492132" Sep 13 00:05:59.174968 kubelet[2719]: I0913 00:05:59.174433 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-bd9936ab3a" podStartSLOduration=1.174417842 podStartE2EDuration="1.174417842s" podCreationTimestamp="2025-09-13 00:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:59.163160339 +0000 UTC m=+1.179966220" watchObservedRunningTime="2025-09-13 00:05:59.174417842 +0000 UTC m=+1.191223724" Sep 13 00:05:59.186398 kubelet[2719]: I0913 00:05:59.185251 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-bd9936ab3a" podStartSLOduration=1.185239695 podStartE2EDuration="1.185239695s" podCreationTimestamp="2025-09-13 00:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:59.1745698 +0000 UTC m=+1.191375681" watchObservedRunningTime="2025-09-13 00:05:59.185239695 +0000 UTC m=+1.202045577" Sep 13 00:06:00.622115 sudo[1849]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:00.781042 sshd[1845]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:00.785410 systemd[1]: sshd@6-37.27.206.127:22-147.75.109.163:41512.service: Deactivated successfully. Sep 13 00:06:00.789045 systemd-logind[1583]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:06:00.790090 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:06:00.793601 systemd-logind[1583]: Removed session 7. 
Sep 13 00:06:02.005843 kubelet[2719]: I0913 00:06:02.005795 2719 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:06:02.006308 containerd[1617]: time="2025-09-13T00:06:02.006142036Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:06:02.006609 kubelet[2719]: I0913 00:06:02.006438 2719 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:06:02.117418 kubelet[2719]: I0913 00:06:02.115073 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-clustermesh-secrets\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.117418 kubelet[2719]: I0913 00:06:02.115108 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-host-proc-sys-kernel\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.117418 kubelet[2719]: I0913 00:06:02.115125 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922-kube-proxy\") pod \"kube-proxy-f9sr7\" (UID: \"7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922\") " pod="kube-system/kube-proxy-f9sr7"
Sep 13 00:06:02.117418 kubelet[2719]: I0913 00:06:02.115140 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqzx6\" (UniqueName: \"kubernetes.io/projected/7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922-kube-api-access-mqzx6\") pod \"kube-proxy-f9sr7\" (UID: \"7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922\") " pod="kube-system/kube-proxy-f9sr7"
Sep 13 00:06:02.117418 kubelet[2719]: I0913 00:06:02.115156 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-hostproc\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.117623 kubelet[2719]: I0913 00:06:02.115169 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfj6h\" (UniqueName: \"kubernetes.io/projected/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-kube-api-access-rfj6h\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.117623 kubelet[2719]: I0913 00:06:02.115183 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-hubble-tls\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.117623 kubelet[2719]: I0913 00:06:02.115199 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922-lib-modules\") pod \"kube-proxy-f9sr7\" (UID: \"7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922\") " pod="kube-system/kube-proxy-f9sr7"
Sep 13 00:06:02.117623 kubelet[2719]: I0913 00:06:02.115220 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cilium-config-path\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.117623 kubelet[2719]: I0913 00:06:02.115235 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cilium-run\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.117623 kubelet[2719]: I0913 00:06:02.115249 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922-xtables-lock\") pod \"kube-proxy-f9sr7\" (UID: \"7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922\") " pod="kube-system/kube-proxy-f9sr7"
Sep 13 00:06:02.117746 kubelet[2719]: I0913 00:06:02.115261 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cilium-cgroup\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.117746 kubelet[2719]: I0913 00:06:02.115274 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cni-path\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.117746 kubelet[2719]: I0913 00:06:02.115292 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-lib-modules\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.117746 kubelet[2719]: I0913 00:06:02.115310 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-bpf-maps\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.117746 kubelet[2719]: I0913 00:06:02.115344 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-xtables-lock\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.117746 kubelet[2719]: I0913 00:06:02.115361 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-etc-cni-netd\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.117868 kubelet[2719]: I0913 00:06:02.115376 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-host-proc-sys-net\") pod \"cilium-2cwzm\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " pod="kube-system/cilium-2cwzm"
Sep 13 00:06:02.231355 kubelet[2719]: E0913 00:06:02.229955 2719 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 13 00:06:02.231355 kubelet[2719]: E0913 00:06:02.229991 2719 projected.go:194] Error preparing data for projected volume kube-api-access-rfj6h for pod kube-system/cilium-2cwzm: configmap "kube-root-ca.crt" not found
Sep 13 00:06:02.234347 kubelet[2719]: E0913 00:06:02.233373 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-kube-api-access-rfj6h podName:90f480a3-a7fa-441c-bdb4-90ae331c9ea7 nodeName:}" failed. No retries permitted until 2025-09-13 00:06:02.731574006 +0000 UTC m=+4.748379886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rfj6h" (UniqueName: "kubernetes.io/projected/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-kube-api-access-rfj6h") pod "cilium-2cwzm" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7") : configmap "kube-root-ca.crt" not found
Sep 13 00:06:02.238540 kubelet[2719]: E0913 00:06:02.237806 2719 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 13 00:06:02.238540 kubelet[2719]: E0913 00:06:02.237830 2719 projected.go:194] Error preparing data for projected volume kube-api-access-mqzx6 for pod kube-system/kube-proxy-f9sr7: configmap "kube-root-ca.crt" not found
Sep 13 00:06:02.238691 kubelet[2719]: E0913 00:06:02.238678 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922-kube-api-access-mqzx6 podName:7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922 nodeName:}" failed. No retries permitted until 2025-09-13 00:06:02.738656956 +0000 UTC m=+4.755462837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mqzx6" (UniqueName: "kubernetes.io/projected/7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922-kube-api-access-mqzx6") pod "kube-proxy-f9sr7" (UID: "7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922") : configmap "kube-root-ca.crt" not found
Sep 13 00:06:02.821362 kubelet[2719]: E0913 00:06:02.821297 2719 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 13 00:06:02.821362 kubelet[2719]: E0913 00:06:02.821360 2719 projected.go:194] Error preparing data for projected volume kube-api-access-mqzx6 for pod kube-system/kube-proxy-f9sr7: configmap "kube-root-ca.crt" not found
Sep 13 00:06:02.821584 kubelet[2719]: E0913 00:06:02.821407 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922-kube-api-access-mqzx6 podName:7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922 nodeName:}" failed. No retries permitted until 2025-09-13 00:06:03.821391426 +0000 UTC m=+5.838197317 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-mqzx6" (UniqueName: "kubernetes.io/projected/7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922-kube-api-access-mqzx6") pod "kube-proxy-f9sr7" (UID: "7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922") : configmap "kube-root-ca.crt" not found
Sep 13 00:06:02.821975 kubelet[2719]: E0913 00:06:02.821297 2719 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 13 00:06:02.821975 kubelet[2719]: E0913 00:06:02.821885 2719 projected.go:194] Error preparing data for projected volume kube-api-access-rfj6h for pod kube-system/cilium-2cwzm: configmap "kube-root-ca.crt" not found
Sep 13 00:06:02.821975 kubelet[2719]: E0913 00:06:02.821927 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-kube-api-access-rfj6h podName:90f480a3-a7fa-441c-bdb4-90ae331c9ea7 nodeName:}" failed. No retries permitted until 2025-09-13 00:06:03.821912548 +0000 UTC m=+5.838718439 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rfj6h" (UniqueName: "kubernetes.io/projected/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-kube-api-access-rfj6h") pod "cilium-2cwzm" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7") : configmap "kube-root-ca.crt" not found
Sep 13 00:06:03.226666 kubelet[2719]: I0913 00:06:03.226536 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33b1d426-f591-4dbd-99f3-8414ffcf6d2c-cilium-config-path\") pod \"cilium-operator-5d85765b45-n7ff2\" (UID: \"33b1d426-f591-4dbd-99f3-8414ffcf6d2c\") " pod="kube-system/cilium-operator-5d85765b45-n7ff2"
Sep 13 00:06:03.226666 kubelet[2719]: I0913 00:06:03.226609 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b45cc\" (UniqueName: \"kubernetes.io/projected/33b1d426-f591-4dbd-99f3-8414ffcf6d2c-kube-api-access-b45cc\") pod \"cilium-operator-5d85765b45-n7ff2\" (UID: \"33b1d426-f591-4dbd-99f3-8414ffcf6d2c\") " pod="kube-system/cilium-operator-5d85765b45-n7ff2"
Sep 13 00:06:03.456099 containerd[1617]: time="2025-09-13T00:06:03.455936878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n7ff2,Uid:33b1d426-f591-4dbd-99f3-8414ffcf6d2c,Namespace:kube-system,Attempt:0,}"
Sep 13 00:06:03.494495 containerd[1617]: time="2025-09-13T00:06:03.494123019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:06:03.494495 containerd[1617]: time="2025-09-13T00:06:03.494276245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:06:03.494707 containerd[1617]: time="2025-09-13T00:06:03.494386575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:06:03.495950 containerd[1617]: time="2025-09-13T00:06:03.495850283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:06:03.589202 containerd[1617]: time="2025-09-13T00:06:03.588471169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n7ff2,Uid:33b1d426-f591-4dbd-99f3-8414ffcf6d2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6bfb8bd84ee935498f3e7534b2d25754f5d5dae675545db3b5cdb9d520137b1\""
Sep 13 00:06:03.591946 containerd[1617]: time="2025-09-13T00:06:03.591668605Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 00:06:03.913266 containerd[1617]: time="2025-09-13T00:06:03.913140968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9sr7,Uid:7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922,Namespace:kube-system,Attempt:0,}"
Sep 13 00:06:03.922128 containerd[1617]: time="2025-09-13T00:06:03.922034404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cwzm,Uid:90f480a3-a7fa-441c-bdb4-90ae331c9ea7,Namespace:kube-system,Attempt:0,}"
Sep 13 00:06:03.946631 containerd[1617]: time="2025-09-13T00:06:03.946447549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:06:03.946631 containerd[1617]: time="2025-09-13T00:06:03.946558661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:06:03.946631 containerd[1617]: time="2025-09-13T00:06:03.946587579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:06:03.947516 containerd[1617]: time="2025-09-13T00:06:03.947159580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:06:03.960663 containerd[1617]: time="2025-09-13T00:06:03.960508690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:06:03.961095 containerd[1617]: time="2025-09-13T00:06:03.960858859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:06:03.961095 containerd[1617]: time="2025-09-13T00:06:03.960958407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:06:03.962060 containerd[1617]: time="2025-09-13T00:06:03.961762921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:06:04.035070 containerd[1617]: time="2025-09-13T00:06:04.034700086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9sr7,Uid:7e8b3e9d-5386-4e40-b8b8-4ff9ecda5922,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ae3a44fdbf2ddb2aae629eb4aacf4cd582ce491f8438eaa37f00e920d4fc770\""
Sep 13 00:06:04.037020 containerd[1617]: time="2025-09-13T00:06:04.036374145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cwzm,Uid:90f480a3-a7fa-441c-bdb4-90ae331c9ea7,Namespace:kube-system,Attempt:0,} returns sandbox id \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\""
Sep 13 00:06:04.038882 containerd[1617]: time="2025-09-13T00:06:04.038861711Z" level=info msg="CreateContainer within sandbox \"6ae3a44fdbf2ddb2aae629eb4aacf4cd582ce491f8438eaa37f00e920d4fc770\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:06:04.055613 containerd[1617]: time="2025-09-13T00:06:04.055582366Z" level=info msg="CreateContainer within sandbox \"6ae3a44fdbf2ddb2aae629eb4aacf4cd582ce491f8438eaa37f00e920d4fc770\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc4af20d9dfa41e0d903d73e2df8d5b857b4a9ae8778587fb955d543e145130e\""
Sep 13 00:06:04.056268 containerd[1617]: time="2025-09-13T00:06:04.056247519Z" level=info msg="StartContainer for \"bc4af20d9dfa41e0d903d73e2df8d5b857b4a9ae8778587fb955d543e145130e\""
Sep 13 00:06:04.133633 containerd[1617]: time="2025-09-13T00:06:04.133555259Z" level=info msg="StartContainer for \"bc4af20d9dfa41e0d903d73e2df8d5b857b4a9ae8778587fb955d543e145130e\" returns successfully"
Sep 13 00:06:04.173856 kubelet[2719]: I0913 00:06:04.173437 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f9sr7" podStartSLOduration=2.173409772 podStartE2EDuration="2.173409772s" podCreationTimestamp="2025-09-13 00:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:04.156971268 +0000 UTC m=+6.173777199" watchObservedRunningTime="2025-09-13 00:06:04.173409772 +0000 UTC m=+6.190215683"
Sep 13 00:06:04.446615 update_engine[1589]: I20250913 00:06:04.446409 1589 update_attempter.cc:509] Updating boot flags...
Sep 13 00:06:04.539048 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2967)
Sep 13 00:06:04.621998 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2969)
Sep 13 00:06:05.125610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195083469.mount: Deactivated successfully.
Sep 13 00:06:05.798148 containerd[1617]: time="2025-09-13T00:06:05.798104383Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:05.799050 containerd[1617]: time="2025-09-13T00:06:05.798901022Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 13 00:06:05.800362 containerd[1617]: time="2025-09-13T00:06:05.799967597Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:05.801611 containerd[1617]: time="2025-09-13T00:06:05.801584243Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.209874275s"
Sep 13 00:06:05.801667 containerd[1617]: time="2025-09-13T00:06:05.801612749Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 13 00:06:05.806552 containerd[1617]: time="2025-09-13T00:06:05.806135787Z" level=info msg="CreateContainer within sandbox \"e6bfb8bd84ee935498f3e7534b2d25754f5d5dae675545db3b5cdb9d520137b1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 00:06:05.807918 containerd[1617]: time="2025-09-13T00:06:05.807879124Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 00:06:05.817290 containerd[1617]: time="2025-09-13T00:06:05.817263160Z" level=info msg="CreateContainer within sandbox \"e6bfb8bd84ee935498f3e7534b2d25754f5d5dae675545db3b5cdb9d520137b1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\""
Sep 13 00:06:05.819976 containerd[1617]: time="2025-09-13T00:06:05.818584851Z" level=info msg="StartContainer for \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\""
Sep 13 00:06:05.858724 containerd[1617]: time="2025-09-13T00:06:05.858685280Z" level=info msg="StartContainer for \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\" returns successfully"
Sep 13 00:06:10.146569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3357462468.mount: Deactivated successfully.
Sep 13 00:06:11.503567 containerd[1617]: time="2025-09-13T00:06:11.495567818Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:11.509228 containerd[1617]: time="2025-09-13T00:06:11.509177049Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 13 00:06:11.518047 containerd[1617]: time="2025-09-13T00:06:11.517992832Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:11.520522 containerd[1617]: time="2025-09-13T00:06:11.519700072Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.711795078s"
Sep 13 00:06:11.520522 containerd[1617]: time="2025-09-13T00:06:11.519735591Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 13 00:06:11.523411 containerd[1617]: time="2025-09-13T00:06:11.523382054Z" level=info msg="CreateContainer within sandbox \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:06:11.580425 containerd[1617]: time="2025-09-13T00:06:11.580354925Z" level=info msg="CreateContainer within sandbox \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57\""
Sep 13 00:06:11.582139 containerd[1617]: time="2025-09-13T00:06:11.582115329Z" level=info msg="StartContainer for \"e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57\""
Sep 13 00:06:11.780862 containerd[1617]: time="2025-09-13T00:06:11.780599624Z" level=info msg="StartContainer for \"e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57\" returns successfully"
Sep 13 00:06:11.844219 containerd[1617]: time="2025-09-13T00:06:11.835715944Z" level=info msg="shim disconnected" id=e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57 namespace=k8s.io
Sep 13 00:06:11.844219 containerd[1617]: time="2025-09-13T00:06:11.844211741Z" level=warning msg="cleaning up after shim disconnected" id=e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57 namespace=k8s.io
Sep 13 00:06:11.844406 containerd[1617]: time="2025-09-13T00:06:11.844225518Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:06:12.203087 containerd[1617]: time="2025-09-13T00:06:12.202815513Z" level=info msg="CreateContainer within sandbox \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:06:12.214720 containerd[1617]: time="2025-09-13T00:06:12.214642867Z" level=info msg="CreateContainer within sandbox \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd\""
Sep 13 00:06:12.215393 containerd[1617]: time="2025-09-13T00:06:12.215189755Z" level=info msg="StartContainer for \"bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd\""
Sep 13 00:06:12.237154 kubelet[2719]: I0913 00:06:12.234032 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-n7ff2" podStartSLOduration=7.022612698 podStartE2EDuration="9.234015101s" podCreationTimestamp="2025-09-13 00:06:03 +0000 UTC" firstStartedPulling="2025-09-13 00:06:03.59089352 +0000 UTC m=+5.607699431" lastFinishedPulling="2025-09-13 00:06:05.802295954 +0000 UTC m=+7.819101834" observedRunningTime="2025-09-13 00:06:06.229786544 +0000 UTC m=+8.246592435" watchObservedRunningTime="2025-09-13 00:06:12.234015101 +0000 UTC m=+14.250820982"
Sep 13 00:06:12.269694 containerd[1617]: time="2025-09-13T00:06:12.269277753Z" level=info msg="StartContainer for \"bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd\" returns successfully"
Sep 13 00:06:12.280674 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:06:12.281622 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:06:12.281687 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:06:12.287096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:06:12.304639 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:06:12.315850 containerd[1617]: time="2025-09-13T00:06:12.315776292Z" level=info msg="shim disconnected" id=bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd namespace=k8s.io
Sep 13 00:06:12.315850 containerd[1617]: time="2025-09-13T00:06:12.315841398Z" level=warning msg="cleaning up after shim disconnected" id=bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd namespace=k8s.io
Sep 13 00:06:12.315850 containerd[1617]: time="2025-09-13T00:06:12.315849996Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:06:12.570515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57-rootfs.mount: Deactivated successfully.
Sep 13 00:06:13.206187 containerd[1617]: time="2025-09-13T00:06:13.206135319Z" level=info msg="CreateContainer within sandbox \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:06:13.225874 containerd[1617]: time="2025-09-13T00:06:13.225827774Z" level=info msg="CreateContainer within sandbox \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706\"" Sep 13 00:06:13.226388 containerd[1617]: time="2025-09-13T00:06:13.226370492Z" level=info msg="StartContainer for \"d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706\"" Sep 13 00:06:13.272106 containerd[1617]: time="2025-09-13T00:06:13.271698546Z" level=info msg="StartContainer for \"d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706\" returns successfully" Sep 13 00:06:13.294870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706-rootfs.mount: Deactivated successfully. 
Sep 13 00:06:13.300021 containerd[1617]: time="2025-09-13T00:06:13.299952499Z" level=info msg="shim disconnected" id=d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706 namespace=k8s.io Sep 13 00:06:13.300390 containerd[1617]: time="2025-09-13T00:06:13.300199000Z" level=warning msg="cleaning up after shim disconnected" id=d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706 namespace=k8s.io Sep 13 00:06:13.300390 containerd[1617]: time="2025-09-13T00:06:13.300229268Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:14.211712 containerd[1617]: time="2025-09-13T00:06:14.211412439Z" level=info msg="CreateContainer within sandbox \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:06:14.233763 containerd[1617]: time="2025-09-13T00:06:14.233701054Z" level=info msg="CreateContainer within sandbox \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931\"" Sep 13 00:06:14.235019 containerd[1617]: time="2025-09-13T00:06:14.234554907Z" level=info msg="StartContainer for \"76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931\"" Sep 13 00:06:14.265485 systemd[1]: run-containerd-runc-k8s.io-76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931-runc.dyiTWK.mount: Deactivated successfully. 
Sep 13 00:06:14.300973 containerd[1617]: time="2025-09-13T00:06:14.300485865Z" level=info msg="StartContainer for \"76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931\" returns successfully" Sep 13 00:06:14.327639 containerd[1617]: time="2025-09-13T00:06:14.327437956Z" level=info msg="shim disconnected" id=76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931 namespace=k8s.io Sep 13 00:06:14.327639 containerd[1617]: time="2025-09-13T00:06:14.327485740Z" level=warning msg="cleaning up after shim disconnected" id=76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931 namespace=k8s.io Sep 13 00:06:14.327639 containerd[1617]: time="2025-09-13T00:06:14.327493615Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:15.219692 containerd[1617]: time="2025-09-13T00:06:15.219656243Z" level=info msg="CreateContainer within sandbox \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:06:15.229921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931-rootfs.mount: Deactivated successfully. 
Sep 13 00:06:15.251112 containerd[1617]: time="2025-09-13T00:06:15.251083553Z" level=info msg="CreateContainer within sandbox \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\"" Sep 13 00:06:15.252775 containerd[1617]: time="2025-09-13T00:06:15.252713371Z" level=info msg="StartContainer for \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\"" Sep 13 00:06:15.350696 containerd[1617]: time="2025-09-13T00:06:15.350625944Z" level=info msg="StartContainer for \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\" returns successfully" Sep 13 00:06:15.479797 kubelet[2719]: I0913 00:06:15.479663 2719 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:06:15.530350 kubelet[2719]: I0913 00:06:15.529462 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e00af175-d629-4f19-935c-9e7d379a301a-config-volume\") pod \"coredns-7c65d6cfc9-4g8jt\" (UID: \"e00af175-d629-4f19-935c-9e7d379a301a\") " pod="kube-system/coredns-7c65d6cfc9-4g8jt" Sep 13 00:06:15.531469 kubelet[2719]: I0913 00:06:15.530490 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ql6d\" (UniqueName: \"kubernetes.io/projected/e00af175-d629-4f19-935c-9e7d379a301a-kube-api-access-5ql6d\") pod \"coredns-7c65d6cfc9-4g8jt\" (UID: \"e00af175-d629-4f19-935c-9e7d379a301a\") " pod="kube-system/coredns-7c65d6cfc9-4g8jt" Sep 13 00:06:15.631664 kubelet[2719]: I0913 00:06:15.631058 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fzdq\" (UniqueName: \"kubernetes.io/projected/0f5c4485-08ca-429c-bd86-07af35e8324c-kube-api-access-5fzdq\") pod 
\"coredns-7c65d6cfc9-5cl8v\" (UID: \"0f5c4485-08ca-429c-bd86-07af35e8324c\") " pod="kube-system/coredns-7c65d6cfc9-5cl8v" Sep 13 00:06:15.631983 kubelet[2719]: I0913 00:06:15.631865 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f5c4485-08ca-429c-bd86-07af35e8324c-config-volume\") pod \"coredns-7c65d6cfc9-5cl8v\" (UID: \"0f5c4485-08ca-429c-bd86-07af35e8324c\") " pod="kube-system/coredns-7c65d6cfc9-5cl8v" Sep 13 00:06:15.824038 containerd[1617]: time="2025-09-13T00:06:15.823673769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4g8jt,Uid:e00af175-d629-4f19-935c-9e7d379a301a,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:15.830038 containerd[1617]: time="2025-09-13T00:06:15.830001436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5cl8v,Uid:0f5c4485-08ca-429c-bd86-07af35e8324c,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:16.232264 systemd[1]: run-containerd-runc-k8s.io-fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3-runc.Lwrrz2.mount: Deactivated successfully. 
Sep 13 00:06:16.241928 kubelet[2719]: I0913 00:06:16.241873 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2cwzm" podStartSLOduration=6.759081963 podStartE2EDuration="14.241856965s" podCreationTimestamp="2025-09-13 00:06:02 +0000 UTC" firstStartedPulling="2025-09-13 00:06:04.037847624 +0000 UTC m=+6.054653506" lastFinishedPulling="2025-09-13 00:06:11.520622617 +0000 UTC m=+13.537428508" observedRunningTime="2025-09-13 00:06:16.240791769 +0000 UTC m=+18.257597650" watchObservedRunningTime="2025-09-13 00:06:16.241856965 +0000 UTC m=+18.258662847" Sep 13 00:06:17.373661 systemd-networkd[1257]: cilium_host: Link UP Sep 13 00:06:17.373816 systemd-networkd[1257]: cilium_net: Link UP Sep 13 00:06:17.373974 systemd-networkd[1257]: cilium_net: Gained carrier Sep 13 00:06:17.374125 systemd-networkd[1257]: cilium_host: Gained carrier Sep 13 00:06:17.455123 systemd-networkd[1257]: cilium_vxlan: Link UP Sep 13 00:06:17.455609 systemd-networkd[1257]: cilium_vxlan: Gained carrier Sep 13 00:06:17.736617 kernel: NET: Registered PF_ALG protocol family Sep 13 00:06:17.761591 systemd-networkd[1257]: cilium_net: Gained IPv6LL Sep 13 00:06:17.777483 systemd-networkd[1257]: cilium_host: Gained IPv6LL Sep 13 00:06:18.310761 systemd-networkd[1257]: lxc_health: Link UP Sep 13 00:06:18.318465 systemd-networkd[1257]: lxc_health: Gained carrier Sep 13 00:06:18.729737 systemd-networkd[1257]: cilium_vxlan: Gained IPv6LL Sep 13 00:06:18.903981 systemd-networkd[1257]: lxc6f00f1e4ec5a: Link UP Sep 13 00:06:18.910378 kernel: eth0: renamed from tmpeb856 Sep 13 00:06:18.913950 systemd-networkd[1257]: lxc6f00f1e4ec5a: Gained carrier Sep 13 00:06:18.922307 systemd-networkd[1257]: lxc8f451fe4c080: Link UP Sep 13 00:06:18.928354 kernel: eth0: renamed from tmpdff70 Sep 13 00:06:18.934845 systemd-networkd[1257]: lxc8f451fe4c080: Gained carrier Sep 13 00:06:19.820132 systemd-networkd[1257]: lxc_health: Gained IPv6LL Sep 13 00:06:20.073490 
systemd-networkd[1257]: lxc6f00f1e4ec5a: Gained IPv6LL Sep 13 00:06:20.393801 systemd-networkd[1257]: lxc8f451fe4c080: Gained IPv6LL Sep 13 00:06:21.977049 containerd[1617]: time="2025-09-13T00:06:21.976875520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:21.977426 containerd[1617]: time="2025-09-13T00:06:21.977123387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:21.977426 containerd[1617]: time="2025-09-13T00:06:21.977141342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:21.979039 containerd[1617]: time="2025-09-13T00:06:21.977508110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:21.994344 containerd[1617]: time="2025-09-13T00:06:21.992416642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:21.994344 containerd[1617]: time="2025-09-13T00:06:21.992471437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:21.994344 containerd[1617]: time="2025-09-13T00:06:21.992489132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:21.994344 containerd[1617]: time="2025-09-13T00:06:21.992570428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:22.061466 containerd[1617]: time="2025-09-13T00:06:22.061268479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4g8jt,Uid:e00af175-d629-4f19-935c-9e7d379a301a,Namespace:kube-system,Attempt:0,} returns sandbox id \"dff70e845471103de63f6b99e9f5fcab8de3db8275aa3e914d12b6cda9320df8\"" Sep 13 00:06:22.067661 containerd[1617]: time="2025-09-13T00:06:22.067633487Z" level=info msg="CreateContainer within sandbox \"dff70e845471103de63f6b99e9f5fcab8de3db8275aa3e914d12b6cda9320df8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:06:22.100886 containerd[1617]: time="2025-09-13T00:06:22.099666916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5cl8v,Uid:0f5c4485-08ca-429c-bd86-07af35e8324c,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb856e9a008e0ff6153cc31102878867f8998128bccae9c1109bddb7fc41dee4\"" Sep 13 00:06:22.100678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1099757057.mount: Deactivated successfully. 
Sep 13 00:06:22.104311 containerd[1617]: time="2025-09-13T00:06:22.103340856Z" level=info msg="CreateContainer within sandbox \"eb856e9a008e0ff6153cc31102878867f8998128bccae9c1109bddb7fc41dee4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:06:22.104311 containerd[1617]: time="2025-09-13T00:06:22.104120981Z" level=info msg="CreateContainer within sandbox \"dff70e845471103de63f6b99e9f5fcab8de3db8275aa3e914d12b6cda9320df8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eece96eee5b4def2dd35604a7c7e47c7892cb10bfd33ac30c537cfee60e0b910\"" Sep 13 00:06:22.106099 containerd[1617]: time="2025-09-13T00:06:22.105769187Z" level=info msg="StartContainer for \"eece96eee5b4def2dd35604a7c7e47c7892cb10bfd33ac30c537cfee60e0b910\"" Sep 13 00:06:22.122611 containerd[1617]: time="2025-09-13T00:06:22.122587462Z" level=info msg="CreateContainer within sandbox \"eb856e9a008e0ff6153cc31102878867f8998128bccae9c1109bddb7fc41dee4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"797ad2b32a1a38a09039b546330f77dadfe3689176841c071ed84af7f530df10\"" Sep 13 00:06:22.125193 containerd[1617]: time="2025-09-13T00:06:22.125062802Z" level=info msg="StartContainer for \"797ad2b32a1a38a09039b546330f77dadfe3689176841c071ed84af7f530df10\"" Sep 13 00:06:22.182250 containerd[1617]: time="2025-09-13T00:06:22.182213541Z" level=info msg="StartContainer for \"797ad2b32a1a38a09039b546330f77dadfe3689176841c071ed84af7f530df10\" returns successfully" Sep 13 00:06:22.183851 containerd[1617]: time="2025-09-13T00:06:22.182380513Z" level=info msg="StartContainer for \"eece96eee5b4def2dd35604a7c7e47c7892cb10bfd33ac30c537cfee60e0b910\" returns successfully" Sep 13 00:06:22.259092 kubelet[2719]: I0913 00:06:22.258910 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5cl8v" podStartSLOduration=19.258889633 podStartE2EDuration="19.258889633s" podCreationTimestamp="2025-09-13 00:06:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:22.258545709 +0000 UTC m=+24.275351611" watchObservedRunningTime="2025-09-13 00:06:22.258889633 +0000 UTC m=+24.275695524" Sep 13 00:06:22.275668 kubelet[2719]: I0913 00:06:22.275594 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4g8jt" podStartSLOduration=19.27557471 podStartE2EDuration="19.27557471s" podCreationTimestamp="2025-09-13 00:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:22.274976577 +0000 UTC m=+24.291782479" watchObservedRunningTime="2025-09-13 00:06:22.27557471 +0000 UTC m=+24.292380611" Sep 13 00:06:28.102182 kubelet[2719]: I0913 00:06:28.102152 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:21.682813 systemd[1]: Started sshd@7-37.27.206.127:22-147.75.109.163:38390.service - OpenSSH per-connection server daemon (147.75.109.163:38390). Sep 13 00:08:22.691301 sshd[4112]: Accepted publickey for core from 147.75.109.163 port 38390 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:22.693777 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:22.701047 systemd-logind[1583]: New session 8 of user core. Sep 13 00:08:22.712940 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:08:23.917026 sshd[4112]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:23.921162 systemd[1]: sshd@7-37.27.206.127:22-147.75.109.163:38390.service: Deactivated successfully. Sep 13 00:08:23.923886 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:08:23.925455 systemd-logind[1583]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:08:23.926763 systemd-logind[1583]: Removed session 8. 
Sep 13 00:08:29.080556 systemd[1]: Started sshd@8-37.27.206.127:22-147.75.109.163:38400.service - OpenSSH per-connection server daemon (147.75.109.163:38400). Sep 13 00:08:30.045967 sshd[4126]: Accepted publickey for core from 147.75.109.163 port 38400 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:30.048003 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:30.055400 systemd-logind[1583]: New session 9 of user core. Sep 13 00:08:30.061771 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:08:30.792581 sshd[4126]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:30.800383 systemd[1]: sshd@8-37.27.206.127:22-147.75.109.163:38400.service: Deactivated successfully. Sep 13 00:08:30.809586 systemd-logind[1583]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:08:30.810893 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:08:30.814696 systemd-logind[1583]: Removed session 9. Sep 13 00:08:35.958940 systemd[1]: Started sshd@9-37.27.206.127:22-147.75.109.163:59410.service - OpenSSH per-connection server daemon (147.75.109.163:59410). Sep 13 00:08:36.923597 sshd[4143]: Accepted publickey for core from 147.75.109.163 port 59410 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:36.924870 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:36.929264 systemd-logind[1583]: New session 10 of user core. Sep 13 00:08:36.936720 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:08:37.676287 sshd[4143]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:37.680277 systemd[1]: sshd@9-37.27.206.127:22-147.75.109.163:59410.service: Deactivated successfully. Sep 13 00:08:37.685471 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:08:37.687618 systemd-logind[1583]: Session 10 logged out. Waiting for processes to exit. 
Sep 13 00:08:37.689870 systemd-logind[1583]: Removed session 10. Sep 13 00:08:37.844439 systemd[1]: Started sshd@10-37.27.206.127:22-147.75.109.163:59414.service - OpenSSH per-connection server daemon (147.75.109.163:59414). Sep 13 00:08:38.827209 sshd[4158]: Accepted publickey for core from 147.75.109.163 port 59414 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:38.828888 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:38.834166 systemd-logind[1583]: New session 11 of user core. Sep 13 00:08:38.839731 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:08:39.625509 sshd[4158]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:39.629772 systemd[1]: sshd@10-37.27.206.127:22-147.75.109.163:59414.service: Deactivated successfully. Sep 13 00:08:39.637026 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:08:39.638682 systemd-logind[1583]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:08:39.640304 systemd-logind[1583]: Removed session 11. Sep 13 00:08:39.829634 systemd[1]: Started sshd@11-37.27.206.127:22-147.75.109.163:59418.service - OpenSSH per-connection server daemon (147.75.109.163:59418). Sep 13 00:08:40.918795 sshd[4170]: Accepted publickey for core from 147.75.109.163 port 59418 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:40.920484 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:40.926197 systemd-logind[1583]: New session 12 of user core. Sep 13 00:08:40.928652 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:08:41.732467 sshd[4170]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:41.735225 systemd[1]: sshd@11-37.27.206.127:22-147.75.109.163:59418.service: Deactivated successfully. Sep 13 00:08:41.738662 systemd[1]: session-12.scope: Deactivated successfully. 
Sep 13 00:08:41.739700 systemd-logind[1583]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:08:41.740991 systemd-logind[1583]: Removed session 12. Sep 13 00:08:46.877802 systemd[1]: Started sshd@12-37.27.206.127:22-147.75.109.163:53036.service - OpenSSH per-connection server daemon (147.75.109.163:53036). Sep 13 00:08:47.844560 sshd[4183]: Accepted publickey for core from 147.75.109.163 port 53036 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:47.845897 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:47.849987 systemd-logind[1583]: New session 13 of user core. Sep 13 00:08:47.853571 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:08:48.577010 sshd[4183]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:48.581078 systemd-logind[1583]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:08:48.582045 systemd[1]: sshd@12-37.27.206.127:22-147.75.109.163:53036.service: Deactivated successfully. Sep 13 00:08:48.585653 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:08:48.587077 systemd-logind[1583]: Removed session 13. Sep 13 00:08:48.738567 systemd[1]: Started sshd@13-37.27.206.127:22-147.75.109.163:53052.service - OpenSSH per-connection server daemon (147.75.109.163:53052). Sep 13 00:08:49.702175 sshd[4198]: Accepted publickey for core from 147.75.109.163 port 53052 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:49.703873 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:49.708202 systemd-logind[1583]: New session 14 of user core. Sep 13 00:08:49.711623 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:08:50.695255 sshd[4198]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:50.703440 systemd[1]: sshd@13-37.27.206.127:22-147.75.109.163:53052.service: Deactivated successfully. 
Sep 13 00:08:50.704407 systemd-logind[1583]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:08:50.708476 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:08:50.710494 systemd-logind[1583]: Removed session 14. Sep 13 00:08:50.859991 systemd[1]: Started sshd@14-37.27.206.127:22-147.75.109.163:34662.service - OpenSSH per-connection server daemon (147.75.109.163:34662). Sep 13 00:08:51.849951 sshd[4210]: Accepted publickey for core from 147.75.109.163 port 34662 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:51.852182 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:51.860179 systemd-logind[1583]: New session 15 of user core. Sep 13 00:08:51.867072 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:08:53.865554 sshd[4210]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:53.869433 systemd[1]: sshd@14-37.27.206.127:22-147.75.109.163:34662.service: Deactivated successfully. Sep 13 00:08:53.874554 systemd-logind[1583]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:08:53.875971 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:08:53.877398 systemd-logind[1583]: Removed session 15. Sep 13 00:08:54.061665 systemd[1]: Started sshd@15-37.27.206.127:22-147.75.109.163:34664.service - OpenSSH per-connection server daemon (147.75.109.163:34664). Sep 13 00:08:55.130248 sshd[4229]: Accepted publickey for core from 147.75.109.163 port 34664 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:55.131769 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:55.136586 systemd-logind[1583]: New session 16 of user core. Sep 13 00:08:55.146544 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 13 00:08:56.097494 sshd[4229]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:56.100615 systemd[1]: sshd@15-37.27.206.127:22-147.75.109.163:34664.service: Deactivated successfully. Sep 13 00:08:56.105767 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:08:56.105997 systemd-logind[1583]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:08:56.107853 systemd-logind[1583]: Removed session 16. Sep 13 00:08:56.244543 systemd[1]: Started sshd@16-37.27.206.127:22-147.75.109.163:34678.service - OpenSSH per-connection server daemon (147.75.109.163:34678). Sep 13 00:08:57.205684 sshd[4241]: Accepted publickey for core from 147.75.109.163 port 34678 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:57.207660 sshd[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:57.215311 systemd-logind[1583]: New session 17 of user core. Sep 13 00:08:57.225816 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:08:57.944021 sshd[4241]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:57.947584 systemd[1]: sshd@16-37.27.206.127:22-147.75.109.163:34678.service: Deactivated successfully. Sep 13 00:08:57.955694 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:08:57.956500 systemd-logind[1583]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:08:57.958733 systemd-logind[1583]: Removed session 17. Sep 13 00:09:03.109655 systemd[1]: Started sshd@17-37.27.206.127:22-147.75.109.163:44554.service - OpenSSH per-connection server daemon (147.75.109.163:44554). Sep 13 00:09:04.079914 sshd[4259]: Accepted publickey for core from 147.75.109.163 port 44554 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:04.082103 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:04.089593 systemd-logind[1583]: New session 18 of user core. 
Sep 13 00:09:04.098729 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:09:04.810090 sshd[4259]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:04.812954 systemd[1]: sshd@17-37.27.206.127:22-147.75.109.163:44554.service: Deactivated successfully. Sep 13 00:09:04.815940 systemd-logind[1583]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:09:04.816424 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:09:04.818018 systemd-logind[1583]: Removed session 18. Sep 13 00:09:09.978569 systemd[1]: Started sshd@18-37.27.206.127:22-147.75.109.163:44570.service - OpenSSH per-connection server daemon (147.75.109.163:44570). Sep 13 00:09:10.942360 sshd[4276]: Accepted publickey for core from 147.75.109.163 port 44570 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:10.943610 sshd[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:10.947799 systemd-logind[1583]: New session 19 of user core. Sep 13 00:09:10.949540 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:09:11.672195 sshd[4276]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:11.675403 systemd[1]: sshd@18-37.27.206.127:22-147.75.109.163:44570.service: Deactivated successfully. Sep 13 00:09:11.678523 systemd-logind[1583]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:09:11.679922 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:09:11.680973 systemd-logind[1583]: Removed session 19. Sep 13 00:09:11.837625 systemd[1]: Started sshd@19-37.27.206.127:22-147.75.109.163:45562.service - OpenSSH per-connection server daemon (147.75.109.163:45562). 
Sep 13 00:09:12.804699 sshd[4290]: Accepted publickey for core from 147.75.109.163 port 45562 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:12.806446 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:12.812089 systemd-logind[1583]: New session 20 of user core. Sep 13 00:09:12.821719 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:09:14.720393 systemd[1]: run-containerd-runc-k8s.io-fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3-runc.clfHd1.mount: Deactivated successfully. Sep 13 00:09:14.748001 containerd[1617]: time="2025-09-13T00:09:14.747956352Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:09:14.777697 containerd[1617]: time="2025-09-13T00:09:14.777625932Z" level=info msg="StopContainer for \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\" with timeout 30 (s)" Sep 13 00:09:14.778137 containerd[1617]: time="2025-09-13T00:09:14.778086683Z" level=info msg="Stop container \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\" with signal terminated" Sep 13 00:09:14.782024 containerd[1617]: time="2025-09-13T00:09:14.781885424Z" level=info msg="StopContainer for \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\" with timeout 2 (s)" Sep 13 00:09:14.782237 containerd[1617]: time="2025-09-13T00:09:14.782223037Z" level=info msg="Stop container \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\" with signal terminated" Sep 13 00:09:14.788467 systemd-networkd[1257]: lxc_health: Link DOWN Sep 13 00:09:14.788473 systemd-networkd[1257]: lxc_health: Lost carrier Sep 13 00:09:14.819948 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a-rootfs.mount: Deactivated successfully. Sep 13 00:09:14.824620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3-rootfs.mount: Deactivated successfully. Sep 13 00:09:14.836486 containerd[1617]: time="2025-09-13T00:09:14.836381788Z" level=info msg="shim disconnected" id=411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a namespace=k8s.io Sep 13 00:09:14.836486 containerd[1617]: time="2025-09-13T00:09:14.836429475Z" level=warning msg="cleaning up after shim disconnected" id=411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a namespace=k8s.io Sep 13 00:09:14.836486 containerd[1617]: time="2025-09-13T00:09:14.836437119Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:14.836756 containerd[1617]: time="2025-09-13T00:09:14.836638511Z" level=info msg="shim disconnected" id=fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3 namespace=k8s.io Sep 13 00:09:14.836756 containerd[1617]: time="2025-09-13T00:09:14.836660030Z" level=warning msg="cleaning up after shim disconnected" id=fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3 namespace=k8s.io Sep 13 00:09:14.836756 containerd[1617]: time="2025-09-13T00:09:14.836668015Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:14.853567 containerd[1617]: time="2025-09-13T00:09:14.853498448Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:09:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:09:14.854066 containerd[1617]: time="2025-09-13T00:09:14.853923474Z" level=info msg="StopContainer for \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\" returns successfully" Sep 13 00:09:14.854565 
containerd[1617]: time="2025-09-13T00:09:14.854536074Z" level=info msg="StopPodSandbox for \"e6bfb8bd84ee935498f3e7534b2d25754f5d5dae675545db3b5cdb9d520137b1\"" Sep 13 00:09:14.854609 containerd[1617]: time="2025-09-13T00:09:14.854564727Z" level=info msg="Container to stop \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:09:14.856224 containerd[1617]: time="2025-09-13T00:09:14.855593777Z" level=info msg="StopContainer for \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\" returns successfully" Sep 13 00:09:14.857071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6bfb8bd84ee935498f3e7534b2d25754f5d5dae675545db3b5cdb9d520137b1-shm.mount: Deactivated successfully. Sep 13 00:09:14.857461 containerd[1617]: time="2025-09-13T00:09:14.857139599Z" level=info msg="StopPodSandbox for \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\"" Sep 13 00:09:14.857461 containerd[1617]: time="2025-09-13T00:09:14.857160678Z" level=info msg="Container to stop \"e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:09:14.857461 containerd[1617]: time="2025-09-13T00:09:14.857169685Z" level=info msg="Container to stop \"d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:09:14.857461 containerd[1617]: time="2025-09-13T00:09:14.857177228Z" level=info msg="Container to stop \"76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:09:14.857461 containerd[1617]: time="2025-09-13T00:09:14.857184152Z" level=info msg="Container to stop \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Sep 13 00:09:14.857461 containerd[1617]: time="2025-09-13T00:09:14.857191035Z" level=info msg="Container to stop \"bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:09:14.887187 containerd[1617]: time="2025-09-13T00:09:14.887045837Z" level=info msg="shim disconnected" id=01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5 namespace=k8s.io Sep 13 00:09:14.887187 containerd[1617]: time="2025-09-13T00:09:14.887088715Z" level=warning msg="cleaning up after shim disconnected" id=01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5 namespace=k8s.io Sep 13 00:09:14.887187 containerd[1617]: time="2025-09-13T00:09:14.887096039Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:14.887644 containerd[1617]: time="2025-09-13T00:09:14.887617181Z" level=info msg="shim disconnected" id=e6bfb8bd84ee935498f3e7534b2d25754f5d5dae675545db3b5cdb9d520137b1 namespace=k8s.io Sep 13 00:09:14.887787 containerd[1617]: time="2025-09-13T00:09:14.887691668Z" level=warning msg="cleaning up after shim disconnected" id=e6bfb8bd84ee935498f3e7534b2d25754f5d5dae675545db3b5cdb9d520137b1 namespace=k8s.io Sep 13 00:09:14.887787 containerd[1617]: time="2025-09-13T00:09:14.887718028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:14.899289 containerd[1617]: time="2025-09-13T00:09:14.899268596Z" level=info msg="TearDown network for sandbox \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\" successfully" Sep 13 00:09:14.899476 containerd[1617]: time="2025-09-13T00:09:14.899377276Z" level=info msg="StopPodSandbox for \"01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5\" returns successfully" Sep 13 00:09:14.901020 containerd[1617]: time="2025-09-13T00:09:14.901003950Z" level=info msg="TearDown network for sandbox \"e6bfb8bd84ee935498f3e7534b2d25754f5d5dae675545db3b5cdb9d520137b1\" 
successfully" Sep 13 00:09:14.901157 containerd[1617]: time="2025-09-13T00:09:14.901021311Z" level=info msg="StopPodSandbox for \"e6bfb8bd84ee935498f3e7534b2d25754f5d5dae675545db3b5cdb9d520137b1\" returns successfully" Sep 13 00:09:15.059664 kubelet[2719]: I0913 00:09:15.058809 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cilium-config-path\") pod \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.059664 kubelet[2719]: I0913 00:09:15.058850 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-xtables-lock\") pod \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.059664 kubelet[2719]: I0913 00:09:15.058869 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-host-proc-sys-net\") pod \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.059664 kubelet[2719]: I0913 00:09:15.058884 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-bpf-maps\") pod \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.059664 kubelet[2719]: I0913 00:09:15.058899 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cilium-cgroup\") pod \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.059664 
kubelet[2719]: I0913 00:09:15.058912 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-etc-cni-netd\") pod \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.060192 kubelet[2719]: I0913 00:09:15.058929 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b45cc\" (UniqueName: \"kubernetes.io/projected/33b1d426-f591-4dbd-99f3-8414ffcf6d2c-kube-api-access-b45cc\") pod \"33b1d426-f591-4dbd-99f3-8414ffcf6d2c\" (UID: \"33b1d426-f591-4dbd-99f3-8414ffcf6d2c\") " Sep 13 00:09:15.060192 kubelet[2719]: I0913 00:09:15.058943 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-hostproc\") pod \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.060192 kubelet[2719]: I0913 00:09:15.058957 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cni-path\") pod \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.060192 kubelet[2719]: I0913 00:09:15.058971 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-clustermesh-secrets\") pod \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.060192 kubelet[2719]: I0913 00:09:15.058983 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cilium-run\") pod 
\"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.060192 kubelet[2719]: I0913 00:09:15.058997 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33b1d426-f591-4dbd-99f3-8414ffcf6d2c-cilium-config-path\") pod \"33b1d426-f591-4dbd-99f3-8414ffcf6d2c\" (UID: \"33b1d426-f591-4dbd-99f3-8414ffcf6d2c\") " Sep 13 00:09:15.060346 kubelet[2719]: I0913 00:09:15.059013 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-lib-modules\") pod \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.060346 kubelet[2719]: I0913 00:09:15.059026 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-host-proc-sys-kernel\") pod \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.060346 kubelet[2719]: I0913 00:09:15.059042 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfj6h\" (UniqueName: \"kubernetes.io/projected/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-kube-api-access-rfj6h\") pod \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.060346 kubelet[2719]: I0913 00:09:15.059054 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-hubble-tls\") pod \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\" (UID: \"90f480a3-a7fa-441c-bdb4-90ae331c9ea7\") " Sep 13 00:09:15.072092 kubelet[2719]: I0913 00:09:15.070175 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-hostproc" (OuterVolumeSpecName: "hostproc") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:09:15.083752 kubelet[2719]: I0913 00:09:15.082401 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:09:15.083752 kubelet[2719]: I0913 00:09:15.082481 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:09:15.083752 kubelet[2719]: I0913 00:09:15.082502 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:09:15.083752 kubelet[2719]: I0913 00:09:15.082519 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:09:15.083752 kubelet[2719]: I0913 00:09:15.082532 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:09:15.083935 kubelet[2719]: I0913 00:09:15.082544 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:09:15.083935 kubelet[2719]: I0913 00:09:15.083475 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cni-path" (OuterVolumeSpecName: "cni-path") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:09:15.087391 kubelet[2719]: I0913 00:09:15.086432 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:09:15.087391 kubelet[2719]: I0913 00:09:15.086477 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:09:15.087391 kubelet[2719]: I0913 00:09:15.086511 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b1d426-f591-4dbd-99f3-8414ffcf6d2c-kube-api-access-b45cc" (OuterVolumeSpecName: "kube-api-access-b45cc") pod "33b1d426-f591-4dbd-99f3-8414ffcf6d2c" (UID: "33b1d426-f591-4dbd-99f3-8414ffcf6d2c"). InnerVolumeSpecName "kube-api-access-b45cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:09:15.088063 kubelet[2719]: I0913 00:09:15.088037 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:09:15.088641 kubelet[2719]: I0913 00:09:15.088615 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33b1d426-f591-4dbd-99f3-8414ffcf6d2c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "33b1d426-f591-4dbd-99f3-8414ffcf6d2c" (UID: "33b1d426-f591-4dbd-99f3-8414ffcf6d2c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:09:15.088769 kubelet[2719]: I0913 00:09:15.088754 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:09:15.088843 kubelet[2719]: I0913 00:09:15.088829 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:09:15.089668 kubelet[2719]: I0913 00:09:15.089631 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-kube-api-access-rfj6h" (OuterVolumeSpecName: "kube-api-access-rfj6h") pod "90f480a3-a7fa-441c-bdb4-90ae331c9ea7" (UID: "90f480a3-a7fa-441c-bdb4-90ae331c9ea7"). InnerVolumeSpecName "kube-api-access-rfj6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:09:15.160135 kubelet[2719]: I0913 00:09:15.160081 2719 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-hostproc\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160135 kubelet[2719]: I0913 00:09:15.160115 2719 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cni-path\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160135 kubelet[2719]: I0913 00:09:15.160125 2719 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-clustermesh-secrets\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160135 kubelet[2719]: I0913 00:09:15.160136 2719 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cilium-run\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160135 kubelet[2719]: I0913 00:09:15.160145 2719 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33b1d426-f591-4dbd-99f3-8414ffcf6d2c-cilium-config-path\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160135 kubelet[2719]: I0913 00:09:15.160153 2719 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-lib-modules\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160472 kubelet[2719]: I0913 00:09:15.160161 2719 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-host-proc-sys-kernel\") on 
node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160472 kubelet[2719]: I0913 00:09:15.160170 2719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfj6h\" (UniqueName: \"kubernetes.io/projected/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-kube-api-access-rfj6h\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160472 kubelet[2719]: I0913 00:09:15.160179 2719 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-hubble-tls\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160472 kubelet[2719]: I0913 00:09:15.160189 2719 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cilium-config-path\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160472 kubelet[2719]: I0913 00:09:15.160197 2719 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-xtables-lock\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160472 kubelet[2719]: I0913 00:09:15.160204 2719 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-host-proc-sys-net\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160472 kubelet[2719]: I0913 00:09:15.160224 2719 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-bpf-maps\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160472 kubelet[2719]: I0913 00:09:15.160233 2719 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-cilium-cgroup\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160799 kubelet[2719]: I0913 00:09:15.160241 2719 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90f480a3-a7fa-441c-bdb4-90ae331c9ea7-etc-cni-netd\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.160799 kubelet[2719]: I0913 00:09:15.160249 2719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b45cc\" (UniqueName: \"kubernetes.io/projected/33b1d426-f591-4dbd-99f3-8414ffcf6d2c-kube-api-access-b45cc\") on node \"ci-4081-3-5-n-bd9936ab3a\" DevicePath \"\"" Sep 13 00:09:15.642408 kubelet[2719]: I0913 00:09:15.642358 2719 scope.go:117] "RemoveContainer" containerID="fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3" Sep 13 00:09:15.662216 containerd[1617]: time="2025-09-13T00:09:15.662152948Z" level=info msg="RemoveContainer for \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\"" Sep 13 00:09:15.673040 containerd[1617]: time="2025-09-13T00:09:15.672982723Z" level=info msg="RemoveContainer for \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\" returns successfully" Sep 13 00:09:15.674116 kubelet[2719]: I0913 00:09:15.673678 2719 scope.go:117] "RemoveContainer" containerID="76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931" Sep 13 00:09:15.675781 containerd[1617]: time="2025-09-13T00:09:15.675723544Z" level=info msg="RemoveContainer for \"76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931\"" Sep 13 00:09:15.695127 containerd[1617]: time="2025-09-13T00:09:15.695077116Z" level=info msg="RemoveContainer for \"76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931\" returns successfully" Sep 13 00:09:15.695636 kubelet[2719]: I0913 00:09:15.695396 2719 scope.go:117] "RemoveContainer" 
containerID="d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706" Sep 13 00:09:15.698380 containerd[1617]: time="2025-09-13T00:09:15.698348785Z" level=info msg="RemoveContainer for \"d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706\"" Sep 13 00:09:15.702764 containerd[1617]: time="2025-09-13T00:09:15.702704548Z" level=info msg="RemoveContainer for \"d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706\" returns successfully" Sep 13 00:09:15.702901 kubelet[2719]: I0913 00:09:15.702870 2719 scope.go:117] "RemoveContainer" containerID="bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd" Sep 13 00:09:15.703943 containerd[1617]: time="2025-09-13T00:09:15.703890177Z" level=info msg="RemoveContainer for \"bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd\"" Sep 13 00:09:15.706909 containerd[1617]: time="2025-09-13T00:09:15.706866462Z" level=info msg="RemoveContainer for \"bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd\" returns successfully" Sep 13 00:09:15.707109 kubelet[2719]: I0913 00:09:15.707056 2719 scope.go:117] "RemoveContainer" containerID="e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57" Sep 13 00:09:15.707885 containerd[1617]: time="2025-09-13T00:09:15.707847053Z" level=info msg="RemoveContainer for \"e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57\"" Sep 13 00:09:15.710994 containerd[1617]: time="2025-09-13T00:09:15.710903776Z" level=info msg="RemoveContainer for \"e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57\" returns successfully" Sep 13 00:09:15.711190 kubelet[2719]: I0913 00:09:15.711017 2719 scope.go:117] "RemoveContainer" containerID="fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3" Sep 13 00:09:15.713965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5-rootfs.mount: Deactivated successfully. 
Sep 13 00:09:15.714104 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01b07bf1d34ff5b23d791128c55a917496ddf590d203912793b233c02e32e9e5-shm.mount: Deactivated successfully. Sep 13 00:09:15.714189 systemd[1]: var-lib-kubelet-pods-90f480a3\x2da7fa\x2d441c\x2dbdb4\x2d90ae331c9ea7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drfj6h.mount: Deactivated successfully. Sep 13 00:09:15.714268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6bfb8bd84ee935498f3e7534b2d25754f5d5dae675545db3b5cdb9d520137b1-rootfs.mount: Deactivated successfully. Sep 13 00:09:15.714984 systemd[1]: var-lib-kubelet-pods-33b1d426\x2df591\x2d4dbd\x2d99f3\x2d8414ffcf6d2c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db45cc.mount: Deactivated successfully. Sep 13 00:09:15.715073 systemd[1]: var-lib-kubelet-pods-90f480a3\x2da7fa\x2d441c\x2dbdb4\x2d90ae331c9ea7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:09:15.715146 systemd[1]: var-lib-kubelet-pods-90f480a3\x2da7fa\x2d441c\x2dbdb4\x2d90ae331c9ea7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 13 00:09:15.726127 containerd[1617]: time="2025-09-13T00:09:15.720225947Z" level=error msg="ContainerStatus for \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\": not found" Sep 13 00:09:15.732203 kubelet[2719]: E0913 00:09:15.732140 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\": not found" containerID="fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3" Sep 13 00:09:15.748251 kubelet[2719]: I0913 00:09:15.732184 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3"} err="failed to get container status \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\": rpc error: code = NotFound desc = an error occurred when try to find container \"fee1dd499b41d08eba795a16caf0effb00a2dc9bb9658ce33a8efecbca8b6ef3\": not found" Sep 13 00:09:15.748251 kubelet[2719]: I0913 00:09:15.748228 2719 scope.go:117] "RemoveContainer" containerID="76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931" Sep 13 00:09:15.748990 kubelet[2719]: E0913 00:09:15.748874 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931\": not found" containerID="76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931" Sep 13 00:09:15.748990 kubelet[2719]: I0913 00:09:15.748894 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931"} 
err="failed to get container status \"76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931\": rpc error: code = NotFound desc = an error occurred when try to find container \"76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931\": not found" Sep 13 00:09:15.748990 kubelet[2719]: I0913 00:09:15.748926 2719 scope.go:117] "RemoveContainer" containerID="d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706" Sep 13 00:09:15.749115 containerd[1617]: time="2025-09-13T00:09:15.748636380Z" level=error msg="ContainerStatus for \"76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76617086b2d9d0e2eb69b9110e36eed5c956f6bfd198b681658b29e7a1849931\": not found" Sep 13 00:09:15.749115 containerd[1617]: time="2025-09-13T00:09:15.749077425Z" level=error msg="ContainerStatus for \"d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706\": not found" Sep 13 00:09:15.749658 containerd[1617]: time="2025-09-13T00:09:15.749428533Z" level=error msg="ContainerStatus for \"bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd\": not found" Sep 13 00:09:15.749725 kubelet[2719]: E0913 00:09:15.749173 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706\": not found" containerID="d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706" Sep 13 00:09:15.749725 kubelet[2719]: I0913 00:09:15.749189 2719 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706"} err="failed to get container status \"d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8de2b6295b33aa0ccea1061271ad594d29538d168504213bdc807841df87706\": not found" Sep 13 00:09:15.749725 kubelet[2719]: I0913 00:09:15.749200 2719 scope.go:117] "RemoveContainer" containerID="bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd" Sep 13 00:09:15.749725 kubelet[2719]: E0913 00:09:15.749547 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd\": not found" containerID="bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd" Sep 13 00:09:15.749725 kubelet[2719]: I0913 00:09:15.749579 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd"} err="failed to get container status \"bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdd921405a522846d3f7e2061270bebb58b074eacaf5e50f4499a7dc4c48b9fd\": not found" Sep 13 00:09:15.749725 kubelet[2719]: I0913 00:09:15.749592 2719 scope.go:117] "RemoveContainer" containerID="e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57" Sep 13 00:09:15.749938 kubelet[2719]: E0913 00:09:15.749805 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57\": not found" containerID="e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57" Sep 13 00:09:15.749938 
kubelet[2719]: I0913 00:09:15.749820 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57"} err="failed to get container status \"e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57\": rpc error: code = NotFound desc = an error occurred when try to find container \"e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57\": not found" Sep 13 00:09:15.749938 kubelet[2719]: I0913 00:09:15.749830 2719 scope.go:117] "RemoveContainer" containerID="411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a" Sep 13 00:09:15.750549 containerd[1617]: time="2025-09-13T00:09:15.749733456Z" level=error msg="ContainerStatus for \"e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e16b1f57360fbdbe84e7f6aabf7468fb930af317fbccc1f14ffcdc32f1289e57\": not found" Sep 13 00:09:15.751021 containerd[1617]: time="2025-09-13T00:09:15.750964408Z" level=info msg="RemoveContainer for \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\"" Sep 13 00:09:15.755440 containerd[1617]: time="2025-09-13T00:09:15.755389188Z" level=info msg="RemoveContainer for \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\" returns successfully" Sep 13 00:09:15.755699 kubelet[2719]: I0913 00:09:15.755631 2719 scope.go:117] "RemoveContainer" containerID="411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a" Sep 13 00:09:15.755892 containerd[1617]: time="2025-09-13T00:09:15.755842545Z" level=error msg="ContainerStatus for \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\": not found" Sep 13 00:09:15.756011 kubelet[2719]: E0913 00:09:15.755938 
2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\": not found" containerID="411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a" Sep 13 00:09:15.756011 kubelet[2719]: I0913 00:09:15.755953 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a"} err="failed to get container status \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\": rpc error: code = NotFound desc = an error occurred when try to find container \"411c191c25c296873a0d50e46776a8ef81d85202d99a4a6e5802442d8f253d4a\": not found" Sep 13 00:09:16.102373 kubelet[2719]: I0913 00:09:16.102308 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b1d426-f591-4dbd-99f3-8414ffcf6d2c" path="/var/lib/kubelet/pods/33b1d426-f591-4dbd-99f3-8414ffcf6d2c/volumes" Sep 13 00:09:16.102965 kubelet[2719]: I0913 00:09:16.102935 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f480a3-a7fa-441c-bdb4-90ae331c9ea7" path="/var/lib/kubelet/pods/90f480a3-a7fa-441c-bdb4-90ae331c9ea7/volumes" Sep 13 00:09:16.803607 sshd[4290]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:16.809070 systemd-logind[1583]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:09:16.810241 systemd[1]: sshd@19-37.27.206.127:22-147.75.109.163:45562.service: Deactivated successfully. Sep 13 00:09:16.814888 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:09:16.818311 systemd-logind[1583]: Removed session 20. Sep 13 00:09:16.967726 systemd[1]: Started sshd@20-37.27.206.127:22-147.75.109.163:45578.service - OpenSSH per-connection server daemon (147.75.109.163:45578). 
Sep 13 00:09:17.959849 sshd[4460]: Accepted publickey for core from 147.75.109.163 port 45578 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:17.961964 sshd[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:17.969805 systemd-logind[1583]: New session 21 of user core. Sep 13 00:09:17.980738 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:09:18.210737 kubelet[2719]: E0913 00:09:18.200266 2719 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:09:18.894716 kubelet[2719]: E0913 00:09:18.894678 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="33b1d426-f591-4dbd-99f3-8414ffcf6d2c" containerName="cilium-operator" Sep 13 00:09:18.894716 kubelet[2719]: E0913 00:09:18.894708 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="90f480a3-a7fa-441c-bdb4-90ae331c9ea7" containerName="clean-cilium-state" Sep 13 00:09:18.894716 kubelet[2719]: E0913 00:09:18.894715 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="90f480a3-a7fa-441c-bdb4-90ae331c9ea7" containerName="cilium-agent" Sep 13 00:09:18.894716 kubelet[2719]: E0913 00:09:18.894720 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="90f480a3-a7fa-441c-bdb4-90ae331c9ea7" containerName="apply-sysctl-overwrites" Sep 13 00:09:18.894716 kubelet[2719]: E0913 00:09:18.894724 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="90f480a3-a7fa-441c-bdb4-90ae331c9ea7" containerName="mount-bpf-fs" Sep 13 00:09:18.894716 kubelet[2719]: E0913 00:09:18.894730 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="90f480a3-a7fa-441c-bdb4-90ae331c9ea7" containerName="mount-cgroup" Sep 13 00:09:18.894934 kubelet[2719]: I0913 00:09:18.894751 2719 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="33b1d426-f591-4dbd-99f3-8414ffcf6d2c" containerName="cilium-operator" Sep 13 00:09:18.894934 kubelet[2719]: I0913 00:09:18.894760 2719 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f480a3-a7fa-441c-bdb4-90ae331c9ea7" containerName="cilium-agent" Sep 13 00:09:19.085265 kubelet[2719]: I0913 00:09:19.085021 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a712ced-9a7f-435e-98ba-20a0201b71bb-etc-cni-netd\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085265 kubelet[2719]: I0913 00:09:19.085060 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a712ced-9a7f-435e-98ba-20a0201b71bb-xtables-lock\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085265 kubelet[2719]: I0913 00:09:19.085079 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a712ced-9a7f-435e-98ba-20a0201b71bb-cilium-run\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085265 kubelet[2719]: I0913 00:09:19.085094 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a712ced-9a7f-435e-98ba-20a0201b71bb-cilium-cgroup\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085265 kubelet[2719]: I0913 00:09:19.085105 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7a712ced-9a7f-435e-98ba-20a0201b71bb-lib-modules\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085265 kubelet[2719]: I0913 00:09:19.085117 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a712ced-9a7f-435e-98ba-20a0201b71bb-cilium-config-path\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085526 kubelet[2719]: I0913 00:09:19.085129 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7a712ced-9a7f-435e-98ba-20a0201b71bb-cilium-ipsec-secrets\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085526 kubelet[2719]: I0913 00:09:19.085141 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a712ced-9a7f-435e-98ba-20a0201b71bb-host-proc-sys-net\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085526 kubelet[2719]: I0913 00:09:19.085154 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a712ced-9a7f-435e-98ba-20a0201b71bb-hubble-tls\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085526 kubelet[2719]: I0913 00:09:19.085166 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq4f2\" (UniqueName: \"kubernetes.io/projected/7a712ced-9a7f-435e-98ba-20a0201b71bb-kube-api-access-xq4f2\") pod 
\"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085526 kubelet[2719]: I0913 00:09:19.085179 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a712ced-9a7f-435e-98ba-20a0201b71bb-bpf-maps\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085627 kubelet[2719]: I0913 00:09:19.085190 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a712ced-9a7f-435e-98ba-20a0201b71bb-clustermesh-secrets\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085627 kubelet[2719]: I0913 00:09:19.085201 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a712ced-9a7f-435e-98ba-20a0201b71bb-hostproc\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085627 kubelet[2719]: I0913 00:09:19.085211 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a712ced-9a7f-435e-98ba-20a0201b71bb-cni-path\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.085627 kubelet[2719]: I0913 00:09:19.085222 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a712ced-9a7f-435e-98ba-20a0201b71bb-host-proc-sys-kernel\") pod \"cilium-w8zlf\" (UID: \"7a712ced-9a7f-435e-98ba-20a0201b71bb\") " pod="kube-system/cilium-w8zlf" Sep 13 00:09:19.086007 sshd[4460]: 
pam_unix(sshd:session): session closed for user core Sep 13 00:09:19.091503 systemd[1]: sshd@20-37.27.206.127:22-147.75.109.163:45578.service: Deactivated successfully. Sep 13 00:09:19.093558 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:09:19.094813 systemd-logind[1583]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:09:19.096434 systemd-logind[1583]: Removed session 21. Sep 13 00:09:19.258461 systemd[1]: Started sshd@21-37.27.206.127:22-147.75.109.163:45584.service - OpenSSH per-connection server daemon (147.75.109.163:45584). Sep 13 00:09:19.530989 containerd[1617]: time="2025-09-13T00:09:19.530826717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w8zlf,Uid:7a712ced-9a7f-435e-98ba-20a0201b71bb,Namespace:kube-system,Attempt:0,}" Sep 13 00:09:19.566391 containerd[1617]: time="2025-09-13T00:09:19.566253826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:19.566749 containerd[1617]: time="2025-09-13T00:09:19.566698758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:19.566868 containerd[1617]: time="2025-09-13T00:09:19.566837134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:19.568172 containerd[1617]: time="2025-09-13T00:09:19.568083849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:19.678306 containerd[1617]: time="2025-09-13T00:09:19.678219941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w8zlf,Uid:7a712ced-9a7f-435e-98ba-20a0201b71bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3d904f5135c49370dc4c4eb4bfa74d5213cce198c9036a4efcd277948c3baf4\"" Sep 13 00:09:19.681936 containerd[1617]: time="2025-09-13T00:09:19.681504702Z" level=info msg="CreateContainer within sandbox \"a3d904f5135c49370dc4c4eb4bfa74d5213cce198c9036a4efcd277948c3baf4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:09:19.694820 containerd[1617]: time="2025-09-13T00:09:19.694725532Z" level=info msg="CreateContainer within sandbox \"a3d904f5135c49370dc4c4eb4bfa74d5213cce198c9036a4efcd277948c3baf4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e83634c77bff45272f70d4d14f9eac505f17e188ec5309786401522129527c09\"" Sep 13 00:09:19.695711 containerd[1617]: time="2025-09-13T00:09:19.695585081Z" level=info msg="StartContainer for \"e83634c77bff45272f70d4d14f9eac505f17e188ec5309786401522129527c09\"" Sep 13 00:09:19.741929 containerd[1617]: time="2025-09-13T00:09:19.741695037Z" level=info msg="StartContainer for \"e83634c77bff45272f70d4d14f9eac505f17e188ec5309786401522129527c09\" returns successfully" Sep 13 00:09:19.787909 containerd[1617]: time="2025-09-13T00:09:19.787744001Z" level=info msg="shim disconnected" id=e83634c77bff45272f70d4d14f9eac505f17e188ec5309786401522129527c09 namespace=k8s.io Sep 13 00:09:19.788233 containerd[1617]: time="2025-09-13T00:09:19.788070213Z" level=warning msg="cleaning up after shim disconnected" id=e83634c77bff45272f70d4d14f9eac505f17e188ec5309786401522129527c09 namespace=k8s.io Sep 13 00:09:19.788233 containerd[1617]: time="2025-09-13T00:09:19.788087716Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:20.222173 sshd[4477]: Accepted publickey for core from 147.75.109.163 
port 45584 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:20.224218 sshd[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:20.231983 systemd-logind[1583]: New session 22 of user core. Sep 13 00:09:20.234715 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:09:20.669127 containerd[1617]: time="2025-09-13T00:09:20.668989469Z" level=info msg="CreateContainer within sandbox \"a3d904f5135c49370dc4c4eb4bfa74d5213cce198c9036a4efcd277948c3baf4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:09:20.689940 containerd[1617]: time="2025-09-13T00:09:20.689436739Z" level=info msg="CreateContainer within sandbox \"a3d904f5135c49370dc4c4eb4bfa74d5213cce198c9036a4efcd277948c3baf4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3d0f0bb779a467e8ba1fac26d888ea7502ad91f93bfbcbc81537bbb6961f6202\"" Sep 13 00:09:20.690537 containerd[1617]: time="2025-09-13T00:09:20.690411992Z" level=info msg="StartContainer for \"3d0f0bb779a467e8ba1fac26d888ea7502ad91f93bfbcbc81537bbb6961f6202\"" Sep 13 00:09:20.748533 containerd[1617]: time="2025-09-13T00:09:20.748489633Z" level=info msg="StartContainer for \"3d0f0bb779a467e8ba1fac26d888ea7502ad91f93bfbcbc81537bbb6961f6202\" returns successfully" Sep 13 00:09:20.776749 containerd[1617]: time="2025-09-13T00:09:20.776681040Z" level=info msg="shim disconnected" id=3d0f0bb779a467e8ba1fac26d888ea7502ad91f93bfbcbc81537bbb6961f6202 namespace=k8s.io Sep 13 00:09:20.776749 containerd[1617]: time="2025-09-13T00:09:20.776739429Z" level=warning msg="cleaning up after shim disconnected" id=3d0f0bb779a467e8ba1fac26d888ea7502ad91f93bfbcbc81537bbb6961f6202 namespace=k8s.io Sep 13 00:09:20.776749 containerd[1617]: time="2025-09-13T00:09:20.776748346Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:20.893463 sshd[4477]: pam_unix(sshd:session): session closed for user core Sep 
13 00:09:20.897219 systemd[1]: sshd@21-37.27.206.127:22-147.75.109.163:45584.service: Deactivated successfully. Sep 13 00:09:20.901493 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:09:20.902172 systemd-logind[1583]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:09:20.903371 systemd-logind[1583]: Removed session 22. Sep 13 00:09:21.056554 systemd[1]: Started sshd@22-37.27.206.127:22-147.75.109.163:33182.service - OpenSSH per-connection server daemon (147.75.109.163:33182). Sep 13 00:09:21.191263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d0f0bb779a467e8ba1fac26d888ea7502ad91f93bfbcbc81537bbb6961f6202-rootfs.mount: Deactivated successfully. Sep 13 00:09:21.673131 containerd[1617]: time="2025-09-13T00:09:21.673014752Z" level=info msg="CreateContainer within sandbox \"a3d904f5135c49370dc4c4eb4bfa74d5213cce198c9036a4efcd277948c3baf4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:09:21.688973 containerd[1617]: time="2025-09-13T00:09:21.688930801Z" level=info msg="CreateContainer within sandbox \"a3d904f5135c49370dc4c4eb4bfa74d5213cce198c9036a4efcd277948c3baf4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"89d6a5889be84f56b094ff7a89e5b464fc9805d589bc0a7bdb674c3cced35f43\"" Sep 13 00:09:21.689629 containerd[1617]: time="2025-09-13T00:09:21.689544085Z" level=info msg="StartContainer for \"89d6a5889be84f56b094ff7a89e5b464fc9805d589bc0a7bdb674c3cced35f43\"" Sep 13 00:09:21.780350 containerd[1617]: time="2025-09-13T00:09:21.779953540Z" level=info msg="StartContainer for \"89d6a5889be84f56b094ff7a89e5b464fc9805d589bc0a7bdb674c3cced35f43\" returns successfully" Sep 13 00:09:21.794145 containerd[1617]: time="2025-09-13T00:09:21.794089705Z" level=info msg="shim disconnected" id=89d6a5889be84f56b094ff7a89e5b464fc9805d589bc0a7bdb674c3cced35f43 namespace=k8s.io Sep 13 00:09:21.794145 containerd[1617]: time="2025-09-13T00:09:21.794135400Z" level=warning msg="cleaning 
up after shim disconnected" id=89d6a5889be84f56b094ff7a89e5b464fc9805d589bc0a7bdb674c3cced35f43 namespace=k8s.io Sep 13 00:09:21.794145 containerd[1617]: time="2025-09-13T00:09:21.794143025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:21.867442 kubelet[2719]: I0913 00:09:21.867380 2719 setters.go:600] "Node became not ready" node="ci-4081-3-5-n-bd9936ab3a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:09:21Z","lastTransitionTime":"2025-09-13T00:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:09:22.023889 sshd[4647]: Accepted publickey for core from 147.75.109.163 port 33182 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:22.025659 sshd[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:22.030985 systemd-logind[1583]: New session 23 of user core. Sep 13 00:09:22.038731 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:09:22.191583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89d6a5889be84f56b094ff7a89e5b464fc9805d589bc0a7bdb674c3cced35f43-rootfs.mount: Deactivated successfully. 
Sep 13 00:09:22.678747 containerd[1617]: time="2025-09-13T00:09:22.678631146Z" level=info msg="CreateContainer within sandbox \"a3d904f5135c49370dc4c4eb4bfa74d5213cce198c9036a4efcd277948c3baf4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:09:22.696939 containerd[1617]: time="2025-09-13T00:09:22.696201635Z" level=info msg="CreateContainer within sandbox \"a3d904f5135c49370dc4c4eb4bfa74d5213cce198c9036a4efcd277948c3baf4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"31f7797320b20951f521bac87e1be3fef52b05a6bda1a62019117c3c6d376c7e\"" Sep 13 00:09:22.698136 containerd[1617]: time="2025-09-13T00:09:22.697214999Z" level=info msg="StartContainer for \"31f7797320b20951f521bac87e1be3fef52b05a6bda1a62019117c3c6d376c7e\"" Sep 13 00:09:22.762596 containerd[1617]: time="2025-09-13T00:09:22.762521356Z" level=info msg="StartContainer for \"31f7797320b20951f521bac87e1be3fef52b05a6bda1a62019117c3c6d376c7e\" returns successfully" Sep 13 00:09:22.779973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31f7797320b20951f521bac87e1be3fef52b05a6bda1a62019117c3c6d376c7e-rootfs.mount: Deactivated successfully. 
Sep 13 00:09:22.790169 containerd[1617]: time="2025-09-13T00:09:22.790019413Z" level=info msg="shim disconnected" id=31f7797320b20951f521bac87e1be3fef52b05a6bda1a62019117c3c6d376c7e namespace=k8s.io Sep 13 00:09:22.790169 containerd[1617]: time="2025-09-13T00:09:22.790157137Z" level=warning msg="cleaning up after shim disconnected" id=31f7797320b20951f521bac87e1be3fef52b05a6bda1a62019117c3c6d376c7e namespace=k8s.io Sep 13 00:09:22.790383 containerd[1617]: time="2025-09-13T00:09:22.790173698Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:23.212519 kubelet[2719]: E0913 00:09:23.212457 2719 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:09:23.682349 containerd[1617]: time="2025-09-13T00:09:23.682163583Z" level=info msg="CreateContainer within sandbox \"a3d904f5135c49370dc4c4eb4bfa74d5213cce198c9036a4efcd277948c3baf4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:09:23.697002 containerd[1617]: time="2025-09-13T00:09:23.696477433Z" level=info msg="CreateContainer within sandbox \"a3d904f5135c49370dc4c4eb4bfa74d5213cce198c9036a4efcd277948c3baf4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c1b6bd938349e449b2c7c84826d40c1d0f505bed6cf384d8c67e5b3b17ca8933\"" Sep 13 00:09:23.698283 containerd[1617]: time="2025-09-13T00:09:23.698243762Z" level=info msg="StartContainer for \"c1b6bd938349e449b2c7c84826d40c1d0f505bed6cf384d8c67e5b3b17ca8933\"" Sep 13 00:09:23.751397 containerd[1617]: time="2025-09-13T00:09:23.751128996Z" level=info msg="StartContainer for \"c1b6bd938349e449b2c7c84826d40c1d0f505bed6cf384d8c67e5b3b17ca8933\" returns successfully" Sep 13 00:09:24.162833 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 13 00:09:24.697958 kubelet[2719]: I0913 00:09:24.697721 2719 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/cilium-w8zlf" podStartSLOduration=6.697702718 podStartE2EDuration="6.697702718s" podCreationTimestamp="2025-09-13 00:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:09:24.697564371 +0000 UTC m=+206.714370252" watchObservedRunningTime="2025-09-13 00:09:24.697702718 +0000 UTC m=+206.714508609" Sep 13 00:09:27.090909 systemd-networkd[1257]: lxc_health: Link UP Sep 13 00:09:27.094555 systemd-networkd[1257]: lxc_health: Gained carrier Sep 13 00:09:27.295414 systemd[1]: run-containerd-runc-k8s.io-c1b6bd938349e449b2c7c84826d40c1d0f505bed6cf384d8c67e5b3b17ca8933-runc.V4eF4R.mount: Deactivated successfully. Sep 13 00:09:27.354432 kubelet[2719]: E0913 00:09:27.354393 2719 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45102->127.0.0.1:36973: write tcp 127.0.0.1:45102->127.0.0.1:36973: write: broken pipe Sep 13 00:09:28.681461 systemd-networkd[1257]: lxc_health: Gained IPv6LL Sep 13 00:09:31.615144 systemd[1]: run-containerd-runc-k8s.io-c1b6bd938349e449b2c7c84826d40c1d0f505bed6cf384d8c67e5b3b17ca8933-runc.LO76hw.mount: Deactivated successfully. Sep 13 00:09:31.660681 kubelet[2719]: E0913 00:09:31.660643 2719 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58676->127.0.0.1:36973: write tcp 127.0.0.1:58676->127.0.0.1:36973: write: broken pipe Sep 13 00:09:33.920216 sshd[4647]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:33.922491 systemd[1]: sshd@22-37.27.206.127:22-147.75.109.163:33182.service: Deactivated successfully. Sep 13 00:09:33.926719 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:09:33.928264 systemd-logind[1583]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:09:33.929257 systemd-logind[1583]: Removed session 23. 
Sep 13 00:09:50.014998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0974bdb7a2b9c25be73b6da00688662ffb42e1add2c15394db5a34b89030c4ec-rootfs.mount: Deactivated successfully. Sep 13 00:09:50.022029 containerd[1617]: time="2025-09-13T00:09:50.021970756Z" level=info msg="shim disconnected" id=0974bdb7a2b9c25be73b6da00688662ffb42e1add2c15394db5a34b89030c4ec namespace=k8s.io Sep 13 00:09:50.022029 containerd[1617]: time="2025-09-13T00:09:50.022021451Z" level=warning msg="cleaning up after shim disconnected" id=0974bdb7a2b9c25be73b6da00688662ffb42e1add2c15394db5a34b89030c4ec namespace=k8s.io Sep 13 00:09:50.022029 containerd[1617]: time="2025-09-13T00:09:50.022030318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:50.190950 kubelet[2719]: E0913 00:09:50.190889 2719 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38812->10.0.0.2:2379: read: connection timed out" Sep 13 00:09:50.740895 kubelet[2719]: I0913 00:09:50.740809 2719 scope.go:117] "RemoveContainer" containerID="0974bdb7a2b9c25be73b6da00688662ffb42e1add2c15394db5a34b89030c4ec" Sep 13 00:09:50.747245 containerd[1617]: time="2025-09-13T00:09:50.747189555Z" level=info msg="CreateContainer within sandbox \"9cff485006ec7b20e041ae65bb72d4d7fe9c2c1613221dbebfbd58e05a9f0e87\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 13 00:09:50.766777 containerd[1617]: time="2025-09-13T00:09:50.766619214Z" level=info msg="CreateContainer within sandbox \"9cff485006ec7b20e041ae65bb72d4d7fe9c2c1613221dbebfbd58e05a9f0e87\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"62cb91cedc2342a543977309de59cdd59114642b80f55dfe7bee2183cb898d64\"" Sep 13 00:09:50.767240 containerd[1617]: time="2025-09-13T00:09:50.767153108Z" level=info msg="StartContainer for \"62cb91cedc2342a543977309de59cdd59114642b80f55dfe7bee2183cb898d64\"" Sep 13 00:09:50.873678 
containerd[1617]: time="2025-09-13T00:09:50.873631545Z" level=info msg="StartContainer for \"62cb91cedc2342a543977309de59cdd59114642b80f55dfe7bee2183cb898d64\" returns successfully" Sep 13 00:09:53.781006 kubelet[2719]: E0913 00:09:53.773620 2719 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38620->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-bd9936ab3a.1864af034fec1197 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-bd9936ab3a,UID:f65af354ae7f60e5a243e0999f4f1671,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-bd9936ab3a,},FirstTimestamp:2025-09-13 00:09:43.341969815 +0000 UTC m=+225.358775736,LastTimestamp:2025-09-13 00:09:43.341969815 +0000 UTC m=+225.358775736,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-bd9936ab3a,}"