Sep 13 00:05:05.944015 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:05:05.944034 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:05:05.944041 kernel: BIOS-provided physical RAM map:
Sep 13 00:05:05.944047 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:05:05.944051 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:05:05.944055 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:05:05.944061 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Sep 13 00:05:05.944065 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Sep 13 00:05:05.944071 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:05:05.944076 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 00:05:05.944080 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:05:05.944085 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:05:05.944089 kernel: NX (Execute Disable) protection: active
Sep 13 00:05:05.944094 kernel: APIC: Static calls initialized
Sep 13 00:05:05.944100 kernel: SMBIOS 2.8 present.
Sep 13 00:05:05.944106 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Sep 13 00:05:05.944110 kernel: Hypervisor detected: KVM
Sep 13 00:05:05.944115 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:05:05.944120 kernel: kvm-clock: using sched offset of 3038881647 cycles
Sep 13 00:05:05.944125 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:05:05.944130 kernel: tsc: Detected 2445.404 MHz processor
Sep 13 00:05:05.944136 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:05:05.944141 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:05:05.944147 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Sep 13 00:05:05.944152 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 00:05:05.944157 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:05:05.944162 kernel: Using GB pages for direct mapping
Sep 13 00:05:05.944167 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:05:05.944172 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Sep 13 00:05:05.944176 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:05.944181 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:05.944186 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:05.944192 kernel: ACPI: FACS 0x000000007CFE0000 000040
Sep 13 00:05:05.944197 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:05.944202 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:05.944207 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:05.944212 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:05.944217 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Sep 13 00:05:05.944222 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Sep 13 00:05:05.944227 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Sep 13 00:05:05.944235 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Sep 13 00:05:05.944241 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Sep 13 00:05:05.944246 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Sep 13 00:05:05.944251 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Sep 13 00:05:05.944257 kernel: No NUMA configuration found
Sep 13 00:05:05.944262 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Sep 13 00:05:05.944268 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Sep 13 00:05:05.944274 kernel: Zone ranges:
Sep 13 00:05:05.944279 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:05:05.944284 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Sep 13 00:05:05.944289 kernel: Normal empty
Sep 13 00:05:05.944295 kernel: Movable zone start for each node
Sep 13 00:05:05.944300 kernel: Early memory node ranges
Sep 13 00:05:05.944305 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:05:05.944310 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Sep 13 00:05:05.944315 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Sep 13 00:05:05.944322 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:05:05.944327 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:05:05.944332 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 00:05:05.944337 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:05:05.944342 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:05:05.944348 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:05:05.944353 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:05:05.944358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:05:05.944363 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:05:05.944370 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:05:05.944375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:05:05.944380 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:05:05.944385 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:05:05.944390 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:05:05.944396 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:05:05.944401 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 00:05:05.944406 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:05:05.944411 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:05:05.944418 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:05:05.944423 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:05:05.944428 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:05:05.944434 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:05:05.944439 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 13 00:05:05.944445 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:05:05.944450 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:05:05.944455 kernel: random: crng init done
Sep 13 00:05:05.944462 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:05:05.944467 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:05:05.944472 kernel: Fallback order for Node 0: 0
Sep 13 00:05:05.944477 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Sep 13 00:05:05.944483 kernel: Policy zone: DMA32
Sep 13 00:05:05.944488 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:05:05.944493 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved)
Sep 13 00:05:05.944499 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:05:05.944504 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:05:05.944510 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:05:05.944515 kernel: Dynamic Preempt: voluntary
Sep 13 00:05:05.944521 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:05:05.944526 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:05:05.944532 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:05:05.944537 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:05:05.944542 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:05:05.944548 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:05:05.944553 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:05:05.944558 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:05:05.945587 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:05:05.945593 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:05:05.945598 kernel: Console: colour VGA+ 80x25
Sep 13 00:05:05.945603 kernel: printk: console [tty0] enabled
Sep 13 00:05:05.945608 kernel: printk: console [ttyS0] enabled
Sep 13 00:05:05.945614 kernel: ACPI: Core revision 20230628
Sep 13 00:05:05.945619 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:05:05.945624 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:05:05.945638 kernel: x2apic enabled
Sep 13 00:05:05.945646 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:05:05.945651 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:05:05.945657 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 13 00:05:05.945662 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Sep 13 00:05:05.945667 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:05:05.945672 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:05:05.945678 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:05:05.945683 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:05:05.945694 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:05:05.945699 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:05:05.945705 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:05:05.945712 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:05:05.945717 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:05:05.945723 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:05:05.945728 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:05:05.945734 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:05:05.945739 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:05:05.945746 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:05:05.945752 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:05:05.945758 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 00:05:05.945763 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:05:05.945769 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:05:05.945775 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:05:05.945780 kernel: landlock: Up and running.
Sep 13 00:05:05.945785 kernel: SELinux: Initializing.
Sep 13 00:05:05.945792 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:05:05.945798 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:05:05.945803 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:05:05.945809 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:05:05.945815 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:05:05.945821 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:05:05.945826 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:05:05.945832 kernel: ... version: 0
Sep 13 00:05:05.945837 kernel: ... bit width: 48
Sep 13 00:05:05.945844 kernel: ... generic registers: 6
Sep 13 00:05:05.945849 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:05:05.945855 kernel: ... max period: 00007fffffffffff
Sep 13 00:05:05.945861 kernel: ... fixed-purpose events: 0
Sep 13 00:05:05.945866 kernel: ... event mask: 000000000000003f
Sep 13 00:05:05.945871 kernel: signal: max sigframe size: 1776
Sep 13 00:05:05.945877 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:05:05.945883 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:05:05.945888 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:05:05.945895 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:05:05.945901 kernel: .... node #0, CPUs: #1
Sep 13 00:05:05.945906 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:05:05.945911 kernel: smpboot: Max logical packages: 1
Sep 13 00:05:05.945917 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Sep 13 00:05:05.945923 kernel: devtmpfs: initialized
Sep 13 00:05:05.945928 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:05:05.945934 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:05:05.945940 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:05:05.945947 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:05:05.945952 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:05:05.945958 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:05:05.945963 kernel: audit: type=2000 audit(1757721905.389:1): state=initialized audit_enabled=0 res=1
Sep 13 00:05:05.945969 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:05:05.945974 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:05:05.945980 kernel: cpuidle: using governor menu
Sep 13 00:05:05.945985 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:05:05.945991 kernel: dca service started, version 1.12.1
Sep 13 00:05:05.945997 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 00:05:05.946003 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:05:05.946009 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:05:05.946014 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:05:05.946020 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:05:05.946025 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:05:05.946031 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:05:05.946037 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:05:05.946042 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:05:05.946049 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:05:05.946055 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:05:05.946060 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:05:05.946065 kernel: ACPI: Interpreter enabled
Sep 13 00:05:05.946071 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:05:05.946076 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:05:05.946082 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:05:05.946088 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:05:05.946093 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:05:05.946100 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:05:05.946212 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:05:05.946286 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:05:05.946350 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:05:05.946359 kernel: PCI host bridge to bus 0000:00
Sep 13 00:05:05.946424 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:05:05.946481 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:05:05.946540 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:05:05.946621 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Sep 13 00:05:05.946688 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:05:05.946743 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 00:05:05.946796 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:05:05.946871 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 00:05:05.946951 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Sep 13 00:05:05.947015 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Sep 13 00:05:05.947077 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Sep 13 00:05:05.947140 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Sep 13 00:05:05.947205 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Sep 13 00:05:05.947268 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:05:05.947337 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:05.947406 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Sep 13 00:05:05.947476 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:05.947538 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Sep 13 00:05:05.948421 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:05.948496 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Sep 13 00:05:05.948590 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:05.948679 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Sep 13 00:05:05.948750 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:05.948813 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Sep 13 00:05:05.948880 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:05.948942 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Sep 13 00:05:05.949009 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:05.949076 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Sep 13 00:05:05.949148 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:05.949210 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Sep 13 00:05:05.949277 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Sep 13 00:05:05.949339 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Sep 13 00:05:05.949405 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 00:05:05.949471 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:05:05.949539 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 00:05:05.949640 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Sep 13 00:05:05.949707 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Sep 13 00:05:05.949775 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 00:05:05.949837 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 13 00:05:05.949909 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Sep 13 00:05:05.949979 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Sep 13 00:05:05.950043 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 13 00:05:05.950105 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Sep 13 00:05:05.950167 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 13 00:05:05.950228 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 13 00:05:05.950288 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 13 00:05:05.950358 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 13 00:05:05.950427 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Sep 13 00:05:05.950490 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 13 00:05:05.950551 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 13 00:05:05.950732 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 00:05:05.950807 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Sep 13 00:05:05.950872 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Sep 13 00:05:05.950941 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Sep 13 00:05:05.951003 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 13 00:05:05.951062 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 13 00:05:05.951122 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 00:05:05.951190 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Sep 13 00:05:05.951253 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 13 00:05:05.951315 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 13 00:05:05.951380 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 13 00:05:05.951441 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 00:05:05.951512 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 13 00:05:05.951595 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Sep 13 00:05:05.951671 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 13 00:05:05.951733 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 13 00:05:05.951794 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 00:05:05.951862 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Sep 13 00:05:05.951932 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Sep 13 00:05:05.951994 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Sep 13 00:05:05.952055 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 13 00:05:05.952115 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Sep 13 00:05:05.952175 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 00:05:05.952183 kernel: acpiphp: Slot [0] registered
Sep 13 00:05:05.952249 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Sep 13 00:05:05.952318 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Sep 13 00:05:05.952381 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Sep 13 00:05:05.952444 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Sep 13 00:05:05.952506 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 13 00:05:05.952609 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 13 00:05:05.952687 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 00:05:05.952697 kernel: acpiphp: Slot [0-2] registered
Sep 13 00:05:05.952757 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 13 00:05:05.952822 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Sep 13 00:05:05.952882 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 00:05:05.952890 kernel: acpiphp: Slot [0-3] registered
Sep 13 00:05:05.952948 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 13 00:05:05.953007 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 13 00:05:05.953067 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 00:05:05.953075 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:05:05.953080 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:05:05.953089 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:05:05.953095 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:05:05.953100 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:05:05.953106 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:05:05.953111 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:05:05.953117 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:05:05.953122 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:05:05.953128 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:05:05.953134 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:05:05.953141 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:05:05.953146 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:05:05.953152 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:05:05.953157 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:05:05.953163 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:05:05.953168 kernel: iommu: Default domain type: Translated
Sep 13 00:05:05.953174 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:05:05.953179 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:05:05.953185 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:05:05.953191 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:05:05.953198 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Sep 13 00:05:05.953259 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:05:05.953319 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:05:05.953379 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:05:05.953388 kernel: vgaarb: loaded
Sep 13 00:05:05.953394 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:05:05.953400 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:05:05.953405 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:05:05.953413 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:05:05.953419 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:05:05.953424 kernel: pnp: PnP ACPI init
Sep 13 00:05:05.953492 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:05:05.953501 kernel: pnp: PnP ACPI: found 5 devices
Sep 13 00:05:05.953507 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:05:05.953513 kernel: NET: Registered PF_INET protocol family
Sep 13 00:05:05.953518 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:05:05.953526 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:05:05.953532 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:05:05.953538 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:05:05.953543 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 00:05:05.953549 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:05:05.953555 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:05:05.953573 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:05:05.953579 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:05:05.953585 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:05:05.953665 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 13 00:05:05.953730 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 13 00:05:05.953792 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 13 00:05:05.953854 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Sep 13 00:05:05.953915 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Sep 13 00:05:05.953975 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Sep 13 00:05:05.954035 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 13 00:05:05.954101 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 13 00:05:05.954161 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 13 00:05:05.954221 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 13 00:05:05.954281 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 13 00:05:05.954341 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 00:05:05.954401 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 13 00:05:05.954462 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 13 00:05:05.954522 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 00:05:05.954605 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 13 00:05:05.954680 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 13 00:05:05.954743 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 00:05:05.954804 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 13 00:05:05.954865 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 13 00:05:05.954925 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 00:05:05.954986 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 13 00:05:05.955052 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Sep 13 00:05:05.955125 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 00:05:05.955194 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 13 00:05:05.955256 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Sep 13 00:05:05.955317 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 13 00:05:05.955377 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 00:05:05.955438 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 13 00:05:05.955499 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Sep 13 00:05:05.955560 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Sep 13 00:05:05.956320 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 00:05:05.956422 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 13 00:05:05.956502 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Sep 13 00:05:05.956583 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 13 00:05:05.956663 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 00:05:05.956729 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:05:05.956784 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:05:05.956838 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:05:05.956890 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Sep 13 00:05:05.956943 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:05:05.956997 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 00:05:05.957065 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 13 00:05:05.957124 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 13 00:05:05.957185 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 13 00:05:05.957243 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 00:05:05.957306 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 13 00:05:05.957363 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 00:05:05.957432 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 13 00:05:05.957490 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 00:05:05.957553 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 13 00:05:05.957643 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 00:05:05.957709 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Sep 13 00:05:05.957767 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 00:05:05.957828 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Sep 13 00:05:05.957890 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Sep 13 00:05:05.959739 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 00:05:05.959810 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Sep 13 00:05:05.959869 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Sep 13 00:05:05.959925 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 00:05:05.959987 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Sep 13 00:05:05.960048 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 13 00:05:05.960103 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 00:05:05.960112 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:05:05.960119 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:05:05.960125 kernel: Initialise system trusted keyrings
Sep 13 00:05:05.960131 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:05:05.960137 kernel: Key type asymmetric registered
Sep 13 00:05:05.960143 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:05:05.960149 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:05:05.960158 kernel: io scheduler mq-deadline registered
Sep 13 00:05:05.960164 kernel: io scheduler kyber registered
Sep 13 00:05:05.960170 kernel: io scheduler bfq registered
Sep 13 00:05:05.960234 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Sep 13 00:05:05.960299 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Sep 13 00:05:05.960361 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Sep 13
00:05:05.960423 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 13 00:05:05.960485 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 13 00:05:05.961281 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 13 00:05:05.961364 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 13 00:05:05.961489 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 13 00:05:05.961559 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 13 00:05:05.961735 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 13 00:05:05.961798 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 13 00:05:05.961860 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 13 00:05:05.961921 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 13 00:05:05.961982 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 13 00:05:05.962051 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 13 00:05:05.962114 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 13 00:05:05.962123 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 13 00:05:05.962183 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Sep 13 00:05:05.962243 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Sep 13 00:05:05.962252 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:05:05.962258 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Sep 13 00:05:05.962264 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:05:05.962274 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:05:05.962280 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 13 00:05:05.962287 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 00:05:05.962293 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 00:05:05.962299 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:05:05.962365 kernel: rtc_cmos 
00:03: RTC can wake from S4 Sep 13 00:05:05.962423 kernel: rtc_cmos 00:03: registered as rtc0 Sep 13 00:05:05.962480 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T00:05:05 UTC (1757721905) Sep 13 00:05:05.962604 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 13 00:05:05.962616 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 13 00:05:05.962623 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:05:05.962639 kernel: Segment Routing with IPv6 Sep 13 00:05:05.962645 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:05:05.962651 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:05:05.962657 kernel: Key type dns_resolver registered Sep 13 00:05:05.962663 kernel: IPI shorthand broadcast: enabled Sep 13 00:05:05.962672 kernel: sched_clock: Marking stable (1260011647, 133592171)->(1402549025, -8945207) Sep 13 00:05:05.962680 kernel: registered taskstats version 1 Sep 13 00:05:05.962686 kernel: Loading compiled-in X.509 certificates Sep 13 00:05:05.962692 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6' Sep 13 00:05:05.962698 kernel: Key type .fscrypt registered Sep 13 00:05:05.962704 kernel: Key type fscrypt-provisioning registered Sep 13 00:05:05.962710 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 13 00:05:05.962716 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:05:05.962722 kernel: ima: No architecture policies found
Sep 13 00:05:05.962729 kernel: clk: Disabling unused clocks
Sep 13 00:05:05.962735 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:05:05.962741 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:05:05.962746 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:05:05.962752 kernel: Run /init as init process
Sep 13 00:05:05.962758 kernel: with arguments:
Sep 13 00:05:05.962764 kernel: /init
Sep 13 00:05:05.962770 kernel: with environment:
Sep 13 00:05:05.962776 kernel: HOME=/
Sep 13 00:05:05.962781 kernel: TERM=linux
Sep 13 00:05:05.962788 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:05:05.962796 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:05:05.962804 systemd[1]: Detected virtualization kvm.
Sep 13 00:05:05.962811 systemd[1]: Detected architecture x86-64.
Sep 13 00:05:05.962817 systemd[1]: Running in initrd.
Sep 13 00:05:05.962823 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:05:05.962829 systemd[1]: Hostname set to .
Sep 13 00:05:05.962837 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:05:05.962843 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:05:05.962849 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:05:05.962856 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:05:05.962863 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:05:05.962892 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:05:05.962900 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:05:05.962907 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:05:05.962916 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:05:05.962922 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:05:05.962929 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:05:05.962935 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:05:05.962941 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:05:05.962947 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:05:05.962954 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:05:05.962961 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:05:05.962967 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:05:05.962974 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:05:05.962980 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:05:05.962986 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:05:05.962993 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:05:05.962999 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:05:05.963005 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:05:05.963011 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:05:05.963019 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:05:05.963025 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:05:05.963032 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:05:05.963038 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:05:05.963044 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:05:05.963050 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:05:05.963070 systemd-journald[188]: Collecting audit messages is disabled.
Sep 13 00:05:05.963088 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:05.963094 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:05:05.963101 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:05:05.963107 systemd-journald[188]: Journal started
Sep 13 00:05:05.963124 systemd-journald[188]: Runtime Journal (/run/log/journal/180da8fa662c48a8ae65f72b7fda0718) is 4.8M, max 38.4M, 33.6M free.
Sep 13 00:05:05.961412 systemd-modules-load[189]: Inserted module 'overlay'
Sep 13 00:05:05.967303 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:05:05.968258 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:05:05.975710 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:05:06.013994 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:05:06.014021 kernel: Bridge firewalling registered
Sep 13 00:05:05.985993 systemd-modules-load[189]: Inserted module 'br_netfilter'
Sep 13 00:05:06.018748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:05:06.020095 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:05:06.020938 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:06.026756 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:05:06.028712 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:05:06.030655 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:05:06.031317 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:05:06.033730 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:05:06.040820 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:05:06.044714 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:05:06.047908 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:05:06.049193 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:05:06.063619 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:05:06.074449 dracut-cmdline[224]: dracut-dracut-053
Sep 13 00:05:06.076722 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:05:06.081618 systemd-resolved[218]: Positive Trust Anchors:
Sep 13 00:05:06.082207 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:05:06.082236 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:05:06.089971 systemd-resolved[218]: Defaulting to hostname 'linux'.
Sep 13 00:05:06.090721 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:05:06.091402 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:05:06.131587 kernel: SCSI subsystem initialized
Sep 13 00:05:06.138611 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:05:06.147604 kernel: iscsi: registered transport (tcp)
Sep 13 00:05:06.163901 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:05:06.163967 kernel: QLogic iSCSI HBA Driver
Sep 13 00:05:06.194371 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:05:06.200688 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:05:06.220898 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:05:06.220966 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:05:06.220979 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:05:06.259595 kernel: raid6: avx2x4 gen() 32828 MB/s
Sep 13 00:05:06.276616 kernel: raid6: avx2x2 gen() 30916 MB/s
Sep 13 00:05:06.293731 kernel: raid6: avx2x1 gen() 25942 MB/s
Sep 13 00:05:06.293809 kernel: raid6: using algorithm avx2x4 gen() 32828 MB/s
Sep 13 00:05:06.311806 kernel: raid6: .... xor() 4970 MB/s, rmw enabled
Sep 13 00:05:06.311884 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:05:06.333608 kernel: xor: automatically using best checksumming function avx
Sep 13 00:05:06.481593 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:05:06.492592 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:05:06.498734 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:05:06.511861 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Sep 13 00:05:06.515576 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:05:06.523763 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:05:06.536605 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Sep 13 00:05:06.561902 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:05:06.570740 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:05:06.612178 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:05:06.617743 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:05:06.628884 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:05:06.634371 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:05:06.636062 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:05:06.636815 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:05:06.640718 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:05:06.655929 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:05:06.687852 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:05:06.693667 kernel: scsi host0: Virtio SCSI HBA
Sep 13 00:05:06.730007 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Sep 13 00:05:06.738920 kernel: ACPI: bus type USB registered
Sep 13 00:05:06.738969 kernel: usbcore: registered new interface driver usbfs
Sep 13 00:05:06.740117 kernel: usbcore: registered new interface driver hub
Sep 13 00:05:06.741215 kernel: usbcore: registered new device driver usb
Sep 13 00:05:06.741921 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:05:06.742681 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:05:06.744002 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:05:06.745775 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:05:06.745880 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:06.748715 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:06.752614 kernel: libata version 3.00 loaded.
Sep 13 00:05:06.756274 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:06.766693 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:05:06.766738 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:05:06.788592 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 13 00:05:06.788796 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Sep 13 00:05:06.788892 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Sep 13 00:05:06.789589 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 13 00:05:06.789714 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Sep 13 00:05:06.789796 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Sep 13 00:05:06.796048 kernel: ahci 0000:00:1f.2: version 3.0
Sep 13 00:05:06.796178 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 13 00:05:06.796189 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 13 00:05:06.796275 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 13 00:05:06.796351 kernel: scsi host1: ahci
Sep 13 00:05:06.797479 kernel: hub 1-0:1.0: USB hub found
Sep 13 00:05:06.797647 kernel: hub 1-0:1.0: 4 ports detected
Sep 13 00:05:06.799307 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Sep 13 00:05:06.799418 kernel: scsi host2: ahci
Sep 13 00:05:06.799580 kernel: scsi host3: ahci
Sep 13 00:05:06.799750 kernel: hub 2-0:1.0: USB hub found
Sep 13 00:05:06.799843 kernel: hub 2-0:1.0: 4 ports detected
Sep 13 00:05:06.800585 kernel: scsi host4: ahci
Sep 13 00:05:06.801678 kernel: scsi host5: ahci
Sep 13 00:05:06.802837 kernel: scsi host6: ahci
Sep 13 00:05:06.802923 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 49
Sep 13 00:05:06.802932 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 49
Sep 13 00:05:06.802939 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 49
Sep 13 00:05:06.802946 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 49
Sep 13 00:05:06.802953 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 49
Sep 13 00:05:06.802960 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 49
Sep 13 00:05:06.859864 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:06.869730 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:05:06.879720 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:05:07.039597 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Sep 13 00:05:07.121589 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 13 00:05:07.121673 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 13 00:05:07.121685 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 13 00:05:07.121694 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Sep 13 00:05:07.121703 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 13 00:05:07.122791 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 13 00:05:07.124993 kernel: ata1.00: applying bridge limits
Sep 13 00:05:07.127771 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 13 00:05:07.128586 kernel: ata1.00: configured for UDMA/100
Sep 13 00:05:07.130949 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 00:05:07.182467 kernel: sd 0:0:0:0: Power-on or device reset occurred
Sep 13 00:05:07.186632 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Sep 13 00:05:07.194777 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 00:05:07.195043 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Sep 13 00:05:07.195226 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 13 00:05:07.195596 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:05:07.205442 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:05:07.205495 kernel: GPT:17805311 != 80003071
Sep 13 00:05:07.205513 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:05:07.209384 kernel: GPT:17805311 != 80003071
Sep 13 00:05:07.209416 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:05:07.212131 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:05:07.216647 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 13 00:05:07.244192 kernel: usbcore: registered new interface driver usbhid
Sep 13 00:05:07.244266 kernel: usbhid: USB HID core driver
Sep 13 00:05:07.251972 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 13 00:05:07.252267 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:05:07.270640 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Sep 13 00:05:07.279703 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Sep 13 00:05:07.280023 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Sep 13 00:05:07.286010 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Sep 13 00:05:07.296020 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (458)
Sep 13 00:05:07.296044 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (471)
Sep 13 00:05:07.311911 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Sep 13 00:05:07.324169 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Sep 13 00:05:07.325360 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Sep 13 00:05:07.334091 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 13 00:05:07.341750 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:05:07.350632 disk-uuid[577]: Primary Header is updated.
Sep 13 00:05:07.350632 disk-uuid[577]: Secondary Entries is updated.
Sep 13 00:05:07.350632 disk-uuid[577]: Secondary Header is updated.
Sep 13 00:05:07.359659 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:05:07.366596 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:05:07.374627 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:05:08.379658 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:05:08.380210 disk-uuid[579]: The operation has completed successfully.
Sep 13 00:05:08.452366 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:05:08.452549 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:05:08.473788 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:05:08.481121 sh[598]: Success
Sep 13 00:05:08.501668 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 13 00:05:08.559148 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:05:08.573739 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:05:08.576042 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:05:08.601638 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:05:08.601699 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:05:08.604890 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:05:08.608225 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:05:08.610773 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:05:08.624645 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 13 00:05:08.627519 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:05:08.630140 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:05:08.637848 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:05:08.641817 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:05:08.665846 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:08.665910 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:05:08.665930 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:05:08.674657 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:05:08.674709 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:05:08.692624 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:08.692729 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:05:08.702461 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:05:08.709818 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:05:08.789250 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:05:08.807798 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:05:08.825110 ignition[714]: Ignition 2.19.0
Sep 13 00:05:08.826158 ignition[714]: Stage: fetch-offline
Sep 13 00:05:08.826255 ignition[714]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:08.826273 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:05:08.826439 ignition[714]: parsed url from cmdline: ""
Sep 13 00:05:08.830447 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:05:08.826443 ignition[714]: no config URL provided
Sep 13 00:05:08.826450 ignition[714]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:05:08.826460 ignition[714]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:05:08.826466 ignition[714]: failed to fetch config: resource requires networking
Sep 13 00:05:08.826880 ignition[714]: Ignition finished successfully
Sep 13 00:05:08.835338 systemd-networkd[782]: lo: Link UP
Sep 13 00:05:08.835341 systemd-networkd[782]: lo: Gained carrier
Sep 13 00:05:08.836865 systemd-networkd[782]: Enumeration completed
Sep 13 00:05:08.837026 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:05:08.837291 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:08.837294 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:05:08.837960 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:08.837962 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:05:08.838377 systemd-networkd[782]: eth0: Link UP
Sep 13 00:05:08.838380 systemd-networkd[782]: eth0: Gained carrier
Sep 13 00:05:08.838385 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:08.839294 systemd[1]: Reached target network.target - Network.
Sep 13 00:05:08.839787 systemd-networkd[782]: eth1: Link UP
Sep 13 00:05:08.839791 systemd-networkd[782]: eth1: Gained carrier
Sep 13 00:05:08.839797 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:08.848734 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:05:08.859124 ignition[786]: Ignition 2.19.0
Sep 13 00:05:08.859134 ignition[786]: Stage: fetch
Sep 13 00:05:08.859265 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:08.859273 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:05:08.859330 ignition[786]: parsed url from cmdline: ""
Sep 13 00:05:08.859332 ignition[786]: no config URL provided
Sep 13 00:05:08.859336 ignition[786]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:05:08.859341 ignition[786]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:05:08.859355 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Sep 13 00:05:08.859466 ignition[786]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Sep 13 00:05:08.868636 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 13 00:05:08.911663 systemd-networkd[782]: eth0: DHCPv4 address 157.180.30.217/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 13 00:05:09.060536 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Sep 13 00:05:09.066419 ignition[786]: GET result: OK
Sep 13 00:05:09.067512 ignition[786]: parsing config with SHA512: 0990b7103f5dedb37db61cd5a4a74445fceb92e4d6eeda632a12e7a27b021c20f326076d287437d51ca538491e8465e91cd3e7dc7b53117442e3d6ec8145965d
Sep 13 00:05:09.073408 unknown[786]: fetched base config from "system"
Sep 13 00:05:09.074060 ignition[786]: fetch: fetch complete
Sep 13 00:05:09.073425 unknown[786]: fetched base config from "system"
Sep 13 00:05:09.074069 ignition[786]: fetch: fetch passed
Sep 13 00:05:09.073433 unknown[786]: fetched user config from "hetzner"
Sep 13 00:05:09.074159 ignition[786]: Ignition finished successfully
Sep 13 00:05:09.079293 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:05:09.087822 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:05:09.110431 ignition[793]: Ignition 2.19.0
Sep 13 00:05:09.110449 ignition[793]: Stage: kargs
Sep 13 00:05:09.110796 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:09.115085 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:05:09.110812 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:05:09.112525 ignition[793]: kargs: kargs passed
Sep 13 00:05:09.112681 ignition[793]: Ignition finished successfully
Sep 13 00:05:09.124882 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:05:09.141222 ignition[799]: Ignition 2.19.0
Sep 13 00:05:09.141243 ignition[799]: Stage: disks
Sep 13 00:05:09.141511 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:09.147932 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:05:09.141527 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:05:09.153158 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:05:09.143017 ignition[799]: disks: disks passed
Sep 13 00:05:09.154522 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:05:09.143084 ignition[799]: Ignition finished successfully
Sep 13 00:05:09.156419 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:05:09.158490 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:05:09.160551 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:05:09.168888 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:05:09.183661 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 13 00:05:09.186949 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:05:09.195694 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:05:09.270605 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:05:09.270249 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:05:09.271248 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:05:09.279709 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:05:09.282131 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:05:09.284743 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:05:09.288147 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:05:09.298393 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (815)
Sep 13 00:05:09.298420 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:09.298431 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:05:09.298441 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:05:09.298450 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:05:09.289130 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:05:09.304066 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:05:09.304290 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:05:09.307068 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:05:09.318150 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:05:09.355806 coreos-metadata[817]: Sep 13 00:05:09.355 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Sep 13 00:05:09.357192 coreos-metadata[817]: Sep 13 00:05:09.357 INFO Fetch successful
Sep 13 00:05:09.358753 coreos-metadata[817]: Sep 13 00:05:09.357 INFO wrote hostname ci-4081-3-5-n-c4418ce715 to /sysroot/etc/hostname
Sep 13 00:05:09.359417 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:05:09.361882 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:05:09.365545 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:05:09.368744 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:05:09.371700 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:05:09.442135 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:05:09.447664 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:05:09.451736 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:05:09.454822 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:09.481187 ignition[931]: INFO : Ignition 2.19.0
Sep 13 00:05:09.481187 ignition[931]: INFO : Stage: mount
Sep 13 00:05:09.481187 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:09.481187 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:05:09.485649 ignition[931]: INFO : mount: mount passed
Sep 13 00:05:09.485649 ignition[931]: INFO : Ignition finished successfully
Sep 13 00:05:09.483099 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:05:09.488706 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:05:09.494425 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:05:09.597291 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:05:09.603742 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:05:09.614614 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944)
Sep 13 00:05:09.618896 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:09.618946 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:05:09.620621 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:05:09.625885 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:05:09.625939 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:05:09.629767 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:05:09.652747 ignition[960]: INFO : Ignition 2.19.0
Sep 13 00:05:09.653605 ignition[960]: INFO : Stage: files
Sep 13 00:05:09.654316 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:09.655638 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:05:09.656626 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:05:09.657417 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:05:09.657417 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:05:09.660833 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:05:09.661640 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:05:09.662628 unknown[960]: wrote ssh authorized keys file for user: core
Sep 13 00:05:09.663418 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:05:09.664324 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 00:05:09.664324 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 13 00:05:09.861914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:05:09.963923 systemd-networkd[782]: eth1: Gained IPv6LL
Sep 13 00:05:10.283744 systemd-networkd[782]: eth0: Gained IPv6LL
Sep 13 00:05:10.300100 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 00:05:10.300100 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:05:10.300100 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 00:05:10.655097 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:05:11.101085 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:05:11.101085 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:05:11.103810 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:05:11.103810 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:05:11.103810 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:05:11.103810 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:05:11.103810 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:05:11.103810 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:05:11.103810 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:05:11.103810 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:05:11.103810 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:05:11.103810 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:05:11.103810 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:05:11.103810 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:05:11.103810 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 13 00:05:11.622483 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 00:05:12.940250 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:05:12.940250 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 00:05:12.943736 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:05:12.943736 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:05:12.943736 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 00:05:12.943736 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 13 00:05:12.943736 ignition[960]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 13 00:05:12.943736 ignition[960]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 13 00:05:12.943736 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 13 00:05:12.943736 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:05:12.943736 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:05:12.943736 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:05:12.943736 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:05:12.943736 ignition[960]: INFO : files: files passed
Sep 13 00:05:12.943736 ignition[960]: INFO : Ignition finished successfully
Sep 13 00:05:12.943549 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:05:12.951762 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:05:12.954909 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:05:12.962069 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:05:12.971017 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:05:12.971017 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:05:12.962199 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:05:12.976365 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:05:12.973025 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:05:12.974414 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:05:12.991789 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:05:13.011318 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:05:13.011477 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:05:13.013309 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:05:13.014270 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:05:13.015721 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:05:13.021743 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:05:13.034724 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:05:13.039706 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:05:13.047050 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:05:13.048196 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:05:13.049589 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:05:13.050794 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:05:13.050947 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:05:13.052350 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:05:13.053209 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:05:13.054436 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:05:13.055674 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:05:13.056863 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:05:13.058151 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:05:13.059415 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:05:13.060810 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:05:13.062045 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:05:13.063319 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:05:13.064475 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:05:13.064659 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:05:13.066013 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:05:13.066896 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:05:13.068010 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:05:13.068641 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:05:13.070054 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:05:13.070158 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:05:13.071772 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:05:13.071886 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:05:13.072719 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:05:13.072841 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:05:13.073995 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:05:13.074089 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:05:13.086013 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:05:13.086605 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:05:13.086770 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:05:13.089751 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:05:13.090300 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:05:13.090451 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:05:13.091236 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:05:13.091671 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:05:13.103402 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:05:13.103483 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:05:13.112176 ignition[1014]: INFO : Ignition 2.19.0
Sep 13 00:05:13.112176 ignition[1014]: INFO : Stage: umount
Sep 13 00:05:13.112176 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:13.112176 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:05:13.116472 ignition[1014]: INFO : umount: umount passed
Sep 13 00:05:13.116472 ignition[1014]: INFO : Ignition finished successfully
Sep 13 00:05:13.117090 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:05:13.117896 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:05:13.118440 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:05:13.120630 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:05:13.120712 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:05:13.121932 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:05:13.122009 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:05:13.123081 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:05:13.123114 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:05:13.124147 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:05:13.124177 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 00:05:13.125170 systemd[1]: Stopped target network.target - Network.
Sep 13 00:05:13.126159 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:05:13.126196 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:05:13.127276 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:05:13.128295 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:05:13.130633 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:05:13.131401 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:05:13.132387 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:05:13.133540 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:05:13.133595 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:05:13.134801 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:05:13.134827 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:05:13.135780 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:05:13.135813 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:05:13.136778 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:05:13.136809 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:05:13.137773 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:05:13.137803 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:05:13.138876 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:05:13.139896 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:05:13.142599 systemd-networkd[782]: eth0: DHCPv6 lease lost
Sep 13 00:05:13.146610 systemd-networkd[782]: eth1: DHCPv6 lease lost
Sep 13 00:05:13.147546 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:05:13.147706 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:05:13.149545 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:05:13.149718 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:05:13.152507 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:05:13.152584 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:05:13.157643 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:05:13.158076 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:05:13.158118 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:05:13.158751 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:05:13.158788 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:05:13.159643 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:05:13.159674 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:05:13.160689 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:05:13.160719 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:05:13.161916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:05:13.172861 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:05:13.172958 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:05:13.176845 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:05:13.176954 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:05:13.178284 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:05:13.178331 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:05:13.179033 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:05:13.179058 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:05:13.180034 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:05:13.180072 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:05:13.181516 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:05:13.181572 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:05:13.182622 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:05:13.182660 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:05:13.192747 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:05:13.193247 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:05:13.193297 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:05:13.193831 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:05:13.193867 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:13.198229 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:05:13.198304 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:05:13.200409 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:05:13.204792 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:05:13.213445 systemd[1]: Switching root.
Sep 13 00:05:13.239036 systemd-journald[188]: Journal stopped
Sep 13 00:05:14.059514 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:05:14.059612 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:05:14.059625 kernel: SELinux: policy capability open_perms=1
Sep 13 00:05:14.059635 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:05:14.059643 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:05:14.059652 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:05:14.059659 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:05:14.059669 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:05:14.059676 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:05:14.059687 kernel: audit: type=1403 audit(1757721913.396:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:05:14.059696 systemd[1]: Successfully loaded SELinux policy in 37.565ms.
Sep 13 00:05:14.059709 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.600ms.
Sep 13 00:05:14.059718 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:05:14.059726 systemd[1]: Detected virtualization kvm.
Sep 13 00:05:14.059734 systemd[1]: Detected architecture x86-64.
Sep 13 00:05:14.059743 systemd[1]: Detected first boot.
Sep 13 00:05:14.059752 systemd[1]: Hostname set to .
Sep 13 00:05:14.059760 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:05:14.059768 zram_generator::config[1057]: No configuration found.
Sep 13 00:05:14.059776 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:05:14.059784 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:05:14.059792 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:05:14.059800 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:05:14.059808 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:05:14.059818 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:05:14.059826 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:05:14.059833 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:05:14.059842 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:05:14.059850 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:05:14.059859 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:05:14.059867 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:05:14.059875 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:05:14.059883 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:05:14.059892 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:05:14.059900 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:05:14.059909 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:05:14.059917 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:05:14.059924 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:05:14.059932 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:05:14.059940 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:05:14.059949 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:05:14.059957 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:05:14.059965 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:05:14.059973 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:05:14.059983 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:05:14.059991 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:05:14.059999 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:05:14.060007 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:05:14.060016 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:05:14.060025 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:05:14.060034 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:05:14.060042 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:05:14.060049 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:05:14.060057 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:05:14.060065 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:05:14.060073 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:05:14.060082 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:14.060093 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:05:14.060102 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:05:14.060110 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:05:14.060118 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:05:14.060126 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:05:14.060136 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:05:14.060144 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:05:14.060152 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:05:14.060160 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:05:14.060168 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:05:14.060176 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:05:14.060184 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:05:14.060191 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:05:14.060199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:05:14.060211 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:05:14.060219 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:05:14.060228 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:05:14.060236 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:05:14.060244 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:05:14.060252 kernel: fuse: init (API version 7.39)
Sep 13 00:05:14.060261 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:05:14.060269 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:05:14.060277 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:05:14.060287 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:05:14.060294 kernel: ACPI: bus type drm_connector registered
Sep 13 00:05:14.060302 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:05:14.060310 kernel: loop: module loaded
Sep 13 00:05:14.060317 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:05:14.060325 systemd[1]: Stopped verity-setup.service.
Sep 13 00:05:14.060333 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:14.060342 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:05:14.060350 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:05:14.060358 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:05:14.060380 systemd-journald[1140]: Collecting audit messages is disabled.
Sep 13 00:05:14.060398 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:05:14.060407 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:05:14.060416 systemd-journald[1140]: Journal started
Sep 13 00:05:14.060434 systemd-journald[1140]: Runtime Journal (/run/log/journal/180da8fa662c48a8ae65f72b7fda0718) is 4.8M, max 38.4M, 33.6M free.
Sep 13 00:05:13.796026 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:05:14.061640 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:05:13.814644 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 13 00:05:13.815020 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:05:14.062815 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:05:14.063385 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:05:14.064115 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:05:14.064788 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:05:14.064900 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:05:14.065514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:05:14.065683 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:05:14.066418 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:05:14.066526 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:05:14.067341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:05:14.067583 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:05:14.068253 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:05:14.068414 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:05:14.069160 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:05:14.069306 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:05:14.070019 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:05:14.070834 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:05:14.071552 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:05:14.078372 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:05:14.083877 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:05:14.087637 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:05:14.089256 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:05:14.089283 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:05:14.090390 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:05:14.095401 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:05:14.101883 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:05:14.103293 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:05:14.109683 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:05:14.118802 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:05:14.119648 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:05:14.124712 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:05:14.126052 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:05:14.135741 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:05:14.142878 systemd-journald[1140]: Time spent on flushing to /var/log/journal/180da8fa662c48a8ae65f72b7fda0718 is 60.112ms for 1129 entries.
Sep 13 00:05:14.142878 systemd-journald[1140]: System Journal (/var/log/journal/180da8fa662c48a8ae65f72b7fda0718) is 8.0M, max 584.8M, 576.8M free.
Sep 13 00:05:14.240930 systemd-journald[1140]: Received client request to flush runtime journal.
Sep 13 00:05:14.241022 kernel: loop0: detected capacity change from 0 to 8
Sep 13 00:05:14.241043 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:05:14.241056 kernel: loop1: detected capacity change from 0 to 140768
Sep 13 00:05:14.145716 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:05:14.148128 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:05:14.153579 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:05:14.155462 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:05:14.156152 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:05:14.167803 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:05:14.178665 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:05:14.195615 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:05:14.196719 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:05:14.201990 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:05:14.213282 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:05:14.223354 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:05:14.246492 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:05:14.252860 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:05:14.260798 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:05:14.262334 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:05:14.263289 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:05:14.274774 kernel: loop2: detected capacity change from 0 to 229808
Sep 13 00:05:14.295029 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Sep 13 00:05:14.295043 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Sep 13 00:05:14.306832 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:05:14.327586 kernel: loop3: detected capacity change from 0 to 142488
Sep 13 00:05:14.380175 kernel: loop4: detected capacity change from 0 to 8
Sep 13 00:05:14.386590 kernel: loop5: detected capacity change from 0 to 140768
Sep 13 00:05:14.412621 kernel: loop6: detected capacity change from 0 to 229808
Sep 13 00:05:14.434588 kernel: loop7: detected capacity change from 0 to 142488
Sep 13 00:05:14.451440 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Sep 13 00:05:14.452072 (sd-merge)[1202]: Merged extensions into '/usr'.
Sep 13 00:05:14.457820 systemd[1]: Reloading requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:05:14.457900 systemd[1]: Reloading...
Sep 13 00:05:14.516375 zram_generator::config[1226]: No configuration found.
Sep 13 00:05:14.573030 ldconfig[1172]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:05:14.615877 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:05:14.653261 systemd[1]: Reloading finished in 195 ms.
Sep 13 00:05:14.678067 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:05:14.678790 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:05:14.679498 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:05:14.686694 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:05:14.687972 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:05:14.691676 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:05:14.696277 systemd[1]: Reloading requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:05:14.696292 systemd[1]: Reloading...
Sep 13 00:05:14.711425 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:05:14.712010 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:05:14.712635 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:05:14.712825 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Sep 13 00:05:14.712868 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Sep 13 00:05:14.714217 systemd-udevd[1274]: Using default interface naming scheme 'v255'.
Sep 13 00:05:14.717445 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:05:14.717452 systemd-tmpfiles[1273]: Skipping /boot
Sep 13 00:05:14.726521 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:05:14.726653 systemd-tmpfiles[1273]: Skipping /boot
Sep 13 00:05:14.748625 zram_generator::config[1298]: No configuration found.
Sep 13 00:05:14.839582 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 13 00:05:14.842578 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1321)
Sep 13 00:05:14.846580 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:05:14.892350 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:05:14.908603 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:05:14.926994 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 00:05:14.927203 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 13 00:05:14.927330 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 00:05:14.943640 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 13 00:05:14.959268 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 13 00:05:14.959939 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 00:05:14.960010 systemd[1]: Reloading finished in 263 ms.
Sep 13 00:05:14.968165 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:05:14.968916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:05:14.982577 kernel: EDAC MC: Ver: 3.0.0
Sep 13 00:05:14.992579 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Sep 13 00:05:14.996263 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Sep 13 00:05:14.996580 kernel: Console: switching to colour dummy device 80x25
Sep 13 00:05:14.999342 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 13 00:05:14.999408 kernel: [drm] features: -context_init
Sep 13 00:05:15.001582 kernel: [drm] number of scanouts: 1
Sep 13 00:05:15.001615 kernel: [drm] number of cap sets: 0
Sep 13 00:05:15.004602 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Sep 13 00:05:15.011579 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 13 00:05:15.016318 kernel: Console: switching to colour frame buffer device 160x50
Sep 13 00:05:15.028581 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 13 00:05:15.032471 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:15.037770 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:05:15.043702 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:05:15.044036 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:05:15.046118 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:05:15.048526 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:05:15.050744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:05:15.053771 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:05:15.053909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:05:15.055789 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:05:15.060861 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:05:15.063295 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:05:15.068246 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:05:15.077783 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:05:15.078757 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:15.079969 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:15.083201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:05:15.083388 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:05:15.085316 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:05:15.085411 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:05:15.090158 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:05:15.090250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:05:15.092174 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:05:15.096949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:05:15.097050 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:05:15.099898 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:05:15.109925 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:05:15.114266 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:05:15.114318 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:05:15.120284 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:05:15.124345 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:05:15.132289 augenrules[1420]: No rules
Sep 13 00:05:15.134577 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:05:15.143022 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:05:15.154097 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:05:15.159559 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:05:15.168762 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:05:15.170551 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:05:15.171126 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:05:15.187616 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:05:15.219785 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:05:15.220438 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:05:15.227714 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:05:15.234847 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:15.240606 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:05:15.243875 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:05:15.244873 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:05:15.261951 systemd-networkd[1395]: lo: Link UP
Sep 13 00:05:15.262171 systemd-networkd[1395]: lo: Gained carrier
Sep 13 00:05:15.263308 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:05:15.265097 systemd-timesyncd[1417]: No network connectivity, watching for changes.
Sep 13 00:05:15.265470 systemd-networkd[1395]: Enumeration completed
Sep 13 00:05:15.265901 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:15.265972 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:05:15.271309 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:05:15.271738 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:05:15.278388 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:15.278451 systemd-networkd[1395]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:05:15.278939 systemd-networkd[1395]: eth0: Link UP
Sep 13 00:05:15.278944 systemd-networkd[1395]: eth0: Gained carrier
Sep 13 00:05:15.278954 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:15.280695 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:05:15.281262 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:05:15.282492 systemd-resolved[1396]: Positive Trust Anchors:
Sep 13 00:05:15.283048 systemd-networkd[1395]: eth1: Link UP
Sep 13 00:05:15.283094 systemd-networkd[1395]: eth1: Gained carrier
Sep 13 00:05:15.283138 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:15.283670 systemd-resolved[1396]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:05:15.283702 systemd-resolved[1396]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:05:15.289117 systemd-resolved[1396]: Using system hostname 'ci-4081-3-5-n-c4418ce715'.
Sep 13 00:05:15.290917 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:05:15.291970 systemd[1]: Reached target network.target - Network.
Sep 13 00:05:15.292274 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:05:15.292596 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:05:15.292960 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:05:15.293275 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:05:15.293741 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:05:15.294117 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:05:15.294429 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:05:15.294760 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:05:15.294780 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:05:15.295083 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:05:15.301815 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:05:15.303082 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:05:15.308204 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:05:15.311373 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:05:15.311757 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:05:15.312047 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:05:15.312352 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:05:15.312369 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:05:15.313326 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:05:15.318876 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 13 00:05:15.320628 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:05:15.322665 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:05:15.323039 systemd-networkd[1395]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 13 00:05:15.327111 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection.
Sep 13 00:05:15.329679 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:05:15.339733 systemd-networkd[1395]: eth0: DHCPv4 address 157.180.30.217/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 13 00:05:15.340244 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:05:15.341051 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection.
Sep 13 00:05:15.343746 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:05:15.349262 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:05:15.352439 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Sep 13 00:05:15.362800 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:05:15.368903 jq[1455]: false
Sep 13 00:05:15.371734 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:05:15.383999 extend-filesystems[1456]: Found loop4
Sep 13 00:05:15.394074 extend-filesystems[1456]: Found loop5
Sep 13 00:05:15.394074 extend-filesystems[1456]: Found loop6
Sep 13 00:05:15.394074 extend-filesystems[1456]: Found loop7
Sep 13 00:05:15.394074 extend-filesystems[1456]: Found sda
Sep 13 00:05:15.394074 extend-filesystems[1456]: Found sda1
Sep 13 00:05:15.394074 extend-filesystems[1456]: Found sda2
Sep 13 00:05:15.394074 extend-filesystems[1456]: Found sda3
Sep 13 00:05:15.394074 extend-filesystems[1456]: Found usr
Sep 13 00:05:15.394074 extend-filesystems[1456]: Found sda4
Sep 13 00:05:15.394074 extend-filesystems[1456]: Found sda6
Sep 13 00:05:15.394074 extend-filesystems[1456]: Found sda7
Sep 13 00:05:15.394074 extend-filesystems[1456]: Found sda9
Sep 13 00:05:15.394074 extend-filesystems[1456]: Checking size of /dev/sda9
Sep 13 00:05:15.434265 coreos-metadata[1453]: Sep 13 00:05:15.392 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Sep 13 00:05:15.434265 coreos-metadata[1453]: Sep 13 00:05:15.400 INFO Fetch successful
Sep 13 00:05:15.434265 coreos-metadata[1453]: Sep 13 00:05:15.400 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Sep 13 00:05:15.434265 coreos-metadata[1453]: Sep 13 00:05:15.402 INFO Fetch successful
Sep 13 00:05:15.385696 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:05:15.434518 extend-filesystems[1456]: Resized partition /dev/sda9
Sep 13 00:05:15.386360 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:05:15.386754 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:05:15.389034 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:05:15.402445 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:05:15.440885 jq[1477]: true
Sep 13 00:05:15.431672 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:05:15.431796 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:05:15.432002 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:05:15.432103 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:05:15.444861 extend-filesystems[1484]: resize2fs 1.47.1 (20-May-2024)
Sep 13 00:05:15.444944 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:05:15.445062 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:05:15.457349 update_engine[1474]: I20250913 00:05:15.456732 1474 main.cc:92] Flatcar Update Engine starting
Sep 13 00:05:15.466734 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Sep 13 00:05:15.466775 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1317)
Sep 13 00:05:15.469129 systemd-logind[1471]: New seat seat0.
Sep 13 00:05:15.471146 systemd-logind[1471]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 13 00:05:15.472133 dbus-daemon[1454]: [system] SELinux support is enabled
Sep 13 00:05:15.472318 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:05:15.473393 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:05:15.475124 update_engine[1474]: I20250913 00:05:15.475056 1474 update_check_scheduler.cc:74] Next update check in 7m51s
Sep 13 00:05:15.488972 (ntainerd)[1490]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:05:15.489160 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:05:15.494062 jq[1489]: true
Sep 13 00:05:15.495217 tar[1486]: linux-amd64/LICENSE
Sep 13 00:05:15.499692 tar[1486]: linux-amd64/helm
Sep 13 00:05:15.505162 dbus-daemon[1454]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 13 00:05:15.506011 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:05:15.516001 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:05:15.516130 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:05:15.517353 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:05:15.517445 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:05:15.529784 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 00:05:15.574883 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 13 00:05:15.583733 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 13 00:05:15.620273 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:05:15.681171 bash[1523]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:05:15.682958 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:05:15.693575 systemd[1]: Starting sshkeys.service... Sep 13 00:05:15.715641 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:05:15.721786 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 13 00:05:15.758601 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 13 00:05:15.779336 coreos-metadata[1534]: Sep 13 00:05:15.765 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 13 00:05:15.779336 coreos-metadata[1534]: Sep 13 00:05:15.766 INFO Fetch successful Sep 13 00:05:15.781333 containerd[1490]: time="2025-09-13T00:05:15.781163596Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:05:15.782095 unknown[1534]: wrote ssh authorized keys file for user: core Sep 13 00:05:15.786571 extend-filesystems[1484]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 13 00:05:15.786571 extend-filesystems[1484]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 13 00:05:15.786571 extend-filesystems[1484]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 13 00:05:15.794267 extend-filesystems[1456]: Resized filesystem in /dev/sda9 Sep 13 00:05:15.794267 extend-filesystems[1456]: Found sr0 Sep 13 00:05:15.787042 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:05:15.787190 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 13 00:05:15.817416 update-ssh-keys[1541]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:05:15.818192 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:05:15.825298 systemd[1]: Finished sshkeys.service. Sep 13 00:05:15.842801 containerd[1490]: time="2025-09-13T00:05:15.842585253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:15.845091 containerd[1490]: time="2025-09-13T00:05:15.845066096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:05:15.845167 containerd[1490]: time="2025-09-13T00:05:15.845154011Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:05:15.845452 containerd[1490]: time="2025-09-13T00:05:15.845204195Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:05:15.845452 containerd[1490]: time="2025-09-13T00:05:15.845325663Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:05:15.845452 containerd[1490]: time="2025-09-13T00:05:15.845342675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:15.845452 containerd[1490]: time="2025-09-13T00:05:15.845394211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:05:15.845452 containerd[1490]: time="2025-09-13T00:05:15.845404982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:05:15.846138 containerd[1490]: time="2025-09-13T00:05:15.846121696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:05:15.846581 containerd[1490]: time="2025-09-13T00:05:15.846174225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:15.846581 containerd[1490]: time="2025-09-13T00:05:15.846189043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:05:15.846581 containerd[1490]: time="2025-09-13T00:05:15.846197378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:15.846581 containerd[1490]: time="2025-09-13T00:05:15.846258823Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:15.846581 containerd[1490]: time="2025-09-13T00:05:15.846415888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:15.847887 containerd[1490]: time="2025-09-13T00:05:15.847648550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:05:15.847887 containerd[1490]: time="2025-09-13T00:05:15.847665481Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Sep 13 00:05:15.847887 containerd[1490]: time="2025-09-13T00:05:15.847728109Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:05:15.847887 containerd[1490]: time="2025-09-13T00:05:15.847777702Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853401259Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853434170Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853448928Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853461051Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853472843Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853593830Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853754701Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853825875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853839020Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853854248Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853865710Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853875318Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853884075Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:05:15.854586 containerd[1490]: time="2025-09-13T00:05:15.853897550Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.853907639Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.853917708Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.853927055Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.853935852Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.853951260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.853961039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.853970607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.853980635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.853989622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.853999501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.854008317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.854017765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.854027363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.854785 containerd[1490]: time="2025-09-13T00:05:15.854038334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854046900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854056227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854065896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854076435Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854092145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854100470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854108916Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854139183Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854151646Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854159271Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854168628Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854175221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854183887Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:05:15.854964 containerd[1490]: time="2025-09-13T00:05:15.854194657Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:05:15.855137 containerd[1490]: time="2025-09-13T00:05:15.854202161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:05:15.855153 containerd[1490]: time="2025-09-13T00:05:15.854392067Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:05:15.855153 containerd[1490]: time="2025-09-13T00:05:15.854436941Z" level=info msg="Connect containerd service" Sep 13 00:05:15.855153 containerd[1490]: time="2025-09-13T00:05:15.854464352Z" level=info msg="using legacy CRI server" Sep 13 00:05:15.855153 containerd[1490]: time="2025-09-13T00:05:15.854469212Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:05:15.855153 containerd[1490]: time="2025-09-13T00:05:15.854545244Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:05:15.859759 containerd[1490]: time="2025-09-13T00:05:15.859190617Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Sep 13 00:05:15.859759 containerd[1490]: time="2025-09-13T00:05:15.859278001Z" level=info msg="Start subscribing containerd event" Sep 13 00:05:15.859759 containerd[1490]: time="2025-09-13T00:05:15.859315130Z" level=info msg="Start recovering state" Sep 13 00:05:15.859759 containerd[1490]: time="2025-09-13T00:05:15.859357249Z" level=info msg="Start event monitor" Sep 13 00:05:15.859759 containerd[1490]: time="2025-09-13T00:05:15.859366146Z" level=info msg="Start snapshots syncer" Sep 13 00:05:15.859759 containerd[1490]: time="2025-09-13T00:05:15.859372187Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:05:15.859759 containerd[1490]: time="2025-09-13T00:05:15.859377928Z" level=info msg="Start streaming server" Sep 13 00:05:15.860007 containerd[1490]: time="2025-09-13T00:05:15.859988623Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:05:15.860112 containerd[1490]: time="2025-09-13T00:05:15.860099572Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:05:15.860313 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:05:15.882307 containerd[1490]: time="2025-09-13T00:05:15.882265504Z" level=info msg="containerd successfully booted in 0.118967s" Sep 13 00:05:15.957466 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:05:15.975989 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:05:15.985745 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:05:15.992555 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:05:15.992820 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:05:16.007155 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:05:16.016999 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Sep 13 00:05:16.032765 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:05:16.034999 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:05:16.035408 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:05:16.120287 tar[1486]: linux-amd64/README.md Sep 13 00:05:16.129232 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:05:16.747823 systemd-networkd[1395]: eth1: Gained IPv6LL Sep 13 00:05:16.748390 systemd-networkd[1395]: eth0: Gained IPv6LL Sep 13 00:05:16.748552 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Sep 13 00:05:16.748836 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Sep 13 00:05:16.751735 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:05:16.753286 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:05:16.762726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:05:16.765922 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:05:16.804542 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:05:17.965882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:05:17.967525 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:05:17.976542 systemd[1]: Startup finished in 1.423s (kernel) + 7.659s (initrd) + 4.615s (userspace) = 13.698s. 
Sep 13 00:05:17.981932 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:05:18.697615 kubelet[1582]: E0913 00:05:18.697120 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:05:18.699765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:05:18.700000 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:05:18.700418 systemd[1]: kubelet.service: Consumed 1.264s CPU time. Sep 13 00:05:22.868138 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:05:22.873805 systemd[1]: Started sshd@0-157.180.30.217:22-147.75.109.163:48592.service - OpenSSH per-connection server daemon (147.75.109.163:48592). Sep 13 00:05:23.953618 sshd[1593]: Accepted publickey for core from 147.75.109.163 port 48592 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:05:23.955049 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:23.968043 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:05:23.981804 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:05:23.986504 systemd-logind[1471]: New session 1 of user core. Sep 13 00:05:23.997545 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:05:24.003922 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 13 00:05:24.018994 (systemd)[1597]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:05:24.155698 systemd[1597]: Queued start job for default target default.target. Sep 13 00:05:24.166273 systemd[1597]: Created slice app.slice - User Application Slice. Sep 13 00:05:24.166294 systemd[1597]: Reached target paths.target - Paths. Sep 13 00:05:24.166305 systemd[1597]: Reached target timers.target - Timers. Sep 13 00:05:24.167286 systemd[1597]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:05:24.176096 systemd[1597]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:05:24.176133 systemd[1597]: Reached target sockets.target - Sockets. Sep 13 00:05:24.176144 systemd[1597]: Reached target basic.target - Basic System. Sep 13 00:05:24.176171 systemd[1597]: Reached target default.target - Main User Target. Sep 13 00:05:24.176190 systemd[1597]: Startup finished in 149ms. Sep 13 00:05:24.176414 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:05:24.183685 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:05:24.904907 systemd[1]: Started sshd@1-157.180.30.217:22-147.75.109.163:48606.service - OpenSSH per-connection server daemon (147.75.109.163:48606). Sep 13 00:05:25.868993 sshd[1608]: Accepted publickey for core from 147.75.109.163 port 48606 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:05:25.870964 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:25.878705 systemd-logind[1471]: New session 2 of user core. Sep 13 00:05:25.889778 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:05:26.545900 sshd[1608]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:26.549600 systemd[1]: sshd@1-157.180.30.217:22-147.75.109.163:48606.service: Deactivated successfully. Sep 13 00:05:26.552146 systemd[1]: session-2.scope: Deactivated successfully. 
Sep 13 00:05:26.554129 systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:05:26.556069 systemd-logind[1471]: Removed session 2. Sep 13 00:05:26.722898 systemd[1]: Started sshd@2-157.180.30.217:22-147.75.109.163:48614.service - OpenSSH per-connection server daemon (147.75.109.163:48614). Sep 13 00:05:27.695670 sshd[1615]: Accepted publickey for core from 147.75.109.163 port 48614 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:05:27.698088 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:27.703220 systemd-logind[1471]: New session 3 of user core. Sep 13 00:05:27.712751 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:05:28.367603 sshd[1615]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:28.369774 systemd[1]: sshd@2-157.180.30.217:22-147.75.109.163:48614.service: Deactivated successfully. Sep 13 00:05:28.371105 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:05:28.372000 systemd-logind[1471]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:05:28.372910 systemd-logind[1471]: Removed session 3. Sep 13 00:05:28.538984 systemd[1]: Started sshd@3-157.180.30.217:22-147.75.109.163:48628.service - OpenSSH per-connection server daemon (147.75.109.163:48628). Sep 13 00:05:28.751150 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:05:28.756820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:05:28.885712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:05:28.888697 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:05:28.929277 kubelet[1631]: E0913 00:05:28.929212 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:05:28.934372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:05:28.934598 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:05:29.524010 sshd[1622]: Accepted publickey for core from 147.75.109.163 port 48628 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:05:29.525402 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:29.530391 systemd-logind[1471]: New session 4 of user core. Sep 13 00:05:29.535758 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:05:30.197162 sshd[1622]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:30.199612 systemd[1]: sshd@3-157.180.30.217:22-147.75.109.163:48628.service: Deactivated successfully. Sep 13 00:05:30.201388 systemd-logind[1471]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:05:30.202107 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:05:30.202855 systemd-logind[1471]: Removed session 4. Sep 13 00:05:30.366528 systemd[1]: Started sshd@4-157.180.30.217:22-147.75.109.163:47396.service - OpenSSH per-connection server daemon (147.75.109.163:47396). 
Sep 13 00:05:31.332648 sshd[1644]: Accepted publickey for core from 147.75.109.163 port 47396 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:05:31.333963 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:31.338311 systemd-logind[1471]: New session 5 of user core. Sep 13 00:05:31.344705 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:05:31.862239 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:05:31.863003 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:05:31.882318 sudo[1647]: pam_unix(sudo:session): session closed for user root Sep 13 00:05:32.040819 sshd[1644]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:32.045104 systemd[1]: sshd@4-157.180.30.217:22-147.75.109.163:47396.service: Deactivated successfully. Sep 13 00:05:32.047558 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:05:32.049588 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:05:32.051210 systemd-logind[1471]: Removed session 5. Sep 13 00:05:32.241988 systemd[1]: Started sshd@5-157.180.30.217:22-147.75.109.163:47404.service - OpenSSH per-connection server daemon (147.75.109.163:47404). Sep 13 00:05:33.321743 sshd[1652]: Accepted publickey for core from 147.75.109.163 port 47404 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:05:33.323465 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:33.329364 systemd-logind[1471]: New session 6 of user core. Sep 13 00:05:33.334791 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 13 00:05:33.889977 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:05:33.890246 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:05:33.893181 sudo[1656]: pam_unix(sudo:session): session closed for user root Sep 13 00:05:33.897828 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:05:33.898072 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:05:33.914954 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:05:33.915835 auditctl[1659]: No rules Sep 13 00:05:33.916223 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:05:33.916383 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:05:33.918553 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:05:33.939415 augenrules[1677]: No rules Sep 13 00:05:33.940412 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:05:33.941435 sudo[1655]: pam_unix(sudo:session): session closed for user root Sep 13 00:05:34.116195 sshd[1652]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:34.118641 systemd[1]: sshd@5-157.180.30.217:22-147.75.109.163:47404.service: Deactivated successfully. Sep 13 00:05:34.120057 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:05:34.121036 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:05:34.121983 systemd-logind[1471]: Removed session 6. Sep 13 00:05:34.298716 systemd[1]: Started sshd@6-157.180.30.217:22-147.75.109.163:47406.service - OpenSSH per-connection server daemon (147.75.109.163:47406). 
Sep 13 00:05:35.368545 sshd[1685]: Accepted publickey for core from 147.75.109.163 port 47406 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:05:35.369756 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:35.374364 systemd-logind[1471]: New session 7 of user core. Sep 13 00:05:35.383740 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:05:35.937929 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:05:35.938191 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:05:36.202889 (dockerd)[1704]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:05:36.203289 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:05:36.468281 dockerd[1704]: time="2025-09-13T00:05:36.467832043Z" level=info msg="Starting up" Sep 13 00:05:36.558665 dockerd[1704]: time="2025-09-13T00:05:36.558608245Z" level=info msg="Loading containers: start." Sep 13 00:05:36.651894 kernel: Initializing XFRM netlink socket Sep 13 00:05:36.680827 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Sep 13 00:05:37.760267 systemd-timesyncd[1417]: Contacted time server 129.70.132.36:123 (2.flatcar.pool.ntp.org). Sep 13 00:05:37.760321 systemd-timesyncd[1417]: Initial clock synchronization to Sat 2025-09-13 00:05:37.759872 UTC. Sep 13 00:05:37.760400 systemd-resolved[1396]: Clock change detected. Flushing caches. Sep 13 00:05:37.776422 systemd-networkd[1395]: docker0: Link UP Sep 13 00:05:37.796914 dockerd[1704]: time="2025-09-13T00:05:37.796856686Z" level=info msg="Loading containers: done." Sep 13 00:05:37.810798 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2678095301-merged.mount: Deactivated successfully. 
Sep 13 00:05:37.814216 dockerd[1704]: time="2025-09-13T00:05:37.814164877Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:05:37.814312 dockerd[1704]: time="2025-09-13T00:05:37.814269042Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 13 00:05:37.814411 dockerd[1704]: time="2025-09-13T00:05:37.814367276Z" level=info msg="Daemon has completed initialization"
Sep 13 00:05:37.842966 dockerd[1704]: time="2025-09-13T00:05:37.842895011Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:05:37.843509 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 13 00:05:38.969809 containerd[1490]: time="2025-09-13T00:05:38.969763935Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 13 00:05:39.482211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883407934.mount: Deactivated successfully.
Sep 13 00:05:40.046950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 00:05:40.053031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:05:40.176001 (kubelet)[1904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:05:40.176036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:05:40.206781 kubelet[1904]: E0913 00:05:40.206719 1904 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:05:40.209807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:05:40.210006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:05:41.109529 containerd[1490]: time="2025-09-13T00:05:41.109453043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:41.111020 containerd[1490]: time="2025-09-13T00:05:41.110786685Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114993"
Sep 13 00:05:41.113690 containerd[1490]: time="2025-09-13T00:05:41.112282310Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:41.115644 containerd[1490]: time="2025-09-13T00:05:41.115617024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:41.116636 containerd[1490]: time="2025-09-13T00:05:41.116604406Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.146800836s"
Sep 13 00:05:41.116705 containerd[1490]: time="2025-09-13T00:05:41.116637939Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 13 00:05:41.117189 containerd[1490]: time="2025-09-13T00:05:41.117165819Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 13 00:05:42.807263 containerd[1490]: time="2025-09-13T00:05:42.807180298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:42.808333 containerd[1490]: time="2025-09-13T00:05:42.808220339Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020866"
Sep 13 00:05:42.809438 containerd[1490]: time="2025-09-13T00:05:42.809125476Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:42.811559 containerd[1490]: time="2025-09-13T00:05:42.811529866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:42.812525 containerd[1490]: time="2025-09-13T00:05:42.812494745Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.695301125s"
Sep 13 00:05:42.812593 containerd[1490]: time="2025-09-13T00:05:42.812580075Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 13 00:05:42.813219 containerd[1490]: time="2025-09-13T00:05:42.813082237Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 13 00:05:44.111798 containerd[1490]: time="2025-09-13T00:05:44.111734744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:44.112909 containerd[1490]: time="2025-09-13T00:05:44.112702128Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155590"
Sep 13 00:05:44.114005 containerd[1490]: time="2025-09-13T00:05:44.113629156Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:44.116029 containerd[1490]: time="2025-09-13T00:05:44.116002497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:44.116919 containerd[1490]: time="2025-09-13T00:05:44.116884933Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.303607699s"
Sep 13 00:05:44.116989 containerd[1490]: time="2025-09-13T00:05:44.116976104Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 13 00:05:44.117519 containerd[1490]: time="2025-09-13T00:05:44.117491060Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 13 00:05:45.131084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3348769814.mount: Deactivated successfully.
Sep 13 00:05:45.459955 containerd[1490]: time="2025-09-13T00:05:45.459821263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:45.461139 containerd[1490]: time="2025-09-13T00:05:45.461096595Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929497"
Sep 13 00:05:45.463083 containerd[1490]: time="2025-09-13T00:05:45.462168285Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:45.464995 containerd[1490]: time="2025-09-13T00:05:45.464004238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:45.464995 containerd[1490]: time="2025-09-13T00:05:45.464586099Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.347067679s"
Sep 13 00:05:45.464995 containerd[1490]: time="2025-09-13T00:05:45.464610015Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 13 00:05:45.465293 containerd[1490]: time="2025-09-13T00:05:45.465261296Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 13 00:05:45.942024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount895825585.mount: Deactivated successfully.
Sep 13 00:05:46.744685 containerd[1490]: time="2025-09-13T00:05:46.744612412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:46.745697 containerd[1490]: time="2025-09-13T00:05:46.745491240Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332"
Sep 13 00:05:46.749707 containerd[1490]: time="2025-09-13T00:05:46.749224331Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:46.751695 containerd[1490]: time="2025-09-13T00:05:46.751556105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:46.752992 containerd[1490]: time="2025-09-13T00:05:46.752337421Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.287051077s"
Sep 13 00:05:46.752992 containerd[1490]: time="2025-09-13T00:05:46.752364792Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 13 00:05:46.752992 containerd[1490]: time="2025-09-13T00:05:46.752817000Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:05:47.203710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1089602364.mount: Deactivated successfully.
Sep 13 00:05:47.212722 containerd[1490]: time="2025-09-13T00:05:47.212600388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:47.213880 containerd[1490]: time="2025-09-13T00:05:47.213735858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Sep 13 00:05:47.216704 containerd[1490]: time="2025-09-13T00:05:47.214970603Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:47.218360 containerd[1490]: time="2025-09-13T00:05:47.218294427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:47.219629 containerd[1490]: time="2025-09-13T00:05:47.219589777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 466.745746ms"
Sep 13 00:05:47.219819 containerd[1490]: time="2025-09-13T00:05:47.219791716Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:05:47.220712 containerd[1490]: time="2025-09-13T00:05:47.220628495Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 13 00:05:47.647783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3982894870.mount: Deactivated successfully.
Sep 13 00:05:49.304238 containerd[1490]: time="2025-09-13T00:05:49.304170349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:49.305223 containerd[1490]: time="2025-09-13T00:05:49.305159985Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378491"
Sep 13 00:05:49.305692 containerd[1490]: time="2025-09-13T00:05:49.305652319Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:49.308441 containerd[1490]: time="2025-09-13T00:05:49.308395633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:05:49.309538 containerd[1490]: time="2025-09-13T00:05:49.309270875Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.088602927s"
Sep 13 00:05:49.309538 containerd[1490]: time="2025-09-13T00:05:49.309311191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 13 00:05:50.296942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 13 00:05:50.302883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:05:50.388794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:05:50.392039 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:05:50.432708 kubelet[2073]: E0913 00:05:50.432024 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:05:50.435007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:05:50.435119 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:05:52.955663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:05:52.963068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:05:52.981696 systemd[1]: Reloading requested from client PID 2088 ('systemctl') (unit session-7.scope)...
Sep 13 00:05:52.981904 systemd[1]: Reloading...
Sep 13 00:05:53.052792 zram_generator::config[2124]: No configuration found.
Sep 13 00:05:53.134038 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:05:53.191971 systemd[1]: Reloading finished in 209 ms.
Sep 13 00:05:53.225968 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 00:05:53.226041 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 00:05:53.226314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:05:53.227842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:05:53.310183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:05:53.319882 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:05:53.351353 kubelet[2181]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:05:53.351353 kubelet[2181]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:05:53.351353 kubelet[2181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:05:53.351687 kubelet[2181]: I0913 00:05:53.351387 2181 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:05:53.836155 kubelet[2181]: I0913 00:05:53.836108 2181 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 13 00:05:53.836155 kubelet[2181]: I0913 00:05:53.836137 2181 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:05:53.836374 kubelet[2181]: I0913 00:05:53.836346 2181 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 13 00:05:53.861269 kubelet[2181]: I0913 00:05:53.861224 2181 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:05:53.868377 kubelet[2181]: E0913 00:05:53.868303 2181 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://157.180.30.217:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 157.180.30.217:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 13 00:05:53.881890 kubelet[2181]: E0913 00:05:53.881831 2181 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:05:53.881890 kubelet[2181]: I0913 00:05:53.881873 2181 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:05:53.889586 kubelet[2181]: I0913 00:05:53.889569 2181 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:05:53.890757 kubelet[2181]: I0913 00:05:53.890716 2181 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:05:53.893250 kubelet[2181]: I0913 00:05:53.890741 2181 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-c4418ce715","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:05:53.893250 kubelet[2181]: I0913 00:05:53.893236 2181 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:05:53.893250 kubelet[2181]: I0913 00:05:53.893246 2181 container_manager_linux.go:303] "Creating device plugin manager"
Sep 13 00:05:53.893413 kubelet[2181]: I0913 00:05:53.893347 2181 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:05:53.895386 kubelet[2181]: I0913 00:05:53.895294 2181 kubelet.go:480] "Attempting to sync node with API server"
Sep 13 00:05:53.895386 kubelet[2181]: I0913 00:05:53.895310 2181 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:05:53.896412 kubelet[2181]: I0913 00:05:53.895877 2181 kubelet.go:386] "Adding apiserver pod source"
Sep 13 00:05:53.896412 kubelet[2181]: I0913 00:05:53.895896 2181 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:05:53.902710 kubelet[2181]: E0913 00:05:53.902659 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://157.180.30.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-c4418ce715&limit=500&resourceVersion=0\": dial tcp 157.180.30.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 00:05:53.907480 kubelet[2181]: I0913 00:05:53.907463 2181 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:05:53.908002 kubelet[2181]: I0913 00:05:53.907986 2181 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 13 00:05:53.909575 kubelet[2181]: E0913 00:05:53.909410 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://157.180.30.217:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.30.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 13 00:05:53.909792 kubelet[2181]: W0913 00:05:53.909762 2181 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:05:53.912825 kubelet[2181]: I0913 00:05:53.912762 2181 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:05:53.912933 kubelet[2181]: I0913 00:05:53.912836 2181 server.go:1289] "Started kubelet"
Sep 13 00:05:53.916475 kubelet[2181]: I0913 00:05:53.915923 2181 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:05:53.917588 kubelet[2181]: I0913 00:05:53.917168 2181 server.go:317] "Adding debug handlers to kubelet server"
Sep 13 00:05:53.920088 kubelet[2181]: I0913 00:05:53.919924 2181 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:05:53.920274 kubelet[2181]: I0913 00:05:53.920255 2181 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:05:53.921478 kubelet[2181]: I0913 00:05:53.921258 2181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:05:53.924596 kubelet[2181]: E0913 00:05:53.920511 2181 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.30.217:6443/api/v1/namespaces/default/events\": dial tcp 157.180.30.217:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-c4418ce715.1864aecde4e0ab5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-c4418ce715,UID:ci-4081-3-5-n-c4418ce715,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-c4418ce715,},FirstTimestamp:2025-09-13 00:05:53.912793949 +0000 UTC m=+0.589654603,LastTimestamp:2025-09-13 00:05:53.912793949 +0000 UTC m=+0.589654603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-c4418ce715,}"
Sep 13 00:05:53.927197 kubelet[2181]: I0913 00:05:53.927175 2181 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:05:53.927402 kubelet[2181]: I0913 00:05:53.927313 2181 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:05:53.929969 kubelet[2181]: I0913 00:05:53.929824 2181 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:05:53.930079 kubelet[2181]: E0913 00:05:53.930068 2181 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c4418ce715\" not found"
Sep 13 00:05:53.931308 kubelet[2181]: I0913 00:05:53.930943 2181 factory.go:223] Registration of the systemd container factory successfully
Sep 13 00:05:53.931308 kubelet[2181]: I0913 00:05:53.931048 2181 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:05:53.932572 kubelet[2181]: I0913 00:05:53.932560 2181 factory.go:223] Registration of the containerd container factory successfully
Sep 13 00:05:53.939189 kubelet[2181]: I0913 00:05:53.938782 2181 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:05:53.939189 kubelet[2181]: I0913 00:05:53.938937 2181 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:05:53.948863 kubelet[2181]: I0913 00:05:53.948848 2181 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:05:53.948930 kubelet[2181]: I0913 00:05:53.948923 2181 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 13 00:05:53.948982 kubelet[2181]: I0913 00:05:53.948975 2181 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 00:05:53.949019 kubelet[2181]: I0913 00:05:53.949014 2181 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 13 00:05:53.949086 kubelet[2181]: E0913 00:05:53.949073 2181 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:05:53.953433 kubelet[2181]: E0913 00:05:53.953415 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://157.180.30.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.30.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 13 00:05:53.953937 kubelet[2181]: E0913 00:05:53.953921 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://157.180.30.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.30.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 13 00:05:53.959161 kubelet[2181]: E0913 00:05:53.959138 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.30.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-c4418ce715?timeout=10s\": dial tcp 157.180.30.217:6443: connect: connection refused" interval="200ms"
Sep 13 00:05:53.960391 kubelet[2181]: I0913 00:05:53.960379 2181 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:05:53.960448 kubelet[2181]: I0913 00:05:53.960440 2181 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:05:53.960491 kubelet[2181]: I0913 00:05:53.960486 2181 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:05:53.962647 kubelet[2181]: I0913 00:05:53.962637 2181 policy_none.go:49] "None policy: Start"
Sep 13 00:05:53.962900 kubelet[2181]: I0913 00:05:53.962724 2181 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:05:53.962900 kubelet[2181]: I0913 00:05:53.962738 2181 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:05:53.967746 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 13 00:05:53.978112 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 13 00:05:53.980792 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 13 00:05:53.989298 kubelet[2181]: E0913 00:05:53.989281 2181 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 13 00:05:53.989872 kubelet[2181]: I0913 00:05:53.989491 2181 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:05:53.989872 kubelet[2181]: I0913 00:05:53.989507 2181 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:05:53.989872 kubelet[2181]: I0913 00:05:53.989712 2181 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:05:53.991290 kubelet[2181]: E0913 00:05:53.991275 2181 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 00:05:53.991421 kubelet[2181]: E0913 00:05:53.991358 2181 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-c4418ce715\" not found"
Sep 13 00:05:54.079196 systemd[1]: Created slice kubepods-burstable-podb5979239783d81365616443ba1ac8384.slice - libcontainer container kubepods-burstable-podb5979239783d81365616443ba1ac8384.slice.
Sep 13 00:05:54.085598 kubelet[2181]: E0913 00:05:54.084435 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-c4418ce715\" not found" node="ci-4081-3-5-n-c4418ce715"
Sep 13 00:05:54.090972 kubelet[2181]: I0913 00:05:54.090854 2181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-c4418ce715"
Sep 13 00:05:54.093034 kubelet[2181]: E0913 00:05:54.093003 2181 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.30.217:6443/api/v1/nodes\": dial tcp 157.180.30.217:6443: connect: connection refused" node="ci-4081-3-5-n-c4418ce715"
Sep 13 00:05:54.094272 kubelet[2181]: E0913 00:05:54.093948 2181 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.30.217:6443/api/v1/namespaces/default/events\": dial tcp 157.180.30.217:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-c4418ce715.1864aecde4e0ab5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-c4418ce715,UID:ci-4081-3-5-n-c4418ce715,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-c4418ce715,},FirstTimestamp:2025-09-13 00:05:53.912793949 +0000 UTC m=+0.589654603,LastTimestamp:2025-09-13 00:05:53.912793949 +0000 UTC m=+0.589654603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-c4418ce715,}"
Sep 13 00:05:54.098267 systemd[1]: Created slice kubepods-burstable-pod720bf32479d9328046f4eb7792df283e.slice - libcontainer container kubepods-burstable-pod720bf32479d9328046f4eb7792df283e.slice.
Sep 13 00:05:54.100261 kubelet[2181]: E0913 00:05:54.100245 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-c4418ce715\" not found" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.102045 systemd[1]: Created slice kubepods-burstable-podb6e6317966f8bc14aea33c0d915651c7.slice - libcontainer container kubepods-burstable-podb6e6317966f8bc14aea33c0d915651c7.slice. Sep 13 00:05:54.103795 kubelet[2181]: E0913 00:05:54.103780 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-c4418ce715\" not found" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.140266 kubelet[2181]: I0913 00:05:54.140241 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5979239783d81365616443ba1ac8384-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-c4418ce715\" (UID: \"b5979239783d81365616443ba1ac8384\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.140407 kubelet[2181]: I0913 00:05:54.140367 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5979239783d81365616443ba1ac8384-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-c4418ce715\" (UID: \"b5979239783d81365616443ba1ac8384\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.140407 kubelet[2181]: I0913 00:05:54.140396 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/720bf32479d9328046f4eb7792df283e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-c4418ce715\" (UID: \"720bf32479d9328046f4eb7792df283e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.140407 
kubelet[2181]: I0913 00:05:54.140412 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/720bf32479d9328046f4eb7792df283e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-c4418ce715\" (UID: \"720bf32479d9328046f4eb7792df283e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.140407 kubelet[2181]: I0913 00:05:54.140426 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/720bf32479d9328046f4eb7792df283e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-c4418ce715\" (UID: \"720bf32479d9328046f4eb7792df283e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.140407 kubelet[2181]: I0913 00:05:54.140440 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6e6317966f8bc14aea33c0d915651c7-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-c4418ce715\" (UID: \"b6e6317966f8bc14aea33c0d915651c7\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.140802 kubelet[2181]: I0913 00:05:54.140452 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5979239783d81365616443ba1ac8384-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-c4418ce715\" (UID: \"b5979239783d81365616443ba1ac8384\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.140802 kubelet[2181]: I0913 00:05:54.140467 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/720bf32479d9328046f4eb7792df283e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-c4418ce715\" (UID: 
\"720bf32479d9328046f4eb7792df283e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.140802 kubelet[2181]: I0913 00:05:54.140483 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/720bf32479d9328046f4eb7792df283e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-c4418ce715\" (UID: \"720bf32479d9328046f4eb7792df283e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.159792 kubelet[2181]: E0913 00:05:54.159750 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.30.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-c4418ce715?timeout=10s\": dial tcp 157.180.30.217:6443: connect: connection refused" interval="400ms" Sep 13 00:05:54.295177 kubelet[2181]: I0913 00:05:54.295130 2181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.295493 kubelet[2181]: E0913 00:05:54.295454 2181 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.30.217:6443/api/v1/nodes\": dial tcp 157.180.30.217:6443: connect: connection refused" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.388498 containerd[1490]: time="2025-09-13T00:05:54.388379144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-c4418ce715,Uid:b5979239783d81365616443ba1ac8384,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:54.405901 containerd[1490]: time="2025-09-13T00:05:54.405849810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-c4418ce715,Uid:b6e6317966f8bc14aea33c0d915651c7,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:54.406174 containerd[1490]: time="2025-09-13T00:05:54.406141528Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-c4418ce715,Uid:720bf32479d9328046f4eb7792df283e,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:54.560455 kubelet[2181]: E0913 00:05:54.560406 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.30.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-c4418ce715?timeout=10s\": dial tcp 157.180.30.217:6443: connect: connection refused" interval="800ms" Sep 13 00:05:54.698452 kubelet[2181]: I0913 00:05:54.698319 2181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.698662 kubelet[2181]: E0913 00:05:54.698605 2181 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.30.217:6443/api/v1/nodes\": dial tcp 157.180.30.217:6443: connect: connection refused" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:54.824691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354143706.mount: Deactivated successfully. 
Sep 13 00:05:54.830297 containerd[1490]: time="2025-09-13T00:05:54.830186438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:05:54.832036 containerd[1490]: time="2025-09-13T00:05:54.831971276Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Sep 13 00:05:54.836691 containerd[1490]: time="2025-09-13T00:05:54.834848733Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:05:54.836691 containerd[1490]: time="2025-09-13T00:05:54.836174349Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:05:54.837112 containerd[1490]: time="2025-09-13T00:05:54.837073284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:05:54.837420 containerd[1490]: time="2025-09-13T00:05:54.837400318Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:05:54.839594 containerd[1490]: time="2025-09-13T00:05:54.839571370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:05:54.840384 containerd[1490]: time="2025-09-13T00:05:54.840341815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:05:54.840653 
containerd[1490]: time="2025-09-13T00:05:54.840619295Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 452.157556ms" Sep 13 00:05:54.844713 containerd[1490]: time="2025-09-13T00:05:54.844540018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 438.347414ms" Sep 13 00:05:54.850388 containerd[1490]: time="2025-09-13T00:05:54.850341670Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 444.409495ms" Sep 13 00:05:54.970710 containerd[1490]: time="2025-09-13T00:05:54.969929768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:54.972946 containerd[1490]: time="2025-09-13T00:05:54.972907343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:54.973107 containerd[1490]: time="2025-09-13T00:05:54.973033479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:54.975926 containerd[1490]: time="2025-09-13T00:05:54.975768168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:54.977812 containerd[1490]: time="2025-09-13T00:05:54.977551643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:54.977812 containerd[1490]: time="2025-09-13T00:05:54.977604652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:54.977812 containerd[1490]: time="2025-09-13T00:05:54.977632905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:54.977812 containerd[1490]: time="2025-09-13T00:05:54.977746829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:54.978818 containerd[1490]: time="2025-09-13T00:05:54.978507206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:54.978818 containerd[1490]: time="2025-09-13T00:05:54.978657627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:54.979537 containerd[1490]: time="2025-09-13T00:05:54.979060002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:54.982900 containerd[1490]: time="2025-09-13T00:05:54.982595382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:55.002840 systemd[1]: Started cri-containerd-66ebba08c5cbd44052654ae30cc484e1221f1a4c4f2ba2f006510677c36af9ef.scope - libcontainer container 66ebba08c5cbd44052654ae30cc484e1221f1a4c4f2ba2f006510677c36af9ef. 
Sep 13 00:05:55.008257 systemd[1]: Started cri-containerd-446eea339663913f074994a8a27b9c483edb9c72042b31314afe7724b04bdeb9.scope - libcontainer container 446eea339663913f074994a8a27b9c483edb9c72042b31314afe7724b04bdeb9. Sep 13 00:05:55.009691 systemd[1]: Started cri-containerd-90794541e8385cb6046d36d502a4dd993522716ee515c2c88bc77970e4e9c5c3.scope - libcontainer container 90794541e8385cb6046d36d502a4dd993522716ee515c2c88bc77970e4e9c5c3. Sep 13 00:05:55.055627 containerd[1490]: time="2025-09-13T00:05:55.055554629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-c4418ce715,Uid:b6e6317966f8bc14aea33c0d915651c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"66ebba08c5cbd44052654ae30cc484e1221f1a4c4f2ba2f006510677c36af9ef\"" Sep 13 00:05:55.066773 containerd[1490]: time="2025-09-13T00:05:55.066730519Z" level=info msg="CreateContainer within sandbox \"66ebba08c5cbd44052654ae30cc484e1221f1a4c4f2ba2f006510677c36af9ef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:05:55.068061 containerd[1490]: time="2025-09-13T00:05:55.068042780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-c4418ce715,Uid:b5979239783d81365616443ba1ac8384,Namespace:kube-system,Attempt:0,} returns sandbox id \"90794541e8385cb6046d36d502a4dd993522716ee515c2c88bc77970e4e9c5c3\"" Sep 13 00:05:55.073523 containerd[1490]: time="2025-09-13T00:05:55.073492191Z" level=info msg="CreateContainer within sandbox \"90794541e8385cb6046d36d502a4dd993522716ee515c2c88bc77970e4e9c5c3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:05:55.076346 containerd[1490]: time="2025-09-13T00:05:55.076328780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-c4418ce715,Uid:720bf32479d9328046f4eb7792df283e,Namespace:kube-system,Attempt:0,} returns sandbox id \"446eea339663913f074994a8a27b9c483edb9c72042b31314afe7724b04bdeb9\"" Sep 13 
00:05:55.084794 kubelet[2181]: E0913 00:05:55.084772 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://157.180.30.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-c4418ce715&limit=500&resourceVersion=0\": dial tcp 157.180.30.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:05:55.087863 containerd[1490]: time="2025-09-13T00:05:55.087821495Z" level=info msg="CreateContainer within sandbox \"446eea339663913f074994a8a27b9c483edb9c72042b31314afe7724b04bdeb9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:05:55.105402 containerd[1490]: time="2025-09-13T00:05:55.104915836Z" level=info msg="CreateContainer within sandbox \"90794541e8385cb6046d36d502a4dd993522716ee515c2c88bc77970e4e9c5c3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a4e91a408b41a4f1aff6467c85de68d22f2f5c8da597d25b6317c7b9a67a245b\"" Sep 13 00:05:55.106508 containerd[1490]: time="2025-09-13T00:05:55.106469530Z" level=info msg="CreateContainer within sandbox \"446eea339663913f074994a8a27b9c483edb9c72042b31314afe7724b04bdeb9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c0d9e26904c3681ff9331c34b62d373f0753b31b1a8392456b0e0390d4cfbe26\"" Sep 13 00:05:55.106658 containerd[1490]: time="2025-09-13T00:05:55.106634299Z" level=info msg="StartContainer for \"a4e91a408b41a4f1aff6467c85de68d22f2f5c8da597d25b6317c7b9a67a245b\"" Sep 13 00:05:55.109717 containerd[1490]: time="2025-09-13T00:05:55.108803086Z" level=info msg="CreateContainer within sandbox \"66ebba08c5cbd44052654ae30cc484e1221f1a4c4f2ba2f006510677c36af9ef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fc93833dc1dd6f8bca1802f6f64a33ce23ba8d136cc2c1d6eeb6622e7286c99c\"" Sep 13 00:05:55.109717 containerd[1490]: time="2025-09-13T00:05:55.108955452Z" level=info 
msg="StartContainer for \"c0d9e26904c3681ff9331c34b62d373f0753b31b1a8392456b0e0390d4cfbe26\"" Sep 13 00:05:55.114737 containerd[1490]: time="2025-09-13T00:05:55.113978623Z" level=info msg="StartContainer for \"fc93833dc1dd6f8bca1802f6f64a33ce23ba8d136cc2c1d6eeb6622e7286c99c\"" Sep 13 00:05:55.129841 systemd[1]: Started cri-containerd-c0d9e26904c3681ff9331c34b62d373f0753b31b1a8392456b0e0390d4cfbe26.scope - libcontainer container c0d9e26904c3681ff9331c34b62d373f0753b31b1a8392456b0e0390d4cfbe26. Sep 13 00:05:55.145790 systemd[1]: Started cri-containerd-a4e91a408b41a4f1aff6467c85de68d22f2f5c8da597d25b6317c7b9a67a245b.scope - libcontainer container a4e91a408b41a4f1aff6467c85de68d22f2f5c8da597d25b6317c7b9a67a245b. Sep 13 00:05:55.149863 systemd[1]: Started cri-containerd-fc93833dc1dd6f8bca1802f6f64a33ce23ba8d136cc2c1d6eeb6622e7286c99c.scope - libcontainer container fc93833dc1dd6f8bca1802f6f64a33ce23ba8d136cc2c1d6eeb6622e7286c99c. Sep 13 00:05:55.200033 containerd[1490]: time="2025-09-13T00:05:55.199011904Z" level=info msg="StartContainer for \"c0d9e26904c3681ff9331c34b62d373f0753b31b1a8392456b0e0390d4cfbe26\" returns successfully" Sep 13 00:05:55.214040 containerd[1490]: time="2025-09-13T00:05:55.213479598Z" level=info msg="StartContainer for \"a4e91a408b41a4f1aff6467c85de68d22f2f5c8da597d25b6317c7b9a67a245b\" returns successfully" Sep 13 00:05:55.218291 containerd[1490]: time="2025-09-13T00:05:55.218260866Z" level=info msg="StartContainer for \"fc93833dc1dd6f8bca1802f6f64a33ce23ba8d136cc2c1d6eeb6622e7286c99c\" returns successfully" Sep 13 00:05:55.220473 kubelet[2181]: E0913 00:05:55.220439 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://157.180.30.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.30.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:05:55.339737 kubelet[2181]: E0913 
00:05:55.339701 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://157.180.30.217:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.30.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:05:55.361331 kubelet[2181]: E0913 00:05:55.361295 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.30.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-c4418ce715?timeout=10s\": dial tcp 157.180.30.217:6443: connect: connection refused" interval="1.6s" Sep 13 00:05:55.400819 kubelet[2181]: E0913 00:05:55.400780 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://157.180.30.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.30.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:05:55.500744 kubelet[2181]: I0913 00:05:55.500704 2181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:55.501011 kubelet[2181]: E0913 00:05:55.500986 2181 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.30.217:6443/api/v1/nodes\": dial tcp 157.180.30.217:6443: connect: connection refused" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:55.974647 kubelet[2181]: E0913 00:05:55.968138 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-c4418ce715\" not found" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:55.980530 kubelet[2181]: E0913 00:05:55.980505 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-c4418ce715\" not 
found" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:55.980749 kubelet[2181]: E0913 00:05:55.980732 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-c4418ce715\" not found" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:56.966350 kubelet[2181]: E0913 00:05:56.966255 2181 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-c4418ce715\" not found" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:56.984812 kubelet[2181]: E0913 00:05:56.984735 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-c4418ce715\" not found" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:56.985220 kubelet[2181]: E0913 00:05:56.985149 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-c4418ce715\" not found" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:57.104275 kubelet[2181]: I0913 00:05:57.104162 2181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:57.120113 kubelet[2181]: I0913 00:05:57.120047 2181 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:57.120113 kubelet[2181]: E0913 00:05:57.120106 2181 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-n-c4418ce715\": node \"ci-4081-3-5-n-c4418ce715\" not found" Sep 13 00:05:57.133343 kubelet[2181]: E0913 00:05:57.133275 2181 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c4418ce715\" not found" Sep 13 00:05:57.234570 kubelet[2181]: E0913 00:05:57.234382 2181 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c4418ce715\" not found" Sep 13 00:05:57.335465 kubelet[2181]: E0913 00:05:57.335412 2181 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c4418ce715\" not found" Sep 13 00:05:57.436040 kubelet[2181]: E0913 00:05:57.435964 2181 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c4418ce715\" not found" Sep 13 00:05:57.536504 kubelet[2181]: E0913 00:05:57.536355 2181 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c4418ce715\" not found" Sep 13 00:05:57.636548 kubelet[2181]: E0913 00:05:57.636481 2181 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c4418ce715\" not found" Sep 13 00:05:57.737430 kubelet[2181]: E0913 00:05:57.737385 2181 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c4418ce715\" not found" Sep 13 00:05:57.838271 kubelet[2181]: E0913 00:05:57.838216 2181 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c4418ce715\" not found" Sep 13 00:05:57.938747 kubelet[2181]: E0913 00:05:57.938701 2181 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c4418ce715\" not found" Sep 13 00:05:58.039147 kubelet[2181]: E0913 00:05:58.039077 2181 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c4418ce715\" not found" Sep 13 00:05:58.134022 kubelet[2181]: I0913 00:05:58.133871 2181 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:58.149043 kubelet[2181]: I0913 00:05:58.149003 2181 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:58.154055 kubelet[2181]: I0913 00:05:58.154019 2181 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-c4418ce715" Sep 13 
00:05:58.768584 systemd[1]: Reloading requested from client PID 2465 ('systemctl') (unit session-7.scope)... Sep 13 00:05:58.768607 systemd[1]: Reloading... Sep 13 00:05:58.865725 zram_generator::config[2508]: No configuration found. Sep 13 00:05:58.907143 kubelet[2181]: I0913 00:05:58.906922 2181 apiserver.go:52] "Watching apiserver" Sep 13 00:05:58.939422 kubelet[2181]: I0913 00:05:58.939371 2181 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:05:58.954113 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:05:59.032579 systemd[1]: Reloading finished in 263 ms. Sep 13 00:05:59.073914 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:05:59.093812 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:05:59.094001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:05:59.099950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:05:59.195435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:05:59.198901 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:05:59.243837 kubelet[2556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:05:59.243837 kubelet[2556]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 13 00:05:59.243837 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:05:59.244255 kubelet[2556]: I0913 00:05:59.243876 2556 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:05:59.252811 kubelet[2556]: I0913 00:05:59.252777 2556 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:05:59.252811 kubelet[2556]: I0913 00:05:59.252798 2556 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:05:59.252970 kubelet[2556]: I0913 00:05:59.252955 2556 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:05:59.255225 kubelet[2556]: I0913 00:05:59.255108 2556 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 00:05:59.260927 kubelet[2556]: I0913 00:05:59.260893 2556 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:05:59.264302 kubelet[2556]: E0913 00:05:59.264271 2556 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:05:59.264302 kubelet[2556]: I0913 00:05:59.264299 2556 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:05:59.268630 kubelet[2556]: I0913 00:05:59.268583 2556 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:05:59.270997 kubelet[2556]: I0913 00:05:59.270969 2556 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:05:59.271233 kubelet[2556]: I0913 00:05:59.271068 2556 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-c4418ce715","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:05:59.272437 kubelet[2556]: I0913 00:05:59.272420 2556 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 
00:05:59.272499 kubelet[2556]: I0913 00:05:59.272491 2556 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:05:59.272589 kubelet[2556]: I0913 00:05:59.272580 2556 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:05:59.272828 kubelet[2556]: I0913 00:05:59.272815 2556 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:05:59.272893 kubelet[2556]: I0913 00:05:59.272884 2556 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:05:59.272965 kubelet[2556]: I0913 00:05:59.272957 2556 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:05:59.273024 kubelet[2556]: I0913 00:05:59.273015 2556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:05:59.277530 kubelet[2556]: I0913 00:05:59.277513 2556 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:05:59.279616 kubelet[2556]: I0913 00:05:59.279601 2556 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:05:59.284114 kubelet[2556]: I0913 00:05:59.284029 2556 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:05:59.284290 kubelet[2556]: I0913 00:05:59.284279 2556 server.go:1289] "Started kubelet" Sep 13 00:05:59.286481 kubelet[2556]: I0913 00:05:59.286469 2556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:05:59.290048 kubelet[2556]: I0913 00:05:59.289612 2556 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:05:59.290633 kubelet[2556]: I0913 00:05:59.290393 2556 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:05:59.295381 kubelet[2556]: I0913 00:05:59.295326 2556 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:05:59.295520 kubelet[2556]: I0913 00:05:59.295498 2556 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:05:59.295787 kubelet[2556]: I0913 00:05:59.295761 2556 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:05:59.297440 kubelet[2556]: I0913 00:05:59.297416 2556 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:05:59.297594 kubelet[2556]: E0913 00:05:59.297565 2556 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c4418ce715\" not found" Sep 13 00:05:59.299200 kubelet[2556]: I0913 00:05:59.298809 2556 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:05:59.299200 kubelet[2556]: I0913 00:05:59.299149 2556 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:05:59.300357 kubelet[2556]: I0913 00:05:59.300339 2556 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:05:59.300509 kubelet[2556]: I0913 00:05:59.300493 2556 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:05:59.301532 kubelet[2556]: I0913 00:05:59.301486 2556 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:05:59.303384 kubelet[2556]: I0913 00:05:59.303357 2556 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:05:59.303492 kubelet[2556]: I0913 00:05:59.303473 2556 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:05:59.303574 kubelet[2556]: I0913 00:05:59.303501 2556 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:05:59.303574 kubelet[2556]: I0913 00:05:59.303508 2556 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:05:59.303574 kubelet[2556]: E0913 00:05:59.303574 2556 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:05:59.303944 kubelet[2556]: I0913 00:05:59.303829 2556 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:05:59.347877 kubelet[2556]: I0913 00:05:59.347855 2556 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:05:59.348455 kubelet[2556]: I0913 00:05:59.348050 2556 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:05:59.348455 kubelet[2556]: I0913 00:05:59.348069 2556 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:05:59.348455 kubelet[2556]: I0913 00:05:59.348192 2556 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:05:59.348455 kubelet[2556]: I0913 00:05:59.348201 2556 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:05:59.348455 kubelet[2556]: I0913 00:05:59.348216 2556 policy_none.go:49] "None policy: Start" Sep 13 00:05:59.348455 kubelet[2556]: I0913 00:05:59.348225 2556 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:05:59.348455 kubelet[2556]: I0913 00:05:59.348233 2556 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:05:59.348455 kubelet[2556]: I0913 00:05:59.348304 2556 state_mem.go:75] "Updated machine memory state" Sep 13 00:05:59.351894 kubelet[2556]: E0913 00:05:59.351867 2556 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:05:59.352028 kubelet[2556]: I0913 00:05:59.352007 2556 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:05:59.352056 kubelet[2556]: I0913 00:05:59.352023 2556 container_log_manager.go:189] "Initializing 
container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:05:59.352548 kubelet[2556]: I0913 00:05:59.352493 2556 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:05:59.353377 kubelet[2556]: E0913 00:05:59.353301 2556 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:05:59.404618 kubelet[2556]: I0913 00:05:59.404504 2556 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.404618 kubelet[2556]: I0913 00:05:59.404544 2556 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.405261 kubelet[2556]: I0913 00:05:59.405246 2556 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.414963 kubelet[2556]: E0913 00:05:59.414859 2556 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-c4418ce715\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.415010 kubelet[2556]: E0913 00:05:59.414989 2556 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-c4418ce715\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.415122 kubelet[2556]: E0913 00:05:59.415046 2556 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-n-c4418ce715\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.454151 kubelet[2556]: I0913 00:05:59.454106 2556 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.463011 kubelet[2556]: I0913 00:05:59.462795 2556 kubelet_node_status.go:124] "Node was previously registered" 
node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.463011 kubelet[2556]: I0913 00:05:59.462852 2556 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.602087 kubelet[2556]: I0913 00:05:59.601988 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5979239783d81365616443ba1ac8384-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-c4418ce715\" (UID: \"b5979239783d81365616443ba1ac8384\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.602087 kubelet[2556]: I0913 00:05:59.602054 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/720bf32479d9328046f4eb7792df283e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-c4418ce715\" (UID: \"720bf32479d9328046f4eb7792df283e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.602492 kubelet[2556]: I0913 00:05:59.602120 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/720bf32479d9328046f4eb7792df283e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-c4418ce715\" (UID: \"720bf32479d9328046f4eb7792df283e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.602492 kubelet[2556]: I0913 00:05:59.602158 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/720bf32479d9328046f4eb7792df283e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-c4418ce715\" (UID: \"720bf32479d9328046f4eb7792df283e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.602492 kubelet[2556]: I0913 00:05:59.602242 2556 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5979239783d81365616443ba1ac8384-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-c4418ce715\" (UID: \"b5979239783d81365616443ba1ac8384\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.602492 kubelet[2556]: I0913 00:05:59.602283 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5979239783d81365616443ba1ac8384-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-c4418ce715\" (UID: \"b5979239783d81365616443ba1ac8384\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.602492 kubelet[2556]: I0913 00:05:59.602307 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/720bf32479d9328046f4eb7792df283e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-c4418ce715\" (UID: \"720bf32479d9328046f4eb7792df283e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.602642 kubelet[2556]: I0913 00:05:59.602327 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/720bf32479d9328046f4eb7792df283e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-c4418ce715\" (UID: \"720bf32479d9328046f4eb7792df283e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.602642 kubelet[2556]: I0913 00:05:59.602364 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6e6317966f8bc14aea33c0d915651c7-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-c4418ce715\" (UID: 
\"b6e6317966f8bc14aea33c0d915651c7\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-c4418ce715" Sep 13 00:05:59.783063 sudo[2595]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:05:59.783377 sudo[2595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 13 00:06:00.274330 kubelet[2556]: I0913 00:06:00.274298 2556 apiserver.go:52] "Watching apiserver" Sep 13 00:06:00.300335 kubelet[2556]: I0913 00:06:00.299886 2556 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:06:00.335821 kubelet[2556]: I0913 00:06:00.335796 2556 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:06:00.336446 kubelet[2556]: I0913 00:06:00.336416 2556 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-c4418ce715" Sep 13 00:06:00.336633 kubelet[2556]: I0913 00:06:00.336609 2556 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-c4418ce715" Sep 13 00:06:00.347452 kubelet[2556]: E0913 00:06:00.347335 2556 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-n-c4418ce715\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" Sep 13 00:06:00.349952 kubelet[2556]: E0913 00:06:00.349929 2556 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-c4418ce715\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-n-c4418ce715" Sep 13 00:06:00.350130 kubelet[2556]: E0913 00:06:00.350111 2556 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-c4418ce715\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-n-c4418ce715" Sep 13 00:06:00.367437 sudo[2595]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:00.379693 
kubelet[2556]: I0913 00:06:00.378533 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c4418ce715" podStartSLOduration=2.378519273 podStartE2EDuration="2.378519273s" podCreationTimestamp="2025-09-13 00:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:00.377767213 +0000 UTC m=+1.171141590" watchObservedRunningTime="2025-09-13 00:06:00.378519273 +0000 UTC m=+1.171893650" Sep 13 00:06:00.394644 kubelet[2556]: I0913 00:06:00.394596 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-c4418ce715" podStartSLOduration=2.394581447 podStartE2EDuration="2.394581447s" podCreationTimestamp="2025-09-13 00:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:00.387118801 +0000 UTC m=+1.180493169" watchObservedRunningTime="2025-09-13 00:06:00.394581447 +0000 UTC m=+1.187955825" Sep 13 00:06:00.403821 kubelet[2556]: I0913 00:06:00.403781 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-c4418ce715" podStartSLOduration=2.403770692 podStartE2EDuration="2.403770692s" podCreationTimestamp="2025-09-13 00:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:00.395127361 +0000 UTC m=+1.188501738" watchObservedRunningTime="2025-09-13 00:06:00.403770692 +0000 UTC m=+1.197145069" Sep 13 00:06:01.326292 update_engine[1474]: I20250913 00:06:01.325745 1474 update_attempter.cc:509] Updating boot flags... 
Sep 13 00:06:01.366988 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2626) Sep 13 00:06:01.424396 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2621) Sep 13 00:06:01.683257 sudo[1688]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:01.859039 sshd[1685]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:01.862343 systemd[1]: sshd@6-157.180.30.217:22-147.75.109.163:47406.service: Deactivated successfully. Sep 13 00:06:01.864828 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:06:01.865033 systemd[1]: session-7.scope: Consumed 4.977s CPU time, 144.8M memory peak, 0B memory swap peak. Sep 13 00:06:01.866524 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:06:01.868288 systemd-logind[1471]: Removed session 7. Sep 13 00:06:03.724220 kubelet[2556]: I0913 00:06:03.724183 2556 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:06:03.724579 containerd[1490]: time="2025-09-13T00:06:03.724537194Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:06:03.724917 kubelet[2556]: I0913 00:06:03.724877 2556 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:06:04.890856 systemd[1]: Created slice kubepods-besteffort-pod785d329b_2bf8_4304_ab57_4d38a9eb38f6.slice - libcontainer container kubepods-besteffort-pod785d329b_2bf8_4304_ab57_4d38a9eb38f6.slice. Sep 13 00:06:04.903469 systemd[1]: Created slice kubepods-burstable-pod6644efb3_30af_4b55_b4ab_57f748061b1e.slice - libcontainer container kubepods-burstable-pod6644efb3_30af_4b55_b4ab_57f748061b1e.slice. 
Sep 13 00:06:04.932964 kubelet[2556]: I0913 00:06:04.932304 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-etc-cni-netd\") pod \"cilium-w9hfs\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.932964 kubelet[2556]: I0913 00:06:04.932334 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-lib-modules\") pod \"cilium-w9hfs\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.932964 kubelet[2556]: I0913 00:06:04.932381 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-xtables-lock\") pod \"cilium-w9hfs\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.932964 kubelet[2556]: I0913 00:06:04.932397 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/785d329b-2bf8-4304-ab57-4d38a9eb38f6-lib-modules\") pod \"kube-proxy-lztc2\" (UID: \"785d329b-2bf8-4304-ab57-4d38a9eb38f6\") " pod="kube-system/kube-proxy-lztc2" Sep 13 00:06:04.932964 kubelet[2556]: I0913 00:06:04.932408 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-hostproc\") pod \"cilium-w9hfs\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.932964 kubelet[2556]: I0913 00:06:04.932419 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-host-proc-sys-net\") pod \"cilium-w9hfs\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.933379 kubelet[2556]: I0913 00:06:04.932430 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6644efb3-30af-4b55-b4ab-57f748061b1e-hubble-tls\") pod \"cilium-w9hfs\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.933379 kubelet[2556]: I0913 00:06:04.932440 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-cilium-run\") pod \"cilium-w9hfs\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.933379 kubelet[2556]: I0913 00:06:04.932451 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/785d329b-2bf8-4304-ab57-4d38a9eb38f6-kube-proxy\") pod \"kube-proxy-lztc2\" (UID: \"785d329b-2bf8-4304-ab57-4d38a9eb38f6\") " pod="kube-system/kube-proxy-lztc2" Sep 13 00:06:04.933379 kubelet[2556]: I0913 00:06:04.932464 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b224\" (UniqueName: \"kubernetes.io/projected/785d329b-2bf8-4304-ab57-4d38a9eb38f6-kube-api-access-9b224\") pod \"kube-proxy-lztc2\" (UID: \"785d329b-2bf8-4304-ab57-4d38a9eb38f6\") " pod="kube-system/kube-proxy-lztc2" Sep 13 00:06:04.933379 kubelet[2556]: I0913 00:06:04.932476 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-cilium-cgroup\") pod \"cilium-w9hfs\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.933379 kubelet[2556]: I0913 00:06:04.932487 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-cni-path\") pod \"cilium-w9hfs\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.933488 kubelet[2556]: I0913 00:06:04.932501 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6644efb3-30af-4b55-b4ab-57f748061b1e-cilium-config-path\") pod \"cilium-w9hfs\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.933488 kubelet[2556]: I0913 00:06:04.932513 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-bpf-maps\") pod \"cilium-w9hfs\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.933488 kubelet[2556]: I0913 00:06:04.932523 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6644efb3-30af-4b55-b4ab-57f748061b1e-clustermesh-secrets\") pod \"cilium-w9hfs\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.933488 kubelet[2556]: I0913 00:06:04.932533 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-host-proc-sys-kernel\") pod \"cilium-w9hfs\" (UID: 
\"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.933488 kubelet[2556]: I0913 00:06:04.932545 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5jmr\" (UniqueName: \"kubernetes.io/projected/6644efb3-30af-4b55-b4ab-57f748061b1e-kube-api-access-s5jmr\") pod \"cilium-w9hfs\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " pod="kube-system/cilium-w9hfs" Sep 13 00:06:04.933571 kubelet[2556]: I0913 00:06:04.932557 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/785d329b-2bf8-4304-ab57-4d38a9eb38f6-xtables-lock\") pod \"kube-proxy-lztc2\" (UID: \"785d329b-2bf8-4304-ab57-4d38a9eb38f6\") " pod="kube-system/kube-proxy-lztc2" Sep 13 00:06:04.987310 systemd[1]: Created slice kubepods-besteffort-pod38dde979_faad_4dc0_9dd0_070f6a4bbf46.slice - libcontainer container kubepods-besteffort-pod38dde979_faad_4dc0_9dd0_070f6a4bbf46.slice. 
Sep 13 00:06:05.033957 kubelet[2556]: I0913 00:06:05.033467 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf8pf\" (UniqueName: \"kubernetes.io/projected/38dde979-faad-4dc0-9dd0-070f6a4bbf46-kube-api-access-kf8pf\") pod \"cilium-operator-6c4d7847fc-m7rfm\" (UID: \"38dde979-faad-4dc0-9dd0-070f6a4bbf46\") " pod="kube-system/cilium-operator-6c4d7847fc-m7rfm" Sep 13 00:06:05.033957 kubelet[2556]: I0913 00:06:05.033526 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38dde979-faad-4dc0-9dd0-070f6a4bbf46-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-m7rfm\" (UID: \"38dde979-faad-4dc0-9dd0-070f6a4bbf46\") " pod="kube-system/cilium-operator-6c4d7847fc-m7rfm" Sep 13 00:06:05.201960 containerd[1490]: time="2025-09-13T00:06:05.201837996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lztc2,Uid:785d329b-2bf8-4304-ab57-4d38a9eb38f6,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:05.209896 containerd[1490]: time="2025-09-13T00:06:05.209856094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9hfs,Uid:6644efb3-30af-4b55-b4ab-57f748061b1e,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:05.234256 containerd[1490]: time="2025-09-13T00:06:05.234060089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:05.234949 containerd[1490]: time="2025-09-13T00:06:05.234882691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:05.235023 containerd[1490]: time="2025-09-13T00:06:05.234939949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:05.235173 containerd[1490]: time="2025-09-13T00:06:05.235089700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:05.238337 containerd[1490]: time="2025-09-13T00:06:05.238248815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:05.238552 containerd[1490]: time="2025-09-13T00:06:05.238511858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:05.238552 containerd[1490]: time="2025-09-13T00:06:05.238536214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:05.239040 containerd[1490]: time="2025-09-13T00:06:05.238982060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:05.256823 systemd[1]: Started cri-containerd-6f86b4c996bc3381334ae7b660301181b97fb808716dc8b0bb81a02b60e4ed53.scope - libcontainer container 6f86b4c996bc3381334ae7b660301181b97fb808716dc8b0bb81a02b60e4ed53. Sep 13 00:06:05.258628 systemd[1]: Started cri-containerd-836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98.scope - libcontainer container 836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98. 
Sep 13 00:06:05.287225 containerd[1490]: time="2025-09-13T00:06:05.286820281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lztc2,Uid:785d329b-2bf8-4304-ab57-4d38a9eb38f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f86b4c996bc3381334ae7b660301181b97fb808716dc8b0bb81a02b60e4ed53\"" Sep 13 00:06:05.290257 containerd[1490]: time="2025-09-13T00:06:05.290069045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m7rfm,Uid:38dde979-faad-4dc0-9dd0-070f6a4bbf46,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:05.292972 containerd[1490]: time="2025-09-13T00:06:05.292723402Z" level=info msg="CreateContainer within sandbox \"6f86b4c996bc3381334ae7b660301181b97fb808716dc8b0bb81a02b60e4ed53\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:06:05.298706 containerd[1490]: time="2025-09-13T00:06:05.298287127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9hfs,Uid:6644efb3-30af-4b55-b4ab-57f748061b1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\"" Sep 13 00:06:05.301934 containerd[1490]: time="2025-09-13T00:06:05.301855379Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:06:05.319683 containerd[1490]: time="2025-09-13T00:06:05.318881262Z" level=info msg="CreateContainer within sandbox \"6f86b4c996bc3381334ae7b660301181b97fb808716dc8b0bb81a02b60e4ed53\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f1b9d5a328404b9fc6b15bd81e7e73b70ea2e52e9922c1e2384b497fab89510c\"" Sep 13 00:06:05.320956 containerd[1490]: time="2025-09-13T00:06:05.320919525Z" level=info msg="StartContainer for \"f1b9d5a328404b9fc6b15bd81e7e73b70ea2e52e9922c1e2384b497fab89510c\"" Sep 13 00:06:05.325976 containerd[1490]: time="2025-09-13T00:06:05.325804106Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:05.326188 containerd[1490]: time="2025-09-13T00:06:05.325904865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:05.326188 containerd[1490]: time="2025-09-13T00:06:05.326165514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:05.326949 containerd[1490]: time="2025-09-13T00:06:05.326821183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:05.345820 systemd[1]: Started cri-containerd-87c82fdb262ad39b95eefa65f5314d2f4728a4eb894f09b6359efba29f095a8c.scope - libcontainer container 87c82fdb262ad39b95eefa65f5314d2f4728a4eb894f09b6359efba29f095a8c. Sep 13 00:06:05.361786 systemd[1]: Started cri-containerd-f1b9d5a328404b9fc6b15bd81e7e73b70ea2e52e9922c1e2384b497fab89510c.scope - libcontainer container f1b9d5a328404b9fc6b15bd81e7e73b70ea2e52e9922c1e2384b497fab89510c. 
Sep 13 00:06:05.397384 containerd[1490]: time="2025-09-13T00:06:05.397294397Z" level=info msg="StartContainer for \"f1b9d5a328404b9fc6b15bd81e7e73b70ea2e52e9922c1e2384b497fab89510c\" returns successfully" Sep 13 00:06:05.401377 containerd[1490]: time="2025-09-13T00:06:05.401330486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m7rfm,Uid:38dde979-faad-4dc0-9dd0-070f6a4bbf46,Namespace:kube-system,Attempt:0,} returns sandbox id \"87c82fdb262ad39b95eefa65f5314d2f4728a4eb894f09b6359efba29f095a8c\"" Sep 13 00:06:11.247117 kubelet[2556]: I0913 00:06:11.244816 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lztc2" podStartSLOduration=7.23999664 podStartE2EDuration="7.23999664s" podCreationTimestamp="2025-09-13 00:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:06.363235299 +0000 UTC m=+7.156609705" watchObservedRunningTime="2025-09-13 00:06:11.23999664 +0000 UTC m=+12.033371007" Sep 13 00:06:11.660477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3965344813.mount: Deactivated successfully. 
Sep 13 00:06:12.875133 containerd[1490]: time="2025-09-13T00:06:12.875070675Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:12.876297 containerd[1490]: time="2025-09-13T00:06:12.876194212Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 13 00:06:12.877047 containerd[1490]: time="2025-09-13T00:06:12.877011591Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:12.878267 containerd[1490]: time="2025-09-13T00:06:12.878172160Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.57626866s" Sep 13 00:06:12.878267 containerd[1490]: time="2025-09-13T00:06:12.878197640Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 00:06:12.879531 containerd[1490]: time="2025-09-13T00:06:12.879504916Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 00:06:12.890939 containerd[1490]: time="2025-09-13T00:06:12.890909090Z" level=info msg="CreateContainer within sandbox \"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:06:12.941281 containerd[1490]: time="2025-09-13T00:06:12.941238871Z" level=info msg="CreateContainer within sandbox \"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b\"" Sep 13 00:06:12.943513 containerd[1490]: time="2025-09-13T00:06:12.942799042Z" level=info msg="StartContainer for \"85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b\"" Sep 13 00:06:13.016883 systemd[1]: Started cri-containerd-85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b.scope - libcontainer container 85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b. Sep 13 00:06:13.035941 containerd[1490]: time="2025-09-13T00:06:13.035905966Z" level=info msg="StartContainer for \"85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b\" returns successfully" Sep 13 00:06:13.051912 systemd[1]: cri-containerd-85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b.scope: Deactivated successfully.
Sep 13 00:06:13.118270 containerd[1490]: time="2025-09-13T00:06:13.103637981Z" level=info msg="shim disconnected" id=85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b namespace=k8s.io Sep 13 00:06:13.118270 containerd[1490]: time="2025-09-13T00:06:13.118233633Z" level=warning msg="cleaning up after shim disconnected" id=85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b namespace=k8s.io Sep 13 00:06:13.118270 containerd[1490]: time="2025-09-13T00:06:13.118252008Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:13.373042 containerd[1490]: time="2025-09-13T00:06:13.372983775Z" level=info msg="CreateContainer within sandbox \"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:06:13.386536 containerd[1490]: time="2025-09-13T00:06:13.386494379Z" level=info msg="CreateContainer within sandbox \"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f\"" Sep 13 00:06:13.388692 containerd[1490]: time="2025-09-13T00:06:13.387166871Z" level=info msg="StartContainer for \"66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f\"" Sep 13 00:06:13.416824 systemd[1]: Started cri-containerd-66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f.scope - libcontainer container 66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f. Sep 13 00:06:13.445589 containerd[1490]: time="2025-09-13T00:06:13.445499550Z" level=info msg="StartContainer for \"66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f\" returns successfully" Sep 13 00:06:13.457581 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:06:13.457847 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Sep 13 00:06:13.457909 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:06:13.463939 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:06:13.464125 systemd[1]: cri-containerd-66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f.scope: Deactivated successfully. Sep 13 00:06:13.490996 containerd[1490]: time="2025-09-13T00:06:13.490856037Z" level=info msg="shim disconnected" id=66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f namespace=k8s.io Sep 13 00:06:13.491200 containerd[1490]: time="2025-09-13T00:06:13.491077861Z" level=warning msg="cleaning up after shim disconnected" id=66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f namespace=k8s.io Sep 13 00:06:13.491200 containerd[1490]: time="2025-09-13T00:06:13.491089333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:13.502013 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:06:13.931903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b-rootfs.mount: Deactivated successfully. 
Sep 13 00:06:14.377275 containerd[1490]: time="2025-09-13T00:06:14.377201663Z" level=info msg="CreateContainer within sandbox \"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:06:14.420038 containerd[1490]: time="2025-09-13T00:06:14.418722062Z" level=info msg="CreateContainer within sandbox \"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a\"" Sep 13 00:06:14.420792 containerd[1490]: time="2025-09-13T00:06:14.420648571Z" level=info msg="StartContainer for \"c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a\"" Sep 13 00:06:14.460964 systemd[1]: Started cri-containerd-c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a.scope - libcontainer container c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a. Sep 13 00:06:14.495902 containerd[1490]: time="2025-09-13T00:06:14.495857197Z" level=info msg="StartContainer for \"c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a\" returns successfully" Sep 13 00:06:14.497856 systemd[1]: cri-containerd-c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a.scope: Deactivated successfully. 
Sep 13 00:06:14.526743 containerd[1490]: time="2025-09-13T00:06:14.526662484Z" level=info msg="shim disconnected" id=c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a namespace=k8s.io Sep 13 00:06:14.526743 containerd[1490]: time="2025-09-13T00:06:14.526731599Z" level=warning msg="cleaning up after shim disconnected" id=c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a namespace=k8s.io Sep 13 00:06:14.526743 containerd[1490]: time="2025-09-13T00:06:14.526741047Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:14.932654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a-rootfs.mount: Deactivated successfully. Sep 13 00:06:15.031954 containerd[1490]: time="2025-09-13T00:06:15.031907375Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:15.032891 containerd[1490]: time="2025-09-13T00:06:15.032700054Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 13 00:06:15.033700 containerd[1490]: time="2025-09-13T00:06:15.033620802Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:15.041032 containerd[1490]: time="2025-09-13T00:06:15.040957296Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.161424405s" Sep 13 00:06:15.041032 containerd[1490]: time="2025-09-13T00:06:15.041011080Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 00:06:15.045363 containerd[1490]: time="2025-09-13T00:06:15.045312262Z" level=info msg="CreateContainer within sandbox \"87c82fdb262ad39b95eefa65f5314d2f4728a4eb894f09b6359efba29f095a8c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:06:15.104806 containerd[1490]: time="2025-09-13T00:06:15.104742100Z" level=info msg="CreateContainer within sandbox \"87c82fdb262ad39b95eefa65f5314d2f4728a4eb894f09b6359efba29f095a8c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\"" Sep 13 00:06:15.106663 containerd[1490]: time="2025-09-13T00:06:15.105838249Z" level=info msg="StartContainer for \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\"" Sep 13 00:06:15.138874 systemd[1]: Started cri-containerd-2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3.scope - libcontainer container 2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3.
Sep 13 00:06:15.164578 containerd[1490]: time="2025-09-13T00:06:15.164545405Z" level=info msg="StartContainer for \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\" returns successfully" Sep 13 00:06:15.407872 containerd[1490]: time="2025-09-13T00:06:15.407771381Z" level=info msg="CreateContainer within sandbox \"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:06:15.422183 containerd[1490]: time="2025-09-13T00:06:15.422131834Z" level=info msg="CreateContainer within sandbox \"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229\"" Sep 13 00:06:15.424773 containerd[1490]: time="2025-09-13T00:06:15.422871440Z" level=info msg="StartContainer for \"bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229\"" Sep 13 00:06:15.426425 kubelet[2556]: I0913 00:06:15.426354 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-m7rfm" podStartSLOduration=1.7873424249999998 podStartE2EDuration="11.426335476s" podCreationTimestamp="2025-09-13 00:06:04 +0000 UTC" firstStartedPulling="2025-09-13 00:06:05.402726033 +0000 UTC m=+6.196100401" lastFinishedPulling="2025-09-13 00:06:15.041719085 +0000 UTC m=+15.835093452" observedRunningTime="2025-09-13 00:06:15.392452258 +0000 UTC m=+16.185826626" watchObservedRunningTime="2025-09-13 00:06:15.426335476 +0000 UTC m=+16.219709842" Sep 13 00:06:15.453812 systemd[1]: Started cri-containerd-bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229.scope - libcontainer container bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229. 
Sep 13 00:06:15.482594 containerd[1490]: time="2025-09-13T00:06:15.482554430Z" level=info msg="StartContainer for \"bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229\" returns successfully" Sep 13 00:06:15.488169 systemd[1]: cri-containerd-bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229.scope: Deactivated successfully. Sep 13 00:06:15.509760 containerd[1490]: time="2025-09-13T00:06:15.509693884Z" level=info msg="shim disconnected" id=bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229 namespace=k8s.io Sep 13 00:06:15.509760 containerd[1490]: time="2025-09-13T00:06:15.509746576Z" level=warning msg="cleaning up after shim disconnected" id=bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229 namespace=k8s.io Sep 13 00:06:15.509760 containerd[1490]: time="2025-09-13T00:06:15.509754701Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:15.932328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1219838037.mount: Deactivated successfully. 
Sep 13 00:06:16.411149 containerd[1490]: time="2025-09-13T00:06:16.411111643Z" level=info msg="CreateContainer within sandbox \"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:06:16.454992 containerd[1490]: time="2025-09-13T00:06:16.454924078Z" level=info msg="CreateContainer within sandbox \"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\"" Sep 13 00:06:16.455604 containerd[1490]: time="2025-09-13T00:06:16.455574508Z" level=info msg="StartContainer for \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\"" Sep 13 00:06:16.489808 systemd[1]: Started cri-containerd-25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9.scope - libcontainer container 25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9. Sep 13 00:06:16.516462 containerd[1490]: time="2025-09-13T00:06:16.516395116Z" level=info msg="StartContainer for \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\" returns successfully" Sep 13 00:06:16.668457 kubelet[2556]: I0913 00:06:16.667943 2556 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:06:16.715010 systemd[1]: Created slice kubepods-burstable-pod049a517d_07c1_491a_8547_41531f95d918.slice - libcontainer container kubepods-burstable-pod049a517d_07c1_491a_8547_41531f95d918.slice. Sep 13 00:06:16.720565 systemd[1]: Created slice kubepods-burstable-pod346e61bf_1088_4967_bac5_077ee1880dfd.slice - libcontainer container kubepods-burstable-pod346e61bf_1088_4967_bac5_077ee1880dfd.slice. 
Sep 13 00:06:16.721920 kubelet[2556]: I0913 00:06:16.721900 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldsh\" (UniqueName: \"kubernetes.io/projected/049a517d-07c1-491a-8547-41531f95d918-kube-api-access-7ldsh\") pod \"coredns-674b8bbfcf-hrj2c\" (UID: \"049a517d-07c1-491a-8547-41531f95d918\") " pod="kube-system/coredns-674b8bbfcf-hrj2c" Sep 13 00:06:16.722024 kubelet[2556]: I0913 00:06:16.722011 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/049a517d-07c1-491a-8547-41531f95d918-config-volume\") pod \"coredns-674b8bbfcf-hrj2c\" (UID: \"049a517d-07c1-491a-8547-41531f95d918\") " pod="kube-system/coredns-674b8bbfcf-hrj2c" Sep 13 00:06:16.722310 kubelet[2556]: I0913 00:06:16.722296 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxvhm\" (UniqueName: \"kubernetes.io/projected/346e61bf-1088-4967-bac5-077ee1880dfd-kube-api-access-sxvhm\") pod \"coredns-674b8bbfcf-xlqvb\" (UID: \"346e61bf-1088-4967-bac5-077ee1880dfd\") " pod="kube-system/coredns-674b8bbfcf-xlqvb" Sep 13 00:06:16.722371 kubelet[2556]: I0913 00:06:16.722361 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/346e61bf-1088-4967-bac5-077ee1880dfd-config-volume\") pod \"coredns-674b8bbfcf-xlqvb\" (UID: \"346e61bf-1088-4967-bac5-077ee1880dfd\") " pod="kube-system/coredns-674b8bbfcf-xlqvb" Sep 13 00:06:16.936619 systemd[1]: run-containerd-runc-k8s.io-25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9-runc.NosxZF.mount: Deactivated successfully. 
Sep 13 00:06:17.023629 containerd[1490]: time="2025-09-13T00:06:17.023579300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xlqvb,Uid:346e61bf-1088-4967-bac5-077ee1880dfd,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:17.024780 containerd[1490]: time="2025-09-13T00:06:17.023954746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hrj2c,Uid:049a517d-07c1-491a-8547-41531f95d918,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:18.744449 systemd-networkd[1395]: cilium_host: Link UP Sep 13 00:06:18.747392 systemd-networkd[1395]: cilium_net: Link UP Sep 13 00:06:18.747772 systemd-networkd[1395]: cilium_net: Gained carrier Sep 13 00:06:18.748118 systemd-networkd[1395]: cilium_host: Gained carrier Sep 13 00:06:18.929532 systemd-networkd[1395]: cilium_vxlan: Link UP Sep 13 00:06:18.929549 systemd-networkd[1395]: cilium_vxlan: Gained carrier Sep 13 00:06:19.122945 systemd-networkd[1395]: cilium_net: Gained IPv6LL Sep 13 00:06:19.481739 kernel: NET: Registered PF_ALG protocol family Sep 13 00:06:19.618246 systemd-networkd[1395]: cilium_host: Gained IPv6LL Sep 13 00:06:20.131342 systemd-networkd[1395]: cilium_vxlan: Gained IPv6LL Sep 13 00:06:20.212875 systemd-networkd[1395]: lxc_health: Link UP Sep 13 00:06:20.220970 systemd-networkd[1395]: lxc_health: Gained carrier Sep 13 00:06:20.590815 systemd-networkd[1395]: lxc5f557a1b43eb: Link UP Sep 13 00:06:20.597735 kernel: eth0: renamed from tmp12973 Sep 13 00:06:20.607746 systemd-networkd[1395]: lxc5f557a1b43eb: Gained carrier Sep 13 00:06:20.608608 systemd-networkd[1395]: lxcc1fdaefeff01: Link UP Sep 13 00:06:20.614786 kernel: eth0: renamed from tmp5c4d7 Sep 13 00:06:20.618457 systemd-networkd[1395]: lxcc1fdaefeff01: Gained carrier
Sep 13 00:06:21.231196 kubelet[2556]: I0913 00:06:21.230419 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w9hfs" podStartSLOduration=9.651312107999999 podStartE2EDuration="17.230403304s" podCreationTimestamp="2025-09-13 00:06:04 +0000 UTC" firstStartedPulling="2025-09-13 00:06:05.300222207 +0000 UTC m=+6.093596573" lastFinishedPulling="2025-09-13 00:06:12.879313403 +0000 UTC m=+13.672687769" observedRunningTime="2025-09-13 00:06:17.442080852 +0000 UTC m=+18.235455300" watchObservedRunningTime="2025-09-13 00:06:21.230403304 +0000 UTC m=+22.023777691" Sep 13 00:06:21.601909 systemd-networkd[1395]: lxc_health: Gained IPv6LL Sep 13 00:06:21.985829 systemd-networkd[1395]: lxc5f557a1b43eb: Gained IPv6LL Sep 13 00:06:22.497841 systemd-networkd[1395]: lxcc1fdaefeff01: Gained IPv6LL Sep 13 00:06:23.594809 containerd[1490]: time="2025-09-13T00:06:23.593422918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:23.594809 containerd[1490]: time="2025-09-13T00:06:23.593469677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:23.594809 containerd[1490]: time="2025-09-13T00:06:23.593481831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:23.594809 containerd[1490]: time="2025-09-13T00:06:23.593536505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:23.630794 systemd[1]: Started cri-containerd-12973abf12337e11710d7bb0ea43a1eb256e1f98c5858a29b96d0c5fe10cafb1.scope - libcontainer container 12973abf12337e11710d7bb0ea43a1eb256e1f98c5858a29b96d0c5fe10cafb1.
Sep 13 00:06:23.643975 containerd[1490]: time="2025-09-13T00:06:23.641317253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:23.643975 containerd[1490]: time="2025-09-13T00:06:23.641356398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:23.643975 containerd[1490]: time="2025-09-13T00:06:23.641365896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:23.643975 containerd[1490]: time="2025-09-13T00:06:23.641452171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:23.662782 systemd[1]: Started cri-containerd-5c4d76a4c2f195c406fcc9eca012686a1208831359c5d9d8fc9a60c3e4cf7fad.scope - libcontainer container 5c4d76a4c2f195c406fcc9eca012686a1208831359c5d9d8fc9a60c3e4cf7fad. Sep 13 00:06:23.696938 containerd[1490]: time="2025-09-13T00:06:23.696901363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xlqvb,Uid:346e61bf-1088-4967-bac5-077ee1880dfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"12973abf12337e11710d7bb0ea43a1eb256e1f98c5858a29b96d0c5fe10cafb1\"" Sep 13 00:06:23.702695 containerd[1490]: time="2025-09-13T00:06:23.702621447Z" level=info msg="CreateContainer within sandbox \"12973abf12337e11710d7bb0ea43a1eb256e1f98c5858a29b96d0c5fe10cafb1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:06:23.727829 containerd[1490]: time="2025-09-13T00:06:23.727750628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hrj2c,Uid:049a517d-07c1-491a-8547-41531f95d918,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c4d76a4c2f195c406fcc9eca012686a1208831359c5d9d8fc9a60c3e4cf7fad\""
Sep 13 00:06:23.729607 containerd[1490]: time="2025-09-13T00:06:23.729578919Z" level=info msg="CreateContainer within sandbox \"12973abf12337e11710d7bb0ea43a1eb256e1f98c5858a29b96d0c5fe10cafb1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"538b76da09dec3686ed7c942baa225e88fcf4eca36ea505947e023d3c34ff8fd\"" Sep 13 00:06:23.732115 containerd[1490]: time="2025-09-13T00:06:23.732039992Z" level=info msg="StartContainer for \"538b76da09dec3686ed7c942baa225e88fcf4eca36ea505947e023d3c34ff8fd\"" Sep 13 00:06:23.737362 containerd[1490]: time="2025-09-13T00:06:23.737329442Z" level=info msg="CreateContainer within sandbox \"5c4d76a4c2f195c406fcc9eca012686a1208831359c5d9d8fc9a60c3e4cf7fad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:06:23.758613 containerd[1490]: time="2025-09-13T00:06:23.758586307Z" level=info msg="CreateContainer within sandbox \"5c4d76a4c2f195c406fcc9eca012686a1208831359c5d9d8fc9a60c3e4cf7fad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bdc5ca09743c14cc17289ca2e6987076e0849cdddb16b58dac5664e95c3ac86c\"" Sep 13 00:06:23.759960 containerd[1490]: time="2025-09-13T00:06:23.759814379Z" level=info msg="StartContainer for \"bdc5ca09743c14cc17289ca2e6987076e0849cdddb16b58dac5664e95c3ac86c\"" Sep 13 00:06:23.776555 systemd[1]: Started cri-containerd-538b76da09dec3686ed7c942baa225e88fcf4eca36ea505947e023d3c34ff8fd.scope - libcontainer container 538b76da09dec3686ed7c942baa225e88fcf4eca36ea505947e023d3c34ff8fd. Sep 13 00:06:23.795816 systemd[1]: Started cri-containerd-bdc5ca09743c14cc17289ca2e6987076e0849cdddb16b58dac5664e95c3ac86c.scope - libcontainer container bdc5ca09743c14cc17289ca2e6987076e0849cdddb16b58dac5664e95c3ac86c.
Sep 13 00:06:23.814124 containerd[1490]: time="2025-09-13T00:06:23.814094743Z" level=info msg="StartContainer for \"538b76da09dec3686ed7c942baa225e88fcf4eca36ea505947e023d3c34ff8fd\" returns successfully" Sep 13 00:06:23.836707 containerd[1490]: time="2025-09-13T00:06:23.835589565Z" level=info msg="StartContainer for \"bdc5ca09743c14cc17289ca2e6987076e0849cdddb16b58dac5664e95c3ac86c\" returns successfully" Sep 13 00:06:24.453815 kubelet[2556]: I0913 00:06:24.453715 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hrj2c" podStartSLOduration=20.453661827 podStartE2EDuration="20.453661827s" podCreationTimestamp="2025-09-13 00:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:24.452760283 +0000 UTC m=+25.246134730" watchObservedRunningTime="2025-09-13 00:06:24.453661827 +0000 UTC m=+25.247036233" Sep 13 00:08:23.673608 systemd[1]: Started sshd@7-157.180.30.217:22-147.75.109.163:36046.service - OpenSSH per-connection server daemon (147.75.109.163:36046). Sep 13 00:08:24.771064 sshd[3958]: Accepted publickey for core from 147.75.109.163 port 36046 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:24.774527 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:24.783705 systemd-logind[1471]: New session 8 of user core. Sep 13 00:08:24.788901 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:08:26.309082 sshd[3958]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:26.313431 systemd[1]: sshd@7-157.180.30.217:22-147.75.109.163:36046.service: Deactivated successfully. Sep 13 00:08:26.317345 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:08:26.322061 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit. 
Sep 13 00:08:26.325662 systemd-logind[1471]: Removed session 8. Sep 13 00:08:31.455916 systemd[1]: Started sshd@8-157.180.30.217:22-147.75.109.163:35714.service - OpenSSH per-connection server daemon (147.75.109.163:35714). Sep 13 00:08:32.425947 sshd[3971]: Accepted publickey for core from 147.75.109.163 port 35714 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:32.427632 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:32.432858 systemd-logind[1471]: New session 9 of user core. Sep 13 00:08:32.439793 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:08:33.186096 sshd[3971]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:33.188778 systemd[1]: sshd@8-157.180.30.217:22-147.75.109.163:35714.service: Deactivated successfully. Sep 13 00:08:33.190267 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:08:33.191960 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:08:33.193307 systemd-logind[1471]: Removed session 9. Sep 13 00:08:38.351767 systemd[1]: Started sshd@9-157.180.30.217:22-147.75.109.163:35716.service - OpenSSH per-connection server daemon (147.75.109.163:35716). Sep 13 00:08:39.322264 sshd[3987]: Accepted publickey for core from 147.75.109.163 port 35716 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:39.323615 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:39.327983 systemd-logind[1471]: New session 10 of user core. Sep 13 00:08:39.338811 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:08:40.053073 sshd[3987]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:40.056270 systemd[1]: sshd@9-157.180.30.217:22-147.75.109.163:35716.service: Deactivated successfully. Sep 13 00:08:40.058002 systemd[1]: session-10.scope: Deactivated successfully. 
Sep 13 00:08:40.058846 systemd-logind[1471]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:08:40.059967 systemd-logind[1471]: Removed session 10. Sep 13 00:08:40.254009 systemd[1]: Started sshd@10-157.180.30.217:22-147.75.109.163:45114.service - OpenSSH per-connection server daemon (147.75.109.163:45114). Sep 13 00:08:41.323720 sshd[4001]: Accepted publickey for core from 147.75.109.163 port 45114 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:41.325195 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:41.329859 systemd-logind[1471]: New session 11 of user core. Sep 13 00:08:41.338864 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:08:42.167237 sshd[4001]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:42.170765 systemd[1]: sshd@10-157.180.30.217:22-147.75.109.163:45114.service: Deactivated successfully. Sep 13 00:08:42.172608 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:08:42.173637 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:08:42.176512 systemd-logind[1471]: Removed session 11. Sep 13 00:08:42.315559 systemd[1]: Started sshd@11-157.180.30.217:22-147.75.109.163:45126.service - OpenSSH per-connection server daemon (147.75.109.163:45126). Sep 13 00:08:43.285265 sshd[4012]: Accepted publickey for core from 147.75.109.163 port 45126 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:43.288549 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:43.292725 systemd-logind[1471]: New session 12 of user core. Sep 13 00:08:43.296858 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:08:44.017956 sshd[4012]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:44.021344 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit. 
Sep 13 00:08:44.022052 systemd[1]: sshd@11-157.180.30.217:22-147.75.109.163:45126.service: Deactivated successfully. Sep 13 00:08:44.024120 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:08:44.025142 systemd-logind[1471]: Removed session 12. Sep 13 00:08:49.232136 systemd[1]: Started sshd@12-157.180.30.217:22-147.75.109.163:45134.service - OpenSSH per-connection server daemon (147.75.109.163:45134). Sep 13 00:08:50.312346 sshd[4025]: Accepted publickey for core from 147.75.109.163 port 45134 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:50.314440 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:50.322145 systemd-logind[1471]: New session 13 of user core. Sep 13 00:08:50.328927 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:08:51.108243 sshd[4025]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:51.111925 systemd[1]: sshd@12-157.180.30.217:22-147.75.109.163:45134.service: Deactivated successfully. Sep 13 00:08:51.114209 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:08:51.115121 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:08:51.116544 systemd-logind[1471]: Removed session 13. Sep 13 00:08:51.291095 systemd[1]: Started sshd@13-157.180.30.217:22-147.75.109.163:48578.service - OpenSSH per-connection server daemon (147.75.109.163:48578). Sep 13 00:08:52.364021 sshd[4038]: Accepted publickey for core from 147.75.109.163 port 48578 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:52.365303 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:52.369605 systemd-logind[1471]: New session 14 of user core. Sep 13 00:08:52.377822 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 13 00:08:53.361621 sshd[4038]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:53.368471 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:08:53.368835 systemd[1]: sshd@13-157.180.30.217:22-147.75.109.163:48578.service: Deactivated successfully. Sep 13 00:08:53.370697 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:08:53.371979 systemd-logind[1471]: Removed session 14. Sep 13 00:08:53.510202 systemd[1]: Started sshd@14-157.180.30.217:22-147.75.109.163:48586.service - OpenSSH per-connection server daemon (147.75.109.163:48586). Sep 13 00:08:54.487453 sshd[4049]: Accepted publickey for core from 147.75.109.163 port 48586 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:54.490129 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:54.495392 systemd-logind[1471]: New session 15 of user core. Sep 13 00:08:54.503803 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:08:55.764854 sshd[4049]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:55.768239 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:08:55.768751 systemd[1]: sshd@14-157.180.30.217:22-147.75.109.163:48586.service: Deactivated successfully. Sep 13 00:08:55.770569 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:08:55.771632 systemd-logind[1471]: Removed session 15. Sep 13 00:08:55.930634 systemd[1]: Started sshd@15-157.180.30.217:22-147.75.109.163:48588.service - OpenSSH per-connection server daemon (147.75.109.163:48588). Sep 13 00:08:56.904485 sshd[4067]: Accepted publickey for core from 147.75.109.163 port 48588 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:56.905804 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:56.911176 systemd-logind[1471]: New session 16 of user core. 
Sep 13 00:08:56.917862 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:08:57.782010 sshd[4067]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:57.784235 systemd[1]: sshd@15-157.180.30.217:22-147.75.109.163:48588.service: Deactivated successfully. Sep 13 00:08:57.785965 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:08:57.787154 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:08:57.788341 systemd-logind[1471]: Removed session 16. Sep 13 00:08:57.947879 systemd[1]: Started sshd@16-157.180.30.217:22-147.75.109.163:48594.service - OpenSSH per-connection server daemon (147.75.109.163:48594). Sep 13 00:08:58.912706 sshd[4077]: Accepted publickey for core from 147.75.109.163 port 48594 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:08:58.914135 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:58.919821 systemd-logind[1471]: New session 17 of user core. Sep 13 00:08:58.921822 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:08:59.652716 sshd[4077]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:59.656745 systemd[1]: sshd@16-157.180.30.217:22-147.75.109.163:48594.service: Deactivated successfully. Sep 13 00:08:59.659146 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:08:59.660109 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:08:59.661907 systemd-logind[1471]: Removed session 17. Sep 13 00:09:04.862977 systemd[1]: Started sshd@17-157.180.30.217:22-147.75.109.163:44658.service - OpenSSH per-connection server daemon (147.75.109.163:44658). 
Sep 13 00:09:05.942972 sshd[4094]: Accepted publickey for core from 147.75.109.163 port 44658 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:05.945039 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:05.954774 systemd-logind[1471]: New session 18 of user core. Sep 13 00:09:05.958884 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:09:06.784592 sshd[4094]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:06.789384 systemd[1]: sshd@17-157.180.30.217:22-147.75.109.163:44658.service: Deactivated successfully. Sep 13 00:09:06.793132 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:09:06.794543 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:09:06.797182 systemd-logind[1471]: Removed session 18. Sep 13 00:09:11.945087 systemd[1]: Started sshd@18-157.180.30.217:22-147.75.109.163:33672.service - OpenSSH per-connection server daemon (147.75.109.163:33672). Sep 13 00:09:12.925849 sshd[4109]: Accepted publickey for core from 147.75.109.163 port 33672 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:12.927359 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:12.931842 systemd-logind[1471]: New session 19 of user core. Sep 13 00:09:12.937821 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:09:13.657840 sshd[4109]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:13.660555 systemd[1]: sshd@18-157.180.30.217:22-147.75.109.163:33672.service: Deactivated successfully. Sep 13 00:09:13.662391 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:09:13.663529 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:09:13.664613 systemd-logind[1471]: Removed session 19. 
Sep 13 00:09:13.858111 systemd[1]: Started sshd@19-157.180.30.217:22-147.75.109.163:33682.service - OpenSSH per-connection server daemon (147.75.109.163:33682). Sep 13 00:09:14.929069 sshd[4122]: Accepted publickey for core from 147.75.109.163 port 33682 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:14.930372 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:14.934759 systemd-logind[1471]: New session 20 of user core. Sep 13 00:09:14.937801 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:09:16.861445 kubelet[2556]: I0913 00:09:16.860439 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xlqvb" podStartSLOduration=192.860423163 podStartE2EDuration="3m12.860423163s" podCreationTimestamp="2025-09-13 00:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:24.499493209 +0000 UTC m=+25.292867616" watchObservedRunningTime="2025-09-13 00:09:16.860423163 +0000 UTC m=+197.653797540" Sep 13 00:09:16.897715 containerd[1490]: time="2025-09-13T00:09:16.896937449Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:09:16.914580 containerd[1490]: time="2025-09-13T00:09:16.914524190Z" level=info msg="StopContainer for \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\" with timeout 2 (s)" Sep 13 00:09:16.914699 containerd[1490]: time="2025-09-13T00:09:16.914649355Z" level=info msg="StopContainer for \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\" with timeout 30 (s)" Sep 13 00:09:16.915067 containerd[1490]: time="2025-09-13T00:09:16.915038305Z" level=info msg="Stop 
container \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\" with signal terminated" Sep 13 00:09:16.915305 containerd[1490]: time="2025-09-13T00:09:16.915041471Z" level=info msg="Stop container \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\" with signal terminated" Sep 13 00:09:16.922034 systemd-networkd[1395]: lxc_health: Link DOWN Sep 13 00:09:16.922040 systemd-networkd[1395]: lxc_health: Lost carrier Sep 13 00:09:16.934992 systemd[1]: cri-containerd-2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3.scope: Deactivated successfully. Sep 13 00:09:16.937162 systemd[1]: cri-containerd-25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9.scope: Deactivated successfully. Sep 13 00:09:16.937347 systemd[1]: cri-containerd-25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9.scope: Consumed 6.511s CPU time. Sep 13 00:09:16.958890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3-rootfs.mount: Deactivated successfully. Sep 13 00:09:16.961775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9-rootfs.mount: Deactivated successfully. 
Sep 13 00:09:16.965922 containerd[1490]: time="2025-09-13T00:09:16.965754748Z" level=info msg="shim disconnected" id=25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9 namespace=k8s.io Sep 13 00:09:16.965922 containerd[1490]: time="2025-09-13T00:09:16.965799051Z" level=warning msg="cleaning up after shim disconnected" id=25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9 namespace=k8s.io Sep 13 00:09:16.965922 containerd[1490]: time="2025-09-13T00:09:16.965806635Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:16.966812 containerd[1490]: time="2025-09-13T00:09:16.966657954Z" level=info msg="shim disconnected" id=2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3 namespace=k8s.io Sep 13 00:09:16.966812 containerd[1490]: time="2025-09-13T00:09:16.966739447Z" level=warning msg="cleaning up after shim disconnected" id=2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3 namespace=k8s.io Sep 13 00:09:16.966812 containerd[1490]: time="2025-09-13T00:09:16.966767680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:16.979113 containerd[1490]: time="2025-09-13T00:09:16.979048106Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:09:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:09:16.982029 containerd[1490]: time="2025-09-13T00:09:16.982000170Z" level=info msg="StopContainer for \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\" returns successfully" Sep 13 00:09:16.982541 containerd[1490]: time="2025-09-13T00:09:16.982474752Z" level=info msg="StopContainer for \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\" returns successfully" Sep 13 00:09:16.984414 containerd[1490]: time="2025-09-13T00:09:16.984236819Z" level=info msg="StopPodSandbox for 
\"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\"" Sep 13 00:09:16.984414 containerd[1490]: time="2025-09-13T00:09:16.984266384Z" level=info msg="Container to stop \"66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:09:16.984414 containerd[1490]: time="2025-09-13T00:09:16.984277416Z" level=info msg="Container to stop \"c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:09:16.984414 containerd[1490]: time="2025-09-13T00:09:16.984285591Z" level=info msg="Container to stop \"bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:09:16.984414 containerd[1490]: time="2025-09-13T00:09:16.984292604Z" level=info msg="Container to stop \"85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:09:16.984414 containerd[1490]: time="2025-09-13T00:09:16.984300759Z" level=info msg="Container to stop \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:09:16.984414 containerd[1490]: time="2025-09-13T00:09:16.984313212Z" level=info msg="StopPodSandbox for \"87c82fdb262ad39b95eefa65f5314d2f4728a4eb894f09b6359efba29f095a8c\"" Sep 13 00:09:16.984414 containerd[1490]: time="2025-09-13T00:09:16.984336937Z" level=info msg="Container to stop \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:09:16.988493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87c82fdb262ad39b95eefa65f5314d2f4728a4eb894f09b6359efba29f095a8c-shm.mount: Deactivated successfully. 
Sep 13 00:09:16.988579 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98-shm.mount: Deactivated successfully. Sep 13 00:09:16.990966 systemd[1]: cri-containerd-87c82fdb262ad39b95eefa65f5314d2f4728a4eb894f09b6359efba29f095a8c.scope: Deactivated successfully. Sep 13 00:09:16.993078 systemd[1]: cri-containerd-836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98.scope: Deactivated successfully. Sep 13 00:09:17.020393 containerd[1490]: time="2025-09-13T00:09:17.020306091Z" level=info msg="shim disconnected" id=836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98 namespace=k8s.io Sep 13 00:09:17.020393 containerd[1490]: time="2025-09-13T00:09:17.020349343Z" level=warning msg="cleaning up after shim disconnected" id=836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98 namespace=k8s.io Sep 13 00:09:17.020710 containerd[1490]: time="2025-09-13T00:09:17.020504573Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:17.021320 containerd[1490]: time="2025-09-13T00:09:17.021281863Z" level=info msg="shim disconnected" id=87c82fdb262ad39b95eefa65f5314d2f4728a4eb894f09b6359efba29f095a8c namespace=k8s.io Sep 13 00:09:17.021519 containerd[1490]: time="2025-09-13T00:09:17.021316719Z" level=warning msg="cleaning up after shim disconnected" id=87c82fdb262ad39b95eefa65f5314d2f4728a4eb894f09b6359efba29f095a8c namespace=k8s.io Sep 13 00:09:17.021519 containerd[1490]: time="2025-09-13T00:09:17.021514720Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:17.032271 containerd[1490]: time="2025-09-13T00:09:17.032221763Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:09:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:09:17.039120 containerd[1490]: time="2025-09-13T00:09:17.038999579Z" level=info 
msg="TearDown network for sandbox \"87c82fdb262ad39b95eefa65f5314d2f4728a4eb894f09b6359efba29f095a8c\" successfully" Sep 13 00:09:17.039120 containerd[1490]: time="2025-09-13T00:09:17.039022683Z" level=info msg="StopPodSandbox for \"87c82fdb262ad39b95eefa65f5314d2f4728a4eb894f09b6359efba29f095a8c\" returns successfully" Sep 13 00:09:17.041521 containerd[1490]: time="2025-09-13T00:09:17.041465088Z" level=info msg="TearDown network for sandbox \"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\" successfully" Sep 13 00:09:17.041521 containerd[1490]: time="2025-09-13T00:09:17.041486378Z" level=info msg="StopPodSandbox for \"836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98\" returns successfully" Sep 13 00:09:17.157027 kubelet[2556]: I0913 00:09:17.156896 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6644efb3-30af-4b55-b4ab-57f748061b1e-clustermesh-secrets\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.157027 kubelet[2556]: I0913 00:09:17.156938 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5jmr\" (UniqueName: \"kubernetes.io/projected/6644efb3-30af-4b55-b4ab-57f748061b1e-kube-api-access-s5jmr\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.157027 kubelet[2556]: I0913 00:09:17.156960 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-lib-modules\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.157027 kubelet[2556]: I0913 00:09:17.156988 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-xtables-lock\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.157027 kubelet[2556]: I0913 00:09:17.157002 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-cilium-cgroup\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.157027 kubelet[2556]: I0913 00:09:17.157014 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-bpf-maps\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.157303 kubelet[2556]: I0913 00:09:17.157027 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-host-proc-sys-kernel\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.157303 kubelet[2556]: I0913 00:09:17.157042 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf8pf\" (UniqueName: \"kubernetes.io/projected/38dde979-faad-4dc0-9dd0-070f6a4bbf46-kube-api-access-kf8pf\") pod \"38dde979-faad-4dc0-9dd0-070f6a4bbf46\" (UID: \"38dde979-faad-4dc0-9dd0-070f6a4bbf46\") " Sep 13 00:09:17.157303 kubelet[2556]: I0913 00:09:17.157065 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38dde979-faad-4dc0-9dd0-070f6a4bbf46-cilium-config-path\") pod \"38dde979-faad-4dc0-9dd0-070f6a4bbf46\" (UID: \"38dde979-faad-4dc0-9dd0-070f6a4bbf46\") " Sep 13 00:09:17.157303 kubelet[2556]: I0913 
00:09:17.157080 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6644efb3-30af-4b55-b4ab-57f748061b1e-hubble-tls\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.157303 kubelet[2556]: I0913 00:09:17.157095 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-cni-path\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.157303 kubelet[2556]: I0913 00:09:17.157109 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6644efb3-30af-4b55-b4ab-57f748061b1e-cilium-config-path\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.157427 kubelet[2556]: I0913 00:09:17.157122 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-etc-cni-netd\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.157427 kubelet[2556]: I0913 00:09:17.157136 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-hostproc\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.157427 kubelet[2556]: I0913 00:09:17.157163 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-host-proc-sys-net\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: 
\"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.157427 kubelet[2556]: I0913 00:09:17.157175 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-cilium-run\") pod \"6644efb3-30af-4b55-b4ab-57f748061b1e\" (UID: \"6644efb3-30af-4b55-b4ab-57f748061b1e\") " Sep 13 00:09:17.166116 kubelet[2556]: I0913 00:09:17.165593 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6644efb3-30af-4b55-b4ab-57f748061b1e-kube-api-access-s5jmr" (OuterVolumeSpecName: "kube-api-access-s5jmr") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "kube-api-access-s5jmr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:09:17.166116 kubelet[2556]: I0913 00:09:17.165648 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:09:17.166116 kubelet[2556]: I0913 00:09:17.165710 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:09:17.166116 kubelet[2556]: I0913 00:09:17.165725 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:09:17.166116 kubelet[2556]: I0913 00:09:17.165763 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:09:17.166301 kubelet[2556]: I0913 00:09:17.165783 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:09:17.166301 kubelet[2556]: I0913 00:09:17.165868 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38dde979-faad-4dc0-9dd0-070f6a4bbf46-kube-api-access-kf8pf" (OuterVolumeSpecName: "kube-api-access-kf8pf") pod "38dde979-faad-4dc0-9dd0-070f6a4bbf46" (UID: "38dde979-faad-4dc0-9dd0-070f6a4bbf46"). InnerVolumeSpecName "kube-api-access-kf8pf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:09:17.167744 kubelet[2556]: I0913 00:09:17.164059 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:09:17.167744 kubelet[2556]: I0913 00:09:17.166833 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:09:17.167744 kubelet[2556]: I0913 00:09:17.166853 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-hostproc" (OuterVolumeSpecName: "hostproc") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:09:17.167744 kubelet[2556]: I0913 00:09:17.166867 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:09:17.168289 kubelet[2556]: I0913 00:09:17.168269 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6644efb3-30af-4b55-b4ab-57f748061b1e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:09:17.168391 kubelet[2556]: I0913 00:09:17.168378 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-cni-path" (OuterVolumeSpecName: "cni-path") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:09:17.168794 kubelet[2556]: I0913 00:09:17.168756 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6644efb3-30af-4b55-b4ab-57f748061b1e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:09:17.169028 kubelet[2556]: I0913 00:09:17.169005 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6644efb3-30af-4b55-b4ab-57f748061b1e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6644efb3-30af-4b55-b4ab-57f748061b1e" (UID: "6644efb3-30af-4b55-b4ab-57f748061b1e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:09:17.170425 kubelet[2556]: I0913 00:09:17.170374 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38dde979-faad-4dc0-9dd0-070f6a4bbf46-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "38dde979-faad-4dc0-9dd0-070f6a4bbf46" (UID: "38dde979-faad-4dc0-9dd0-070f6a4bbf46"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:09:17.259559 kubelet[2556]: I0913 00:09:17.259486 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6644efb3-30af-4b55-b4ab-57f748061b1e-cilium-config-path\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.259559 kubelet[2556]: I0913 00:09:17.259534 2556 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-etc-cni-netd\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.259559 kubelet[2556]: I0913 00:09:17.259558 2556 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-hostproc\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.259559 kubelet[2556]: I0913 00:09:17.259575 2556 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-host-proc-sys-net\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.259875 kubelet[2556]: I0913 00:09:17.259592 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-cilium-run\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.259875 kubelet[2556]: I0913 00:09:17.259602 2556 
reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6644efb3-30af-4b55-b4ab-57f748061b1e-clustermesh-secrets\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.259875 kubelet[2556]: I0913 00:09:17.259610 2556 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s5jmr\" (UniqueName: \"kubernetes.io/projected/6644efb3-30af-4b55-b4ab-57f748061b1e-kube-api-access-s5jmr\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.259875 kubelet[2556]: I0913 00:09:17.259621 2556 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-lib-modules\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.259875 kubelet[2556]: I0913 00:09:17.259629 2556 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-xtables-lock\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.259875 kubelet[2556]: I0913 00:09:17.259636 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-cilium-cgroup\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.259875 kubelet[2556]: I0913 00:09:17.259644 2556 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-bpf-maps\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.259875 kubelet[2556]: I0913 00:09:17.259652 2556 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-host-proc-sys-kernel\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.260046 kubelet[2556]: I0913 
00:09:17.259660 2556 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kf8pf\" (UniqueName: \"kubernetes.io/projected/38dde979-faad-4dc0-9dd0-070f6a4bbf46-kube-api-access-kf8pf\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.260046 kubelet[2556]: I0913 00:09:17.259688 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38dde979-faad-4dc0-9dd0-070f6a4bbf46-cilium-config-path\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.260046 kubelet[2556]: I0913 00:09:17.259698 2556 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6644efb3-30af-4b55-b4ab-57f748061b1e-hubble-tls\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.260046 kubelet[2556]: I0913 00:09:17.259707 2556 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6644efb3-30af-4b55-b4ab-57f748061b1e-cni-path\") on node \"ci-4081-3-5-n-c4418ce715\" DevicePath \"\"" Sep 13 00:09:17.310986 systemd[1]: Removed slice kubepods-besteffort-pod38dde979_faad_4dc0_9dd0_070f6a4bbf46.slice - libcontainer container kubepods-besteffort-pod38dde979_faad_4dc0_9dd0_070f6a4bbf46.slice. Sep 13 00:09:17.312282 systemd[1]: Removed slice kubepods-burstable-pod6644efb3_30af_4b55_b4ab_57f748061b1e.slice - libcontainer container kubepods-burstable-pod6644efb3_30af_4b55_b4ab_57f748061b1e.slice. Sep 13 00:09:17.312477 systemd[1]: kubepods-burstable-pod6644efb3_30af_4b55_b4ab_57f748061b1e.slice: Consumed 6.581s CPU time. 
Sep 13 00:09:17.798067 kubelet[2556]: I0913 00:09:17.798029 2556 scope.go:117] "RemoveContainer" containerID="25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9" Sep 13 00:09:17.824623 containerd[1490]: time="2025-09-13T00:09:17.824065466Z" level=info msg="RemoveContainer for \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\"" Sep 13 00:09:17.830155 containerd[1490]: time="2025-09-13T00:09:17.829585299Z" level=info msg="RemoveContainer for \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\" returns successfully" Sep 13 00:09:17.834571 kubelet[2556]: I0913 00:09:17.833332 2556 scope.go:117] "RemoveContainer" containerID="bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229" Sep 13 00:09:17.836748 containerd[1490]: time="2025-09-13T00:09:17.836724914Z" level=info msg="RemoveContainer for \"bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229\"" Sep 13 00:09:17.840441 containerd[1490]: time="2025-09-13T00:09:17.840390346Z" level=info msg="RemoveContainer for \"bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229\" returns successfully" Sep 13 00:09:17.840862 kubelet[2556]: I0913 00:09:17.840830 2556 scope.go:117] "RemoveContainer" containerID="c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a" Sep 13 00:09:17.842834 containerd[1490]: time="2025-09-13T00:09:17.842806664Z" level=info msg="RemoveContainer for \"c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a\"" Sep 13 00:09:17.845433 containerd[1490]: time="2025-09-13T00:09:17.845346051Z" level=info msg="RemoveContainer for \"c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a\" returns successfully" Sep 13 00:09:17.845533 kubelet[2556]: I0913 00:09:17.845487 2556 scope.go:117] "RemoveContainer" containerID="66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f" Sep 13 00:09:17.847374 containerd[1490]: time="2025-09-13T00:09:17.847164135Z" level=info msg="RemoveContainer for 
\"66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f\"" Sep 13 00:09:17.849910 containerd[1490]: time="2025-09-13T00:09:17.849880705Z" level=info msg="RemoveContainer for \"66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f\" returns successfully" Sep 13 00:09:17.851272 kubelet[2556]: I0913 00:09:17.850950 2556 scope.go:117] "RemoveContainer" containerID="85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b" Sep 13 00:09:17.852117 containerd[1490]: time="2025-09-13T00:09:17.852083602Z" level=info msg="RemoveContainer for \"85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b\"" Sep 13 00:09:17.858999 containerd[1490]: time="2025-09-13T00:09:17.858967225Z" level=info msg="RemoveContainer for \"85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b\" returns successfully" Sep 13 00:09:17.859265 kubelet[2556]: I0913 00:09:17.859224 2556 scope.go:117] "RemoveContainer" containerID="25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9" Sep 13 00:09:17.867269 containerd[1490]: time="2025-09-13T00:09:17.861124966Z" level=error msg="ContainerStatus for \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\": not found" Sep 13 00:09:17.870838 kubelet[2556]: E0913 00:09:17.870743 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\": not found" containerID="25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9" Sep 13 00:09:17.871677 kubelet[2556]: I0913 00:09:17.871604 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9"} err="failed to get 
container status \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"25ec824cbc56306113eb1c54c758f2690cceeacdf0e7cc48e843aeb464c0f7a9\": not found" Sep 13 00:09:17.871739 kubelet[2556]: I0913 00:09:17.871666 2556 scope.go:117] "RemoveContainer" containerID="bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229" Sep 13 00:09:17.872776 containerd[1490]: time="2025-09-13T00:09:17.871902853Z" level=error msg="ContainerStatus for \"bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229\": not found" Sep 13 00:09:17.872776 containerd[1490]: time="2025-09-13T00:09:17.872197315Z" level=error msg="ContainerStatus for \"c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a\": not found" Sep 13 00:09:17.872776 containerd[1490]: time="2025-09-13T00:09:17.872450521Z" level=error msg="ContainerStatus for \"66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f\": not found" Sep 13 00:09:17.872776 containerd[1490]: time="2025-09-13T00:09:17.872697164Z" level=error msg="ContainerStatus for \"85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b\": not found" Sep 13 00:09:17.872878 kubelet[2556]: E0913 00:09:17.872024 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229\": not found" containerID="bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229" Sep 13 00:09:17.872878 kubelet[2556]: I0913 00:09:17.872047 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229"} err="failed to get container status \"bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd50569ad16afb463fba7d0ec5fa6bc954cef60873505f5ec86a48a6e7f31229\": not found" Sep 13 00:09:17.872878 kubelet[2556]: I0913 00:09:17.872061 2556 scope.go:117] "RemoveContainer" containerID="c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a" Sep 13 00:09:17.872878 kubelet[2556]: E0913 00:09:17.872286 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a\": not found" containerID="c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a" Sep 13 00:09:17.872878 kubelet[2556]: I0913 00:09:17.872306 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a"} err="failed to get container status \"c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c528b8979cbdfbaed9f01216c2ea975283f84de5cba09b70386255ae15f10a2a\": not found" Sep 13 00:09:17.872878 kubelet[2556]: I0913 00:09:17.872320 2556 scope.go:117] "RemoveContainer" containerID="66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f" Sep 13 00:09:17.873001 kubelet[2556]: E0913 
00:09:17.872537 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f\": not found" containerID="66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f" Sep 13 00:09:17.873001 kubelet[2556]: I0913 00:09:17.872553 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f"} err="failed to get container status \"66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f\": rpc error: code = NotFound desc = an error occurred when try to find container \"66940d31a34ac9b1cc740da139912c4757c44e58e0f1053bc989851a427ce11f\": not found" Sep 13 00:09:17.873001 kubelet[2556]: I0913 00:09:17.872566 2556 scope.go:117] "RemoveContainer" containerID="85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b" Sep 13 00:09:17.873001 kubelet[2556]: E0913 00:09:17.872794 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b\": not found" containerID="85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b" Sep 13 00:09:17.873001 kubelet[2556]: I0913 00:09:17.872813 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b"} err="failed to get container status \"85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"85231f47e3267ca7dec4e934a739809d9b936fdc8fe4ffba5b8c2157346afd4b\": not found" Sep 13 00:09:17.873001 kubelet[2556]: I0913 00:09:17.872851 2556 scope.go:117] "RemoveContainer" 
containerID="2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3" Sep 13 00:09:17.874730 containerd[1490]: time="2025-09-13T00:09:17.873911223Z" level=info msg="RemoveContainer for \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\"" Sep 13 00:09:17.876784 containerd[1490]: time="2025-09-13T00:09:17.876746918Z" level=info msg="RemoveContainer for \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\" returns successfully" Sep 13 00:09:17.876875 kubelet[2556]: I0913 00:09:17.876861 2556 scope.go:117] "RemoveContainer" containerID="2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3" Sep 13 00:09:17.877078 containerd[1490]: time="2025-09-13T00:09:17.876998309Z" level=error msg="ContainerStatus for \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\": not found" Sep 13 00:09:17.878793 kubelet[2556]: E0913 00:09:17.878765 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\": not found" containerID="2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3" Sep 13 00:09:17.878882 kubelet[2556]: I0913 00:09:17.878822 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3"} err="failed to get container status \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\": rpc error: code = NotFound desc = an error occurred when try to find container \"2dc7bd62c35093079680f3c8647998b6856dcb994a9dc903e53b3896ae827ce3\": not found" Sep 13 00:09:17.880025 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-87c82fdb262ad39b95eefa65f5314d2f4728a4eb894f09b6359efba29f095a8c-rootfs.mount: Deactivated successfully. Sep 13 00:09:17.880173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-836d8372e2d48ea74d6207065e5b587c810b2dd62854f4019a0ef38828a13f98-rootfs.mount: Deactivated successfully. Sep 13 00:09:17.880264 systemd[1]: var-lib-kubelet-pods-38dde979\x2dfaad\x2d4dc0\x2d9dd0\x2d070f6a4bbf46-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkf8pf.mount: Deactivated successfully. Sep 13 00:09:17.880360 systemd[1]: var-lib-kubelet-pods-6644efb3\x2d30af\x2d4b55\x2db4ab\x2d57f748061b1e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds5jmr.mount: Deactivated successfully. Sep 13 00:09:17.880455 systemd[1]: var-lib-kubelet-pods-6644efb3\x2d30af\x2d4b55\x2db4ab\x2d57f748061b1e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:09:17.880526 systemd[1]: var-lib-kubelet-pods-6644efb3\x2d30af\x2d4b55\x2db4ab\x2d57f748061b1e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:09:18.990632 sshd[4122]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:18.993790 systemd[1]: sshd@19-157.180.30.217:22-147.75.109.163:33682.service: Deactivated successfully. Sep 13 00:09:18.995797 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:09:18.997106 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:09:18.998402 systemd-logind[1471]: Removed session 20. Sep 13 00:09:19.139294 systemd[1]: Started sshd@20-157.180.30.217:22-147.75.109.163:33692.service - OpenSSH per-connection server daemon (147.75.109.163:33692). 
Sep 13 00:09:19.311315 kubelet[2556]: I0913 00:09:19.310175 2556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38dde979-faad-4dc0-9dd0-070f6a4bbf46" path="/var/lib/kubelet/pods/38dde979-faad-4dc0-9dd0-070f6a4bbf46/volumes" Sep 13 00:09:19.311315 kubelet[2556]: I0913 00:09:19.310649 2556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6644efb3-30af-4b55-b4ab-57f748061b1e" path="/var/lib/kubelet/pods/6644efb3-30af-4b55-b4ab-57f748061b1e/volumes" Sep 13 00:09:19.401640 kubelet[2556]: E0913 00:09:19.401581 2556 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:09:20.120363 sshd[4278]: Accepted publickey for core from 147.75.109.163 port 33692 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:20.121755 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:20.125721 systemd-logind[1471]: New session 21 of user core. Sep 13 00:09:20.135857 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:09:21.359557 systemd[1]: Created slice kubepods-burstable-pod2f6c0174_3efc_498e_a67f_1cbe9448eb20.slice - libcontainer container kubepods-burstable-pod2f6c0174_3efc_498e_a67f_1cbe9448eb20.slice. 
Sep 13 00:09:21.385363 kubelet[2556]: I0913 00:09:21.385303 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f6c0174-3efc-498e-a67f-1cbe9448eb20-bpf-maps\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.388721 kubelet[2556]: I0913 00:09:21.387039 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kph2\" (UniqueName: \"kubernetes.io/projected/2f6c0174-3efc-498e-a67f-1cbe9448eb20-kube-api-access-5kph2\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.388721 kubelet[2556]: I0913 00:09:21.387130 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f6c0174-3efc-498e-a67f-1cbe9448eb20-clustermesh-secrets\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.388721 kubelet[2556]: I0913 00:09:21.387152 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2f6c0174-3efc-498e-a67f-1cbe9448eb20-cilium-ipsec-secrets\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.388721 kubelet[2556]: I0913 00:09:21.387167 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f6c0174-3efc-498e-a67f-1cbe9448eb20-cilium-cgroup\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.388721 kubelet[2556]: I0913 00:09:21.387184 2556 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f6c0174-3efc-498e-a67f-1cbe9448eb20-cilium-config-path\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.388986 kubelet[2556]: I0913 00:09:21.387200 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f6c0174-3efc-498e-a67f-1cbe9448eb20-lib-modules\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.388986 kubelet[2556]: I0913 00:09:21.387212 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f6c0174-3efc-498e-a67f-1cbe9448eb20-host-proc-sys-kernel\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.388986 kubelet[2556]: I0913 00:09:21.387228 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f6c0174-3efc-498e-a67f-1cbe9448eb20-hostproc\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.388986 kubelet[2556]: I0913 00:09:21.387241 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f6c0174-3efc-498e-a67f-1cbe9448eb20-cni-path\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.388986 kubelet[2556]: I0913 00:09:21.387254 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2f6c0174-3efc-498e-a67f-1cbe9448eb20-etc-cni-netd\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.388986 kubelet[2556]: I0913 00:09:21.387266 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f6c0174-3efc-498e-a67f-1cbe9448eb20-xtables-lock\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.389133 kubelet[2556]: I0913 00:09:21.387277 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f6c0174-3efc-498e-a67f-1cbe9448eb20-host-proc-sys-net\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.389133 kubelet[2556]: I0913 00:09:21.387289 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f6c0174-3efc-498e-a67f-1cbe9448eb20-hubble-tls\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.389133 kubelet[2556]: I0913 00:09:21.387303 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f6c0174-3efc-498e-a67f-1cbe9448eb20-cilium-run\") pod \"cilium-t694b\" (UID: \"2f6c0174-3efc-498e-a67f-1cbe9448eb20\") " pod="kube-system/cilium-t694b" Sep 13 00:09:21.485218 sshd[4278]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:21.489033 systemd[1]: sshd@20-157.180.30.217:22-147.75.109.163:33692.service: Deactivated successfully. Sep 13 00:09:21.492724 systemd-logind[1471]: Session 21 logged out. Waiting for processes to exit. 
Sep 13 00:09:21.502589 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:09:21.506078 systemd-logind[1471]: Removed session 21. Sep 13 00:09:21.670331 containerd[1490]: time="2025-09-13T00:09:21.670208587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t694b,Uid:2f6c0174-3efc-498e-a67f-1cbe9448eb20,Namespace:kube-system,Attempt:0,}" Sep 13 00:09:21.692934 containerd[1490]: time="2025-09-13T00:09:21.692591572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:21.692934 containerd[1490]: time="2025-09-13T00:09:21.692637588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:21.692934 containerd[1490]: time="2025-09-13T00:09:21.692661583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:21.693558 containerd[1490]: time="2025-09-13T00:09:21.692895281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:21.698389 systemd[1]: Started sshd@21-157.180.30.217:22-147.75.109.163:34550.service - OpenSSH per-connection server daemon (147.75.109.163:34550). Sep 13 00:09:21.711826 systemd[1]: Started cri-containerd-721fa00845abd7faa01810362e104878d4b3f571eb6ef55f26b2d01ec332b08f.scope - libcontainer container 721fa00845abd7faa01810362e104878d4b3f571eb6ef55f26b2d01ec332b08f. 
Sep 13 00:09:21.734623 containerd[1490]: time="2025-09-13T00:09:21.734561600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t694b,Uid:2f6c0174-3efc-498e-a67f-1cbe9448eb20,Namespace:kube-system,Attempt:0,} returns sandbox id \"721fa00845abd7faa01810362e104878d4b3f571eb6ef55f26b2d01ec332b08f\"" Sep 13 00:09:21.747951 containerd[1490]: time="2025-09-13T00:09:21.747827127Z" level=info msg="CreateContainer within sandbox \"721fa00845abd7faa01810362e104878d4b3f571eb6ef55f26b2d01ec332b08f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:09:21.757353 containerd[1490]: time="2025-09-13T00:09:21.757315992Z" level=info msg="CreateContainer within sandbox \"721fa00845abd7faa01810362e104878d4b3f571eb6ef55f26b2d01ec332b08f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e9a796409e577da578408ac127db7a481e6aa69f894eebf2b7b46e137fe82c1\"" Sep 13 00:09:21.758047 containerd[1490]: time="2025-09-13T00:09:21.757928202Z" level=info msg="StartContainer for \"6e9a796409e577da578408ac127db7a481e6aa69f894eebf2b7b46e137fe82c1\"" Sep 13 00:09:21.785818 systemd[1]: Started cri-containerd-6e9a796409e577da578408ac127db7a481e6aa69f894eebf2b7b46e137fe82c1.scope - libcontainer container 6e9a796409e577da578408ac127db7a481e6aa69f894eebf2b7b46e137fe82c1. Sep 13 00:09:21.810928 containerd[1490]: time="2025-09-13T00:09:21.810756117Z" level=info msg="StartContainer for \"6e9a796409e577da578408ac127db7a481e6aa69f894eebf2b7b46e137fe82c1\" returns successfully" Sep 13 00:09:21.824578 systemd[1]: cri-containerd-6e9a796409e577da578408ac127db7a481e6aa69f894eebf2b7b46e137fe82c1.scope: Deactivated successfully. 
Sep 13 00:09:21.860576 containerd[1490]: time="2025-09-13T00:09:21.860503387Z" level=info msg="shim disconnected" id=6e9a796409e577da578408ac127db7a481e6aa69f894eebf2b7b46e137fe82c1 namespace=k8s.io Sep 13 00:09:21.860888 containerd[1490]: time="2025-09-13T00:09:21.860816565Z" level=warning msg="cleaning up after shim disconnected" id=6e9a796409e577da578408ac127db7a481e6aa69f894eebf2b7b46e137fe82c1 namespace=k8s.io Sep 13 00:09:21.860888 containerd[1490]: time="2025-09-13T00:09:21.860831003Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:22.769684 sshd[4312]: Accepted publickey for core from 147.75.109.163 port 34550 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:22.771349 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:22.776025 systemd-logind[1471]: New session 22 of user core. Sep 13 00:09:22.783891 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:09:22.846610 containerd[1490]: time="2025-09-13T00:09:22.846074795Z" level=info msg="CreateContainer within sandbox \"721fa00845abd7faa01810362e104878d4b3f571eb6ef55f26b2d01ec332b08f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:09:22.857179 containerd[1490]: time="2025-09-13T00:09:22.856034391Z" level=info msg="CreateContainer within sandbox \"721fa00845abd7faa01810362e104878d4b3f571eb6ef55f26b2d01ec332b08f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8042874f78d2b24959f1a4401f43d40783211394c633f724011d3f8e6275930f\"" Sep 13 00:09:22.859698 containerd[1490]: time="2025-09-13T00:09:22.857865270Z" level=info msg="StartContainer for \"8042874f78d2b24959f1a4401f43d40783211394c633f724011d3f8e6275930f\"" Sep 13 00:09:22.911941 systemd[1]: Started cri-containerd-8042874f78d2b24959f1a4401f43d40783211394c633f724011d3f8e6275930f.scope - libcontainer container 
8042874f78d2b24959f1a4401f43d40783211394c633f724011d3f8e6275930f. Sep 13 00:09:22.949515 containerd[1490]: time="2025-09-13T00:09:22.949459647Z" level=info msg="StartContainer for \"8042874f78d2b24959f1a4401f43d40783211394c633f724011d3f8e6275930f\" returns successfully" Sep 13 00:09:22.958364 systemd[1]: cri-containerd-8042874f78d2b24959f1a4401f43d40783211394c633f724011d3f8e6275930f.scope: Deactivated successfully. Sep 13 00:09:22.977683 containerd[1490]: time="2025-09-13T00:09:22.977587448Z" level=info msg="shim disconnected" id=8042874f78d2b24959f1a4401f43d40783211394c633f724011d3f8e6275930f namespace=k8s.io Sep 13 00:09:22.977683 containerd[1490]: time="2025-09-13T00:09:22.977661878Z" level=warning msg="cleaning up after shim disconnected" id=8042874f78d2b24959f1a4401f43d40783211394c633f724011d3f8e6275930f namespace=k8s.io Sep 13 00:09:22.977845 containerd[1490]: time="2025-09-13T00:09:22.977696613Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:22.986611 containerd[1490]: time="2025-09-13T00:09:22.986577684Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:09:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:09:23.495271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8042874f78d2b24959f1a4401f43d40783211394c633f724011d3f8e6275930f-rootfs.mount: Deactivated successfully. Sep 13 00:09:23.511814 sshd[4312]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:23.515188 systemd[1]: sshd@21-157.180.30.217:22-147.75.109.163:34550.service: Deactivated successfully. Sep 13 00:09:23.517240 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:09:23.518742 systemd-logind[1471]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:09:23.519810 systemd-logind[1471]: Removed session 22. 
Sep 13 00:09:23.660618 systemd[1]: Started sshd@22-157.180.30.217:22-147.75.109.163:34560.service - OpenSSH per-connection server daemon (147.75.109.163:34560). Sep 13 00:09:23.849895 containerd[1490]: time="2025-09-13T00:09:23.849828249Z" level=info msg="CreateContainer within sandbox \"721fa00845abd7faa01810362e104878d4b3f571eb6ef55f26b2d01ec332b08f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:09:23.855854 kubelet[2556]: I0913 00:09:23.855352 2556 setters.go:618] "Node became not ready" node="ci-4081-3-5-n-c4418ce715" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:09:23Z","lastTransitionTime":"2025-09-13T00:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:09:23.872566 containerd[1490]: time="2025-09-13T00:09:23.872485604Z" level=info msg="CreateContainer within sandbox \"721fa00845abd7faa01810362e104878d4b3f571eb6ef55f26b2d01ec332b08f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"77845e783c28bc2e77d66634a2df371f77e54e853fc7543b66a2fcc932071195\"" Sep 13 00:09:23.874411 containerd[1490]: time="2025-09-13T00:09:23.874374492Z" level=info msg="StartContainer for \"77845e783c28bc2e77d66634a2df371f77e54e853fc7543b66a2fcc932071195\"" Sep 13 00:09:23.927799 systemd[1]: Started cri-containerd-77845e783c28bc2e77d66634a2df371f77e54e853fc7543b66a2fcc932071195.scope - libcontainer container 77845e783c28bc2e77d66634a2df371f77e54e853fc7543b66a2fcc932071195. Sep 13 00:09:23.952318 containerd[1490]: time="2025-09-13T00:09:23.951997539Z" level=info msg="StartContainer for \"77845e783c28bc2e77d66634a2df371f77e54e853fc7543b66a2fcc932071195\" returns successfully" Sep 13 00:09:23.958829 systemd[1]: cri-containerd-77845e783c28bc2e77d66634a2df371f77e54e853fc7543b66a2fcc932071195.scope: Deactivated successfully. 
Sep 13 00:09:23.984119 containerd[1490]: time="2025-09-13T00:09:23.984033241Z" level=info msg="shim disconnected" id=77845e783c28bc2e77d66634a2df371f77e54e853fc7543b66a2fcc932071195 namespace=k8s.io Sep 13 00:09:23.984119 containerd[1490]: time="2025-09-13T00:09:23.984098333Z" level=warning msg="cleaning up after shim disconnected" id=77845e783c28bc2e77d66634a2df371f77e54e853fc7543b66a2fcc932071195 namespace=k8s.io Sep 13 00:09:23.984119 containerd[1490]: time="2025-09-13T00:09:23.984112991Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:23.996215 containerd[1490]: time="2025-09-13T00:09:23.996166629Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:09:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:09:24.405666 kubelet[2556]: E0913 00:09:24.404965 2556 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:09:24.495214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77845e783c28bc2e77d66634a2df371f77e54e853fc7543b66a2fcc932071195-rootfs.mount: Deactivated successfully. Sep 13 00:09:24.625503 sshd[4463]: Accepted publickey for core from 147.75.109.163 port 34560 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:24.626922 sshd[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:24.632190 systemd-logind[1471]: New session 23 of user core. Sep 13 00:09:24.642855 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 13 00:09:24.851704 containerd[1490]: time="2025-09-13T00:09:24.851461911Z" level=info msg="CreateContainer within sandbox \"721fa00845abd7faa01810362e104878d4b3f571eb6ef55f26b2d01ec332b08f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:09:24.874904 containerd[1490]: time="2025-09-13T00:09:24.874853016Z" level=info msg="CreateContainer within sandbox \"721fa00845abd7faa01810362e104878d4b3f571eb6ef55f26b2d01ec332b08f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"459c1f11819a129588b1854296d118e84aadc63eac8247a8baadcb5ad53fdd53\"" Sep 13 00:09:24.876325 containerd[1490]: time="2025-09-13T00:09:24.875598370Z" level=info msg="StartContainer for \"459c1f11819a129588b1854296d118e84aadc63eac8247a8baadcb5ad53fdd53\"" Sep 13 00:09:24.901864 systemd[1]: Started cri-containerd-459c1f11819a129588b1854296d118e84aadc63eac8247a8baadcb5ad53fdd53.scope - libcontainer container 459c1f11819a129588b1854296d118e84aadc63eac8247a8baadcb5ad53fdd53. Sep 13 00:09:24.928688 systemd[1]: cri-containerd-459c1f11819a129588b1854296d118e84aadc63eac8247a8baadcb5ad53fdd53.scope: Deactivated successfully. 
Sep 13 00:09:24.931572 containerd[1490]: time="2025-09-13T00:09:24.930815399Z" level=info msg="StartContainer for \"459c1f11819a129588b1854296d118e84aadc63eac8247a8baadcb5ad53fdd53\" returns successfully" Sep 13 00:09:24.952062 containerd[1490]: time="2025-09-13T00:09:24.952010096Z" level=info msg="shim disconnected" id=459c1f11819a129588b1854296d118e84aadc63eac8247a8baadcb5ad53fdd53 namespace=k8s.io Sep 13 00:09:24.952308 containerd[1490]: time="2025-09-13T00:09:24.952277560Z" level=warning msg="cleaning up after shim disconnected" id=459c1f11819a129588b1854296d118e84aadc63eac8247a8baadcb5ad53fdd53 namespace=k8s.io Sep 13 00:09:24.952308 containerd[1490]: time="2025-09-13T00:09:24.952297908Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:25.495393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-459c1f11819a129588b1854296d118e84aadc63eac8247a8baadcb5ad53fdd53-rootfs.mount: Deactivated successfully. Sep 13 00:09:25.855443 containerd[1490]: time="2025-09-13T00:09:25.855094877Z" level=info msg="CreateContainer within sandbox \"721fa00845abd7faa01810362e104878d4b3f571eb6ef55f26b2d01ec332b08f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:09:25.867442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705132623.mount: Deactivated successfully. 
Sep 13 00:09:25.872879 containerd[1490]: time="2025-09-13T00:09:25.872841247Z" level=info msg="CreateContainer within sandbox \"721fa00845abd7faa01810362e104878d4b3f571eb6ef55f26b2d01ec332b08f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dee53b33ab0d624b6d7d85adeeef054fb666157108eec2ced83da941296673bb\""
Sep 13 00:09:25.873345 containerd[1490]: time="2025-09-13T00:09:25.873310550Z" level=info msg="StartContainer for \"dee53b33ab0d624b6d7d85adeeef054fb666157108eec2ced83da941296673bb\""
Sep 13 00:09:25.903804 systemd[1]: Started cri-containerd-dee53b33ab0d624b6d7d85adeeef054fb666157108eec2ced83da941296673bb.scope - libcontainer container dee53b33ab0d624b6d7d85adeeef054fb666157108eec2ced83da941296673bb.
Sep 13 00:09:25.929648 containerd[1490]: time="2025-09-13T00:09:25.929183090Z" level=info msg="StartContainer for \"dee53b33ab0d624b6d7d85adeeef054fb666157108eec2ced83da941296673bb\" returns successfully"
Sep 13 00:09:26.367702 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 13 00:09:26.883062 kubelet[2556]: I0913 00:09:26.882917 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t694b" podStartSLOduration=5.882893742 podStartE2EDuration="5.882893742s" podCreationTimestamp="2025-09-13 00:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:09:26.882581425 +0000 UTC m=+207.675955832" watchObservedRunningTime="2025-09-13 00:09:26.882893742 +0000 UTC m=+207.676268150"
Sep 13 00:09:29.365757 systemd-networkd[1395]: lxc_health: Link UP
Sep 13 00:09:29.375924 systemd-networkd[1395]: lxc_health: Gained carrier
Sep 13 00:09:30.722746 systemd-networkd[1395]: lxc_health: Gained IPv6LL
Sep 13 00:09:31.813309 systemd[1]: run-containerd-runc-k8s.io-dee53b33ab0d624b6d7d85adeeef054fb666157108eec2ced83da941296673bb-runc.y9ifNs.mount: Deactivated successfully.
Sep 13 00:09:33.926085 systemd[1]: run-containerd-runc-k8s.io-dee53b33ab0d624b6d7d85adeeef054fb666157108eec2ced83da941296673bb-runc.ZgljE9.mount: Deactivated successfully.
Sep 13 00:09:36.352339 sshd[4463]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:36.356295 systemd[1]: sshd@22-157.180.30.217:22-147.75.109.163:34560.service: Deactivated successfully.
Sep 13 00:09:36.359266 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:09:36.361142 systemd-logind[1471]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:09:36.363372 systemd-logind[1471]: Removed session 23.
Sep 13 00:09:51.526464 systemd[1]: cri-containerd-c0d9e26904c3681ff9331c34b62d373f0753b31b1a8392456b0e0390d4cfbe26.scope: Deactivated successfully.
Sep 13 00:09:51.526850 systemd[1]: cri-containerd-c0d9e26904c3681ff9331c34b62d373f0753b31b1a8392456b0e0390d4cfbe26.scope: Consumed 4.284s CPU time, 23.9M memory peak, 0B memory swap peak.
Sep 13 00:09:51.562267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0d9e26904c3681ff9331c34b62d373f0753b31b1a8392456b0e0390d4cfbe26-rootfs.mount: Deactivated successfully.
Sep 13 00:09:51.576725 containerd[1490]: time="2025-09-13T00:09:51.576580776Z" level=info msg="shim disconnected" id=c0d9e26904c3681ff9331c34b62d373f0753b31b1a8392456b0e0390d4cfbe26 namespace=k8s.io
Sep 13 00:09:51.576725 containerd[1490]: time="2025-09-13T00:09:51.576653664Z" level=warning msg="cleaning up after shim disconnected" id=c0d9e26904c3681ff9331c34b62d373f0753b31b1a8392456b0e0390d4cfbe26 namespace=k8s.io
Sep 13 00:09:51.576725 containerd[1490]: time="2025-09-13T00:09:51.576695843Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:09:51.788739 kubelet[2556]: E0913 00:09:51.788553 2556 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58048->10.0.0.2:2379: read: connection timed out"
Sep 13 00:09:51.790634 systemd[1]: cri-containerd-fc93833dc1dd6f8bca1802f6f64a33ce23ba8d136cc2c1d6eeb6622e7286c99c.scope: Deactivated successfully.
Sep 13 00:09:51.791030 systemd[1]: cri-containerd-fc93833dc1dd6f8bca1802f6f64a33ce23ba8d136cc2c1d6eeb6622e7286c99c.scope: Consumed 2.573s CPU time, 24.8M memory peak, 0B memory swap peak.
Sep 13 00:09:51.812214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc93833dc1dd6f8bca1802f6f64a33ce23ba8d136cc2c1d6eeb6622e7286c99c-rootfs.mount: Deactivated successfully.
Sep 13 00:09:51.817838 containerd[1490]: time="2025-09-13T00:09:51.817654380Z" level=info msg="shim disconnected" id=fc93833dc1dd6f8bca1802f6f64a33ce23ba8d136cc2c1d6eeb6622e7286c99c namespace=k8s.io
Sep 13 00:09:51.817838 containerd[1490]: time="2025-09-13T00:09:51.817726687Z" level=warning msg="cleaning up after shim disconnected" id=fc93833dc1dd6f8bca1802f6f64a33ce23ba8d136cc2c1d6eeb6622e7286c99c namespace=k8s.io
Sep 13 00:09:51.817838 containerd[1490]: time="2025-09-13T00:09:51.817735803Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:09:51.914468 kubelet[2556]: I0913 00:09:51.914434 2556 scope.go:117] "RemoveContainer" containerID="c0d9e26904c3681ff9331c34b62d373f0753b31b1a8392456b0e0390d4cfbe26"
Sep 13 00:09:51.917100 kubelet[2556]: I0913 00:09:51.916920 2556 scope.go:117] "RemoveContainer" containerID="fc93833dc1dd6f8bca1802f6f64a33ce23ba8d136cc2c1d6eeb6622e7286c99c"
Sep 13 00:09:51.917530 containerd[1490]: time="2025-09-13T00:09:51.917320353Z" level=info msg="CreateContainer within sandbox \"446eea339663913f074994a8a27b9c483edb9c72042b31314afe7724b04bdeb9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 13 00:09:51.918544 containerd[1490]: time="2025-09-13T00:09:51.918510653Z" level=info msg="CreateContainer within sandbox \"66ebba08c5cbd44052654ae30cc484e1221f1a4c4f2ba2f006510677c36af9ef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 13 00:09:51.932309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1646211127.mount: Deactivated successfully.
Sep 13 00:09:51.934327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007573714.mount: Deactivated successfully.
Sep 13 00:09:51.937586 containerd[1490]: time="2025-09-13T00:09:51.937553848Z" level=info msg="CreateContainer within sandbox \"446eea339663913f074994a8a27b9c483edb9c72042b31314afe7724b04bdeb9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"63d1f9b2e9fa9cb9733185b2451408191201057cd06b47fc6fa33072fff4b47a\""
Sep 13 00:09:51.938709 containerd[1490]: time="2025-09-13T00:09:51.937947529Z" level=info msg="StartContainer for \"63d1f9b2e9fa9cb9733185b2451408191201057cd06b47fc6fa33072fff4b47a\""
Sep 13 00:09:51.961979 containerd[1490]: time="2025-09-13T00:09:51.961931933Z" level=info msg="CreateContainer within sandbox \"66ebba08c5cbd44052654ae30cc484e1221f1a4c4f2ba2f006510677c36af9ef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"bbb6819e2f822cdd7847207c362950f3375d125a5390d0c9e634491f5b112c5a\""
Sep 13 00:09:51.962985 containerd[1490]: time="2025-09-13T00:09:51.962951943Z" level=info msg="StartContainer for \"bbb6819e2f822cdd7847207c362950f3375d125a5390d0c9e634491f5b112c5a\""
Sep 13 00:09:51.976055 systemd[1]: Started cri-containerd-63d1f9b2e9fa9cb9733185b2451408191201057cd06b47fc6fa33072fff4b47a.scope - libcontainer container 63d1f9b2e9fa9cb9733185b2451408191201057cd06b47fc6fa33072fff4b47a.
Sep 13 00:09:52.008116 systemd[1]: Started cri-containerd-bbb6819e2f822cdd7847207c362950f3375d125a5390d0c9e634491f5b112c5a.scope - libcontainer container bbb6819e2f822cdd7847207c362950f3375d125a5390d0c9e634491f5b112c5a.
Sep 13 00:09:52.048439 containerd[1490]: time="2025-09-13T00:09:52.048383847Z" level=info msg="StartContainer for \"63d1f9b2e9fa9cb9733185b2451408191201057cd06b47fc6fa33072fff4b47a\" returns successfully"
Sep 13 00:09:52.072206 containerd[1490]: time="2025-09-13T00:09:52.072089204Z" level=info msg="StartContainer for \"bbb6819e2f822cdd7847207c362950f3375d125a5390d0c9e634491f5b112c5a\" returns successfully"
Sep 13 00:09:55.753741 kubelet[2556]: E0913 00:09:55.750472 2556 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57850->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-c4418ce715.1864af03c3cfce48 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-c4418ce715,UID:b5979239783d81365616443ba1ac8384,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-c4418ce715,},FirstTimestamp:2025-09-13 00:09:45.286274632 +0000 UTC m=+226.079649009,LastTimestamp:2025-09-13 00:09:45.286274632 +0000 UTC m=+226.079649009,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-c4418ce715,}"