Mar 14 00:14:31.971887 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:14:31.971905 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:14:31.971914 kernel: BIOS-provided physical RAM map:
Mar 14 00:14:31.971919 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 14 00:14:31.971923 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Mar 14 00:14:31.971927 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Mar 14 00:14:31.971933 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Mar 14 00:14:31.971937 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Mar 14 00:14:31.971941 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Mar 14 00:14:31.971946 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Mar 14 00:14:31.971957 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Mar 14 00:14:31.971962 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Mar 14 00:14:31.971966 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Mar 14 00:14:31.971972 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Mar 14 00:14:31.971977 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 14 00:14:31.971983 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 14 00:14:31.971988 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 14 00:14:31.971993 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Mar 14 00:14:31.971997 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 14 00:14:31.972002 kernel: NX (Execute Disable) protection: active
Mar 14 00:14:31.972006 kernel: APIC: Static calls initialized
Mar 14 00:14:31.972011 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 14 00:14:31.972016 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e845198
Mar 14 00:14:31.972020 kernel: efi: Remove mem137: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 14 00:14:31.972025 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 14 00:14:31.972030 kernel: SMBIOS 3.0.0 present.
Mar 14 00:14:31.972034 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Mar 14 00:14:31.972041 kernel: Hypervisor detected: KVM
Mar 14 00:14:31.972046 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:14:31.972051 kernel: kvm-clock: using sched offset of 12407972498 cycles
Mar 14 00:14:31.972056 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:14:31.972061 kernel: tsc: Detected 2396.398 MHz processor
Mar 14 00:14:31.972065 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:14:31.972070 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:14:31.972075 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Mar 14 00:14:31.972080 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 14 00:14:31.972087 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:14:31.972092 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Mar 14 00:14:31.972097 kernel: Using GB pages for direct mapping
Mar 14 00:14:31.972104 kernel: Secure boot disabled
Mar 14 00:14:31.972109 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:14:31.972114 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Mar 14 00:14:31.972119 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 14 00:14:31.972127 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:14:31.972131 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:14:31.972136 kernel: ACPI: FACS 0x000000007FBDD000 000040
Mar 14 00:14:31.972141 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:14:31.972146 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:14:31.972151 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:14:31.972156 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:14:31.972164 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 14 00:14:31.972169 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Mar 14 00:14:31.972173 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Mar 14 00:14:31.972178 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Mar 14 00:14:31.972183 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Mar 14 00:14:31.972188 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Mar 14 00:14:31.972193 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Mar 14 00:14:31.972198 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Mar 14 00:14:31.972203 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Mar 14 00:14:31.972211 kernel: No NUMA configuration found
Mar 14 00:14:31.972216 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Mar 14 00:14:31.972221 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Mar 14 00:14:31.972226 kernel: Zone ranges:
Mar 14 00:14:31.972231 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:14:31.972236 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 14 00:14:31.972240 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Mar 14 00:14:31.972245 kernel: Movable zone start for each node
Mar 14 00:14:31.972250 kernel: Early memory node ranges
Mar 14 00:14:31.972255 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 14 00:14:31.972262 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Mar 14 00:14:31.972267 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Mar 14 00:14:31.972272 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Mar 14 00:14:31.972277 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Mar 14 00:14:31.972282 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Mar 14 00:14:31.972287 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:14:31.972292 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 14 00:14:31.972297 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 14 00:14:31.972304 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 14 00:14:31.972312 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Mar 14 00:14:31.972316 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 14 00:14:31.972345 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 14 00:14:31.972361 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:14:31.972367 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 14 00:14:31.972372 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 14 00:14:31.972377 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:14:31.972382 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:14:31.972387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:14:31.972395 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:14:31.972400 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:14:31.972405 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 14 00:14:31.972410 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 14 00:14:31.972415 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 00:14:31.972420 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Mar 14 00:14:31.972425 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:14:31.972430 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:14:31.972435 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 14 00:14:31.972442 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 14 00:14:31.972447 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 14 00:14:31.972452 kernel: pcpu-alloc: [0] 0 1
Mar 14 00:14:31.972457 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 14 00:14:31.972463 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:14:31.972468 kernel: random: crng init done
Mar 14 00:14:31.972473 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:14:31.972477 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:14:31.972485 kernel: Fallback order for Node 0: 0
Mar 14 00:14:31.972490 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Mar 14 00:14:31.972495 kernel: Policy zone: Normal
Mar 14 00:14:31.972500 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:14:31.972505 kernel: software IO TLB: area num 2.
Mar 14 00:14:31.972510 kernel: Memory: 3826704K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 264260K reserved, 0K cma-reserved)
Mar 14 00:14:31.972515 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:14:31.972519 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:14:31.972524 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:14:31.972531 kernel: Dynamic Preempt: voluntary
Mar 14 00:14:31.972540 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:14:31.972545 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:14:31.972551 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:14:31.972562 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:14:31.972570 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:14:31.972575 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:14:31.972580 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:14:31.972585 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:14:31.972590 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 14 00:14:31.972595 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:14:31.972600 kernel: Console: colour dummy device 80x25
Mar 14 00:14:31.972608 kernel: printk: console [tty0] enabled
Mar 14 00:14:31.972613 kernel: printk: console [ttyS0] enabled
Mar 14 00:14:31.972618 kernel: ACPI: Core revision 20230628
Mar 14 00:14:31.972624 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 14 00:14:31.972629 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:14:31.972636 kernel: x2apic enabled
Mar 14 00:14:31.972642 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:14:31.972647 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 14 00:14:31.972652 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 14 00:14:31.972657 kernel: Calibrating delay loop (skipped) preset value.. 4792.79 BogoMIPS (lpj=2396398)
Mar 14 00:14:31.972662 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 14 00:14:31.972668 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 14 00:14:31.972673 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 14 00:14:31.972678 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:14:31.972686 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Mar 14 00:14:31.972691 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 14 00:14:31.972696 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 14 00:14:31.972701 kernel: active return thunk: srso_alias_return_thunk
Mar 14 00:14:31.972706 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Mar 14 00:14:31.972711 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 14 00:14:31.972717 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:14:31.972722 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:14:31.972727 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:14:31.972735 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:14:31.972740 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 14 00:14:31.972745 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 14 00:14:31.972750 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 14 00:14:31.972755 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 14 00:14:31.972760 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:14:31.972766 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 14 00:14:31.972771 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 14 00:14:31.972776 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 14 00:14:31.972783 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Mar 14 00:14:31.972789 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Mar 14 00:14:31.972794 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:14:31.972799 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:14:31.972804 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:14:31.972810 kernel: landlock: Up and running.
Mar 14 00:14:31.972815 kernel: SELinux: Initializing.
Mar 14 00:14:31.972820 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:14:31.972825 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:14:31.972833 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Mar 14 00:14:31.972838 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:14:31.972843 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:14:31.972849 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:14:31.972854 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 14 00:14:31.972859 kernel: ... version:                0
Mar 14 00:14:31.972864 kernel: ... bit width:              48
Mar 14 00:14:31.972869 kernel: ... generic registers:      6
Mar 14 00:14:31.972874 kernel: ... value mask:             0000ffffffffffff
Mar 14 00:14:31.972882 kernel: ... max period:             00007fffffffffff
Mar 14 00:14:31.972887 kernel: ... fixed-purpose events:   0
Mar 14 00:14:31.972892 kernel: ... event mask:             000000000000003f
Mar 14 00:14:31.972897 kernel: signal: max sigframe size: 3376
Mar 14 00:14:31.972903 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:14:31.972908 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:14:31.972913 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:14:31.972918 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:14:31.972923 kernel: .... node #0, CPUs: #1
Mar 14 00:14:31.972931 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:14:31.972936 kernel: smpboot: Max logical packages: 1
Mar 14 00:14:31.972941 kernel: smpboot: Total of 2 processors activated (9585.59 BogoMIPS)
Mar 14 00:14:31.972946 kernel: devtmpfs: initialized
Mar 14 00:14:31.972951 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:14:31.972957 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Mar 14 00:14:31.972962 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:14:31.972967 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:14:31.972972 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:14:31.972982 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:14:31.972993 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:14:31.973004 kernel: audit: type=2000 audit(1773447271.044:1): state=initialized audit_enabled=0 res=1
Mar 14 00:14:31.973013 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:14:31.973021 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:14:31.973028 kernel: cpuidle: using governor menu
Mar 14 00:14:31.973042 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:14:31.973051 kernel: dca service started, version 1.12.1
Mar 14 00:14:31.973059 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Mar 14 00:14:31.973071 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:14:31.973077 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:14:31.973083 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:14:31.973088 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:14:31.973094 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:14:31.973099 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:14:31.973104 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:14:31.973109 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:14:31.973114 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:14:31.973122 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:14:31.973127 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:14:31.973132 kernel: ACPI: Interpreter enabled
Mar 14 00:14:31.973138 kernel: ACPI: PM: (supports S0 S5)
Mar 14 00:14:31.973143 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:14:31.973148 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:14:31.973153 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 00:14:31.973158 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 14 00:14:31.973317 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:14:31.973459 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:14:31.973558 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 14 00:14:31.973564 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 14 00:14:31.973665 kernel: PCI host bridge to bus 0000:00
Mar 14 00:14:31.973755 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:14:31.973843 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:14:31.973946 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:14:31.974033 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Mar 14 00:14:31.974121 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 14 00:14:31.974208 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Mar 14 00:14:31.974331 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:14:31.974465 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 14 00:14:31.974570 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Mar 14 00:14:31.974667 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Mar 14 00:14:31.974762 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Mar 14 00:14:31.974859 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Mar 14 00:14:31.974956 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 14 00:14:31.975052 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 14 00:14:31.975154 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 00:14:31.975252 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:14:31.975389 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Mar 14 00:14:31.975490 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 14 00:14:31.975594 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Mar 14 00:14:31.975690 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 14 00:14:31.975793 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Mar 14 00:14:31.975893 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 14 00:14:31.975995 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Mar 14 00:14:31.976091 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 14 00:14:31.976193 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Mar 14 00:14:31.977375 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 14 00:14:31.977521 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Mar 14 00:14:31.977655 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 14 00:14:31.977791 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Mar 14 00:14:31.977891 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 14 00:14:31.977993 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Mar 14 00:14:31.978089 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:14:31.978190 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Mar 14 00:14:31.978284 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 14 00:14:31.979487 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 14 00:14:31.979591 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 14 00:14:31.979692 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Mar 14 00:14:31.979794 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Mar 14 00:14:31.979889 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 14 00:14:31.979996 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Mar 14 00:14:31.980101 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:14:31.980200 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Mar 14 00:14:31.980300 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Mar 14 00:14:31.980418 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:14:31.980514 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 14 00:14:31.980609 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Mar 14 00:14:31.980716 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Mar 14 00:14:31.980817 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 14 00:14:31.980912 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Mar 14 00:14:31.981034 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 14 00:14:31.981155 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Mar 14 00:14:31.981258 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 14 00:14:31.983405 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Mar 14 00:14:31.983515 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Mar 14 00:14:31.983619 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 14 00:14:31.983716 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Mar 14 00:14:31.983826 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Mar 14 00:14:31.983927 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 14 00:14:31.984054 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Mar 14 00:14:31.984162 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 14 00:14:31.984273 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Mar 14 00:14:31.986432 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 14 00:14:31.986545 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Mar 14 00:14:31.986646 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Mar 14 00:14:31.986742 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 14 00:14:31.986854 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Mar 14 00:14:31.986985 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Mar 14 00:14:31.987090 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 14 00:14:31.987209 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Mar 14 00:14:31.987313 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Mar 14 00:14:31.990667 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 14 00:14:31.990773 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Mar 14 00:14:31.990781 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Mar 14 00:14:31.990890 kernel: acpiphp: Slot [0] registered
Mar 14 00:14:31.990993 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:14:31.991098 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Mar 14 00:14:31.991196 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Mar 14 00:14:31.991293 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:14:31.991413 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 14 00:14:31.991510 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Mar 14 00:14:31.991520 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Mar 14 00:14:31.991636 kernel: acpiphp: Slot [0-2] registered
Mar 14 00:14:31.991733 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 14 00:14:31.991827 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Mar 14 00:14:31.991837 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Mar 14 00:14:31.991934 kernel: acpiphp: Slot [0-3] registered
Mar 14 00:14:31.992029 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 14 00:14:31.992123 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Mar 14 00:14:31.992129 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Mar 14 00:14:31.992134 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:14:31.992140 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:14:31.992145 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:14:31.992153 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:14:31.992158 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 14 00:14:31.992164 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 14 00:14:31.992169 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 14 00:14:31.992174 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 14 00:14:31.992180 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 14 00:14:31.992185 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 14 00:14:31.992190 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 14 00:14:31.992196 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 14 00:14:31.992203 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 14 00:14:31.992209 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 14 00:14:31.992214 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 14 00:14:31.992219 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 14 00:14:31.992225 kernel: iommu: Default domain type: Translated
Mar 14 00:14:31.992230 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:14:31.992236 kernel: efivars: Registered efivars operations
Mar 14 00:14:31.992241 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:14:31.992247 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:14:31.992255 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Mar 14 00:14:31.992260 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Mar 14 00:14:31.992265 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Mar 14 00:14:31.992411 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Mar 14 00:14:31.992511 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 14 00:14:31.992605 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 14 00:14:31.992613 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 00:14:31.992619 kernel: vgaarb: loaded
Mar 14 00:14:31.992624 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 14 00:14:31.992633 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 14 00:14:31.992638 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:14:31.992644 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:14:31.992649 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:14:31.992754 kernel: pnp: PnP ACPI init
Mar 14 00:14:31.992761 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 14 00:14:31.992766 kernel: pnp: PnP ACPI: found 5 devices
Mar 14 00:14:31.992772 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:14:31.992793 kernel: NET: Registered PF_INET protocol family
Mar 14 00:14:31.992801 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:14:31.992806 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:14:31.992812 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:14:31.992817 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:14:31.992823 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:14:31.992828 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:14:31.992834 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:14:31.992840 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:14:31.992847 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:14:31.992953 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:14:31.993085 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Mar 14 00:14:31.993187 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Mar 14 00:14:31.993284 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 14 00:14:31.993409 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 14 00:14:31.993506 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 14 00:14:31.993607 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Mar 14 00:14:31.993705 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Mar 14 00:14:31.993805 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Mar 14 00:14:31.993902 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Mar 14 00:14:31.994001 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 14 00:14:31.994096 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Mar 14 00:14:31.994192 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Mar 14 00:14:31.994287 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 14 00:14:31.996453 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Mar 14 00:14:31.996559 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 14 00:14:31.996657 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Mar 14 00:14:31.996754 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Mar 14 00:14:31.996878 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 14 00:14:31.997006 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Mar 14 00:14:31.997105 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 14 00:14:31.997201 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Mar 14 00:14:31.997296 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Mar 14 00:14:31.997415 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 14 00:14:31.997516 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Mar 14 00:14:31.997616 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Mar 14 00:14:31.997715 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Mar 14 00:14:31.997810 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 14 00:14:31.997905 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Mar 14 00:14:31.998000 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Mar 14 00:14:31.998094 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Mar 14 00:14:31.998189 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 14 00:14:31.998283 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Mar 14 00:14:32.002521 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Mar 14 00:14:32.002629 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Mar 14 00:14:32.002727 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 14 00:14:32.002829 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Mar 14 00:14:32.002925 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Mar 14 00:14:32.003039 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Mar 14 00:14:32.003039 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar
14 00:14:32.003131 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 14 00:14:32.003223 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 14 00:14:32.003312 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Mar 14 00:14:32.003422 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Mar 14 00:14:32.003510 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Mar 14 00:14:32.003610 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Mar 14 00:14:32.003704 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Mar 14 00:14:32.003808 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Mar 14 00:14:32.003912 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Mar 14 00:14:32.004005 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Mar 14 00:14:32.004103 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Mar 14 00:14:32.004202 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Mar 14 00:14:32.004294 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Mar 14 00:14:32.004437 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Mar 14 00:14:32.004535 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Mar 14 00:14:32.004635 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Mar 14 00:14:32.004727 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Mar 14 00:14:32.004819 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Mar 14 00:14:32.004917 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Mar 14 00:14:32.005010 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Mar 14 00:14:32.005102 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Mar 14 00:14:32.005203 kernel: pci_bus 0000:09: resource 0 
[io 0x3000-0x3fff] Mar 14 00:14:32.005295 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Mar 14 00:14:32.005420 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Mar 14 00:14:32.005430 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 14 00:14:32.005437 kernel: PCI: CLS 0 bytes, default 64 Mar 14 00:14:32.005443 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 14 00:14:32.005449 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Mar 14 00:14:32.005458 kernel: Initialise system trusted keyrings Mar 14 00:14:32.005463 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 14 00:14:32.005469 kernel: Key type asymmetric registered Mar 14 00:14:32.005474 kernel: Asymmetric key parser 'x509' registered Mar 14 00:14:32.005480 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 14 00:14:32.005486 kernel: io scheduler mq-deadline registered Mar 14 00:14:32.005491 kernel: io scheduler kyber registered Mar 14 00:14:32.005497 kernel: io scheduler bfq registered Mar 14 00:14:32.007964 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 14 00:14:32.008072 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 14 00:14:32.008175 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 14 00:14:32.008271 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 14 00:14:32.008433 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 14 00:14:32.008531 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 14 00:14:32.008626 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 14 00:14:32.008721 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 14 00:14:32.008815 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 14 00:14:32.008910 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 14 00:14:32.009010 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 14 
00:14:32.009104 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 14 00:14:32.009199 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 14 00:14:32.009293 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 14 00:14:32.009563 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 14 00:14:32.009663 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 14 00:14:32.009675 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 14 00:14:32.009770 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Mar 14 00:14:32.009867 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Mar 14 00:14:32.009874 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 14 00:14:32.009880 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Mar 14 00:14:32.009885 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 14 00:14:32.009891 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 14 00:14:32.009897 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 14 00:14:32.009903 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 14 00:14:32.009909 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 14 00:14:32.009914 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 14 00:14:32.010020 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 14 00:14:32.010111 kernel: rtc_cmos 00:03: registered as rtc0 Mar 14 00:14:32.010201 kernel: rtc_cmos 00:03: setting system clock to 2026-03-14T00:14:31 UTC (1773447271) Mar 14 00:14:32.010292 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 14 00:14:32.010298 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 14 00:14:32.010304 kernel: efifb: probing for efifb Mar 14 00:14:32.010310 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Mar 14 00:14:32.010318 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Mar 14 
00:14:32.011346 kernel: efifb: scrolling: redraw Mar 14 00:14:32.011360 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 14 00:14:32.011366 kernel: Console: switching to colour frame buffer device 160x50 Mar 14 00:14:32.011372 kernel: fb0: EFI VGA frame buffer device Mar 14 00:14:32.011378 kernel: pstore: Using crash dump compression: deflate Mar 14 00:14:32.011383 kernel: pstore: Registered efi_pstore as persistent store backend Mar 14 00:14:32.011389 kernel: NET: Registered PF_INET6 protocol family Mar 14 00:14:32.011395 kernel: Segment Routing with IPv6 Mar 14 00:14:32.011400 kernel: In-situ OAM (IOAM) with IPv6 Mar 14 00:14:32.011410 kernel: NET: Registered PF_PACKET protocol family Mar 14 00:14:32.011415 kernel: Key type dns_resolver registered Mar 14 00:14:32.011421 kernel: IPI shorthand broadcast: enabled Mar 14 00:14:32.011427 kernel: sched_clock: Marking stable (1421011404, 218931204)->(1697502618, -57560010) Mar 14 00:14:32.011432 kernel: registered taskstats version 1 Mar 14 00:14:32.011438 kernel: Loading compiled-in X.509 certificates Mar 14 00:14:32.011443 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec' Mar 14 00:14:32.011449 kernel: Key type .fscrypt registered Mar 14 00:14:32.011454 kernel: Key type fscrypt-provisioning registered Mar 14 00:14:32.011463 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 14 00:14:32.011468 kernel: ima: Allocated hash algorithm: sha1 Mar 14 00:14:32.011474 kernel: ima: No architecture policies found Mar 14 00:14:32.011480 kernel: clk: Disabling unused clocks Mar 14 00:14:32.011485 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 14 00:14:32.011491 kernel: Write protecting the kernel read-only data: 36864k Mar 14 00:14:32.011497 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 14 00:14:32.011502 kernel: Run /init as init process Mar 14 00:14:32.011510 kernel: with arguments: Mar 14 00:14:32.011517 kernel: /init Mar 14 00:14:32.011522 kernel: with environment: Mar 14 00:14:32.011528 kernel: HOME=/ Mar 14 00:14:32.011534 kernel: TERM=linux Mar 14 00:14:32.011541 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:14:32.011549 systemd[1]: Detected virtualization kvm. Mar 14 00:14:32.011556 systemd[1]: Detected architecture x86-64. Mar 14 00:14:32.011564 systemd[1]: Running in initrd. Mar 14 00:14:32.011570 systemd[1]: No hostname configured, using default hostname. Mar 14 00:14:32.011575 systemd[1]: Hostname set to . Mar 14 00:14:32.011582 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:14:32.011588 systemd[1]: Queued start job for default target initrd.target. Mar 14 00:14:32.011594 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:14:32.011602 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:14:32.011609 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 14 00:14:32.011617 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:14:32.011623 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 14 00:14:32.011630 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 14 00:14:32.011637 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 14 00:14:32.011643 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 14 00:14:32.011649 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:14:32.011657 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:14:32.011663 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:14:32.011669 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:14:32.011675 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:14:32.011681 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:14:32.011687 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:14:32.011693 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:14:32.011699 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 14 00:14:32.011705 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 14 00:14:32.011713 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:14:32.011719 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:14:32.011725 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:14:32.011731 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 14 00:14:32.011736 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 14 00:14:32.011742 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:14:32.011748 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 14 00:14:32.011754 systemd[1]: Starting systemd-fsck-usr.service... Mar 14 00:14:32.011760 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:14:32.011768 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:14:32.011793 systemd-journald[187]: Collecting audit messages is disabled. Mar 14 00:14:32.011808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:14:32.011814 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 14 00:14:32.011823 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:14:32.011829 systemd[1]: Finished systemd-fsck-usr.service. Mar 14 00:14:32.011835 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:14:32.011842 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:14:32.011850 systemd-journald[187]: Journal started Mar 14 00:14:32.011863 systemd-journald[187]: Runtime Journal (/run/log/journal/d543067c4661475b97c4cc6ab451550b) is 8.0M, max 76.3M, 68.3M free. Mar 14 00:14:32.016258 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:14:32.005568 systemd-modules-load[189]: Inserted module 'overlay' Mar 14 00:14:32.020347 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:14:32.020659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:14:32.030039 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 14 00:14:32.035931 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 14 00:14:32.034680 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:14:32.044568 kernel: Bridge firewalling registered Mar 14 00:14:32.041021 systemd-modules-load[189]: Inserted module 'br_netfilter' Mar 14 00:14:32.045270 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:14:32.047697 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:14:32.049042 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:14:32.054464 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 14 00:14:32.055475 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:14:32.057968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:14:32.063784 dracut-cmdline[218]: dracut-dracut-053 Mar 14 00:14:32.066154 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:14:32.073213 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:14:32.080808 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:14:32.102049 systemd-resolved[249]: Positive Trust Anchors: Mar 14 00:14:32.102060 systemd-resolved[249]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:14:32.102083 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:14:32.106059 systemd-resolved[249]: Defaulting to hostname 'linux'. Mar 14 00:14:32.107025 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:14:32.107920 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:14:32.127379 kernel: SCSI subsystem initialized Mar 14 00:14:32.136344 kernel: Loading iSCSI transport class v2.0-870. Mar 14 00:14:32.145344 kernel: iscsi: registered transport (tcp) Mar 14 00:14:32.162229 kernel: iscsi: registered transport (qla4xxx) Mar 14 00:14:32.162276 kernel: QLogic iSCSI HBA Driver Mar 14 00:14:32.196919 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 14 00:14:32.203451 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 14 00:14:32.223552 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 14 00:14:32.223597 kernel: device-mapper: uevent: version 1.0.3 Mar 14 00:14:32.227209 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 14 00:14:32.264363 kernel: raid6: avx512x4 gen() 41358 MB/s Mar 14 00:14:32.282367 kernel: raid6: avx512x2 gen() 44904 MB/s Mar 14 00:14:32.300405 kernel: raid6: avx512x1 gen() 41793 MB/s Mar 14 00:14:32.318407 kernel: raid6: avx2x4 gen() 40026 MB/s Mar 14 00:14:32.336398 kernel: raid6: avx2x2 gen() 45767 MB/s Mar 14 00:14:32.355481 kernel: raid6: avx2x1 gen() 36297 MB/s Mar 14 00:14:32.355530 kernel: raid6: using algorithm avx2x2 gen() 45767 MB/s Mar 14 00:14:32.375565 kernel: raid6: .... xor() 36173 MB/s, rmw enabled Mar 14 00:14:32.375634 kernel: raid6: using avx512x2 recovery algorithm Mar 14 00:14:32.392408 kernel: xor: automatically using best checksumming function avx Mar 14 00:14:32.523391 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 14 00:14:32.537862 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:14:32.543638 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:14:32.554483 systemd-udevd[409]: Using default interface naming scheme 'v255'. Mar 14 00:14:32.558371 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:14:32.567605 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 14 00:14:32.578759 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Mar 14 00:14:32.615203 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:14:32.620440 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:14:32.693078 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:14:32.702620 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
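[Editor's note] The raid6 lines above show the kernel timing each gen() implementation and keeping the fastest one (avx2x2 at 45767 MB/s). The selection can be illustrated with a minimal sketch; the throughput figures are copied from the log, while the function itself is illustrative, not kernel code:

```python
# Sketch of the selection the "raid6: using algorithm ..." line reports:
# benchmark every implementation, keep the one with the highest MB/s.
def pick_fastest(results: dict[str, int]) -> tuple[str, int]:
    name = max(results, key=results.get)
    return name, results[name]

# gen() figures taken verbatim from the log above (MB/s)
benchmarks = {
    "avx512x4": 41358, "avx512x2": 44904, "avx512x1": 41793,
    "avx2x4": 40026, "avx2x2": 45767, "avx2x1": 36297,
}

print(pick_fastest(benchmarks))  # prints ('avx2x2', 45767)
```

This matches the log's "raid6: using algorithm avx2x2 gen() 45767 MB/s" line; note the kernel still uses avx512x2 for recovery, which it benchmarks separately.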
Mar 14 00:14:32.733481 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 14 00:14:32.734295 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:14:32.734965 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:14:32.735630 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:14:32.742782 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 14 00:14:32.751775 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:14:32.789915 kernel: ACPI: bus type USB registered Mar 14 00:14:32.801339 kernel: usbcore: registered new interface driver usbfs Mar 14 00:14:32.804808 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:14:32.810879 kernel: usbcore: registered new interface driver hub Mar 14 00:14:32.810924 kernel: usbcore: registered new device driver usb Mar 14 00:14:32.810952 kernel: cryptd: max_cpu_qlen set to 1000 Mar 14 00:14:32.810984 kernel: libata version 3.00 loaded. Mar 14 00:14:32.804892 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:14:32.807386 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:14:32.813133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:14:32.813249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:14:32.813630 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:14:32.820338 kernel: ahci 0000:00:1f.2: version 3.0 Mar 14 00:14:32.820521 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 14 00:14:32.825075 kernel: scsi host0: Virtio SCSI HBA Mar 14 00:14:32.825282 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 14 00:14:32.832258 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 14 00:14:32.832597 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 14 00:14:32.837298 kernel: scsi host1: ahci Mar 14 00:14:32.838449 kernel: scsi host2: ahci Mar 14 00:14:32.842339 kernel: scsi host3: ahci Mar 14 00:14:32.847353 kernel: scsi host4: ahci Mar 14 00:14:32.848829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:14:32.857695 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 14 00:14:32.858088 kernel: scsi host5: ahci Mar 14 00:14:32.859596 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:14:32.865218 kernel: scsi host6: ahci Mar 14 00:14:32.865565 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48 Mar 14 00:14:32.865576 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48 Mar 14 00:14:32.870056 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48 Mar 14 00:14:32.870077 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48 Mar 14 00:14:32.872540 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48 Mar 14 00:14:32.876015 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48 Mar 14 00:14:32.876037 kernel: AVX2 version of gcm_enc/dec engaged. Mar 14 00:14:32.879109 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:14:32.882450 kernel: AES CTR mode by8 optimization enabled Mar 14 00:14:32.879221 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:14:32.879788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:14:32.879833 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 14 00:14:32.880161 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:14:32.886586 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:14:32.909930 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:14:32.916455 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:14:32.930802 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:14:33.186514 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 14 00:14:33.198912 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 14 00:14:33.198985 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 14 00:14:33.199008 kernel: ata1.00: applying bridge limits Mar 14 00:14:33.204398 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 14 00:14:33.204452 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 14 00:14:33.209392 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 14 00:14:33.214404 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 14 00:14:33.222535 kernel: ata1.00: configured for UDMA/100 Mar 14 00:14:33.229388 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 14 00:14:33.250530 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 14 00:14:33.254904 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 14 00:14:33.255251 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 14 00:14:33.275201 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 14 00:14:33.275571 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 14 00:14:33.280468 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 14 00:14:33.284207 kernel: hub 1-0:1.0: USB hub found Mar 14 00:14:33.284395 kernel: hub 1-0:1.0: 4 ports detected Mar 14 
00:14:33.288611 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 14 00:14:33.288784 kernel: hub 2-0:1.0: USB hub found Mar 14 00:14:33.290265 kernel: hub 2-0:1.0: 4 ports detected Mar 14 00:14:33.297977 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 14 00:14:33.298170 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 14 00:14:33.303120 kernel: sd 0:0:0:0: Power-on or device reset occurred Mar 14 00:14:33.307128 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Mar 14 00:14:33.307287 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 14 00:14:33.307440 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Mar 14 00:14:33.307564 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 14 00:14:33.311341 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Mar 14 00:14:33.315434 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 14 00:14:33.315452 kernel: GPT:17805311 != 160006143 Mar 14 00:14:33.318121 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 14 00:14:33.318137 kernel: GPT:17805311 != 160006143 Mar 14 00:14:33.319587 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 14 00:14:33.322418 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:14:33.324216 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 14 00:14:33.360361 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (470) Mar 14 00:14:33.361938 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 14 00:14:33.366478 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 14 00:14:33.368448 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (471) Mar 14 00:14:33.377669 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Mar 14 00:14:33.381172 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 14 00:14:33.382723 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 14 00:14:33.392528 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 14 00:14:33.398149 disk-uuid[591]: Primary Header is updated. Mar 14 00:14:33.398149 disk-uuid[591]: Secondary Entries is updated. Mar 14 00:14:33.398149 disk-uuid[591]: Secondary Header is updated. Mar 14 00:14:33.404372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:14:33.410338 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:14:33.523735 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 14 00:14:33.662400 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 14 00:14:33.667374 kernel: usbcore: registered new interface driver usbhid Mar 14 00:14:33.667422 kernel: usbhid: USB HID core driver Mar 14 00:14:33.676268 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 14 00:14:33.676313 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 14 00:14:34.416539 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:14:34.419781 disk-uuid[592]: The operation has completed successfully. Mar 14 00:14:34.500137 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 14 00:14:34.500361 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 14 00:14:34.513585 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 14 00:14:34.519611 sh[608]: Success Mar 14 00:14:34.534376 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 14 00:14:34.594098 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
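[Editor's note] The "GPT:17805311 != 160006143" warnings above are the classic symptom of a disk image built for a smaller disk being written to a larger one: GPT keeps its backup header at the disk's last LBA, so the stored backup location no longer matches disk size, and disk-uuid.service rewrites the headers (the "Primary Header is updated" lines). The arithmetic can be checked with a short sketch; the helper names are hypothetical, the numbers come from the log:

```python
# The backup GPT header must sit at the disk's last LBA.
def expected_backup_lba(total_sectors: int) -> int:
    return total_sectors - 1

# Conversely, a stale backup LBA reveals the sector count of the
# original image that wrote it.
def image_size_from_backup_lba(backup_lba: int) -> int:
    return backup_lba + 1

disk_sectors = 160_006_144   # "[sda] 160006144 512-byte logical blocks"
stale_backup = 17_805_311    # "GPT:17805311 != 160006143"

print(expected_backup_lba(disk_sectors))         # prints 160006143
print(image_size_from_backup_lba(stale_backup))  # prints 17805312
```

So the image was built for a ~17.8M-sector (~8.5 GB) disk and landed on a 76.3 GiB volume; after disk-uuid.service runs, the warnings stop (no GPT errors accompany the later "sda: sda1 sda2 ..." rescans).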
Mar 14 00:14:34.597406 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 14 00:14:34.598576 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 14 00:14:34.614088 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def Mar 14 00:14:34.614165 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:14:34.614191 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 14 00:14:34.618685 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 14 00:14:34.618721 kernel: BTRFS info (device dm-0): using free space tree Mar 14 00:14:34.630377 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 14 00:14:34.631688 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 14 00:14:34.633085 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 14 00:14:34.639518 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 14 00:14:34.643469 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 14 00:14:34.668237 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:14:34.668294 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:14:34.668303 kernel: BTRFS info (device sda6): using free space tree Mar 14 00:14:34.673403 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 14 00:14:34.673452 kernel: BTRFS info (device sda6): auto enabling async discard Mar 14 00:14:34.684276 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 14 00:14:34.688351 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:14:34.694465 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 14 00:14:34.701438 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 14 00:14:34.756631 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:14:34.767531 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:14:34.771091 ignition[734]: Ignition 2.19.0 Mar 14 00:14:34.771099 ignition[734]: Stage: fetch-offline Mar 14 00:14:34.771148 ignition[734]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:14:34.771160 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:14:34.773786 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:14:34.771404 ignition[734]: parsed url from cmdline: "" Mar 14 00:14:34.771409 ignition[734]: no config URL provided Mar 14 00:14:34.771416 ignition[734]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 00:14:34.771427 ignition[734]: no config at "/usr/lib/ignition/user.ign" Mar 14 00:14:34.771434 ignition[734]: failed to fetch config: resource requires networking Mar 14 00:14:34.771599 ignition[734]: Ignition finished successfully Mar 14 00:14:34.789252 systemd-networkd[793]: lo: Link UP Mar 14 00:14:34.789782 systemd-networkd[793]: lo: Gained carrier Mar 14 00:14:34.792182 systemd-networkd[793]: Enumeration completed Mar 14 00:14:34.792396 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:14:34.793041 systemd-networkd[793]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:14:34.793046 systemd-networkd[793]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:14:34.793654 systemd[1]: Reached target network.target - Network. Mar 14 00:14:34.795154 systemd-networkd[793]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 14 00:14:34.795158 systemd-networkd[793]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:14:34.795825 systemd-networkd[793]: eth0: Link UP Mar 14 00:14:34.795829 systemd-networkd[793]: eth0: Gained carrier Mar 14 00:14:34.795835 systemd-networkd[793]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:14:34.801535 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 14 00:14:34.803937 systemd-networkd[793]: eth1: Link UP Mar 14 00:14:34.803943 systemd-networkd[793]: eth1: Gained carrier Mar 14 00:14:34.803956 systemd-networkd[793]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:14:34.814916 ignition[796]: Ignition 2.19.0 Mar 14 00:14:34.814925 ignition[796]: Stage: fetch Mar 14 00:14:34.815106 ignition[796]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:14:34.815121 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:14:34.815204 ignition[796]: parsed url from cmdline: "" Mar 14 00:14:34.815208 ignition[796]: no config URL provided Mar 14 00:14:34.815212 ignition[796]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 00:14:34.815220 ignition[796]: no config at "/usr/lib/ignition/user.ign" Mar 14 00:14:34.815240 ignition[796]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Mar 14 00:14:34.815407 ignition[796]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 14 00:14:34.831380 systemd-networkd[793]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Mar 14 00:14:34.868382 systemd-networkd[793]: eth0: DHCPv4 address 204.168.141.220/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 14 00:14:35.015905 ignition[796]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Mar 14 00:14:35.023248 
ignition[796]: GET result: OK Mar 14 00:14:35.023447 ignition[796]: parsing config with SHA512: 008077fba1685454bc72a5c4d3466502a8dfc74bc77d0b2d012d87dd4ebd81bd9d0adaa14999910b444053836cbb07f2f9fd35b64a0415de97b225f9f13041f5 Mar 14 00:14:35.030229 unknown[796]: fetched base config from "system" Mar 14 00:14:35.030253 unknown[796]: fetched base config from "system" Mar 14 00:14:35.031027 ignition[796]: fetch: fetch complete Mar 14 00:14:35.030265 unknown[796]: fetched user config from "hetzner" Mar 14 00:14:35.031041 ignition[796]: fetch: fetch passed Mar 14 00:14:35.031132 ignition[796]: Ignition finished successfully Mar 14 00:14:35.037532 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 14 00:14:35.043604 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 14 00:14:35.080634 ignition[804]: Ignition 2.19.0 Mar 14 00:14:35.080659 ignition[804]: Stage: kargs Mar 14 00:14:35.080987 ignition[804]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:14:35.081010 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:14:35.083803 ignition[804]: kargs: kargs passed Mar 14 00:14:35.083889 ignition[804]: Ignition finished successfully Mar 14 00:14:35.087395 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 14 00:14:35.096596 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 14 00:14:35.129586 ignition[810]: Ignition 2.19.0 Mar 14 00:14:35.129606 ignition[810]: Stage: disks Mar 14 00:14:35.129913 ignition[810]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:14:35.129936 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:14:35.131299 ignition[810]: disks: disks passed Mar 14 00:14:35.132438 ignition[810]: Ignition finished successfully Mar 14 00:14:35.138141 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 14 00:14:35.139700 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Mar 14 00:14:35.140901 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 14 00:14:35.142168 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:14:35.144196 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:14:35.146231 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:14:35.158580 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 14 00:14:35.180018 systemd-fsck[819]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 14 00:14:35.184510 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 14 00:14:35.192507 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 14 00:14:35.276415 kernel: EXT4-fs (sda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none. Mar 14 00:14:35.276605 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 14 00:14:35.277502 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 14 00:14:35.283395 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 14 00:14:35.293737 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 14 00:14:35.297782 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 14 00:14:35.298588 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 14 00:14:35.298616 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:14:35.303766 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Mar 14 00:14:35.314378 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (827) Mar 14 00:14:35.317467 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:14:35.317502 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:14:35.317699 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 14 00:14:35.323719 kernel: BTRFS info (device sda6): using free space tree Mar 14 00:14:35.333623 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 14 00:14:35.333655 kernel: BTRFS info (device sda6): auto enabling async discard Mar 14 00:14:35.339752 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 14 00:14:35.363830 coreos-metadata[829]: Mar 14 00:14:35.363 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Mar 14 00:14:35.364970 coreos-metadata[829]: Mar 14 00:14:35.364 INFO Fetch successful Mar 14 00:14:35.366269 coreos-metadata[829]: Mar 14 00:14:35.366 INFO wrote hostname ci-4081-3-6-n-e97f419eb8 to /sysroot/etc/hostname Mar 14 00:14:35.368083 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 14 00:14:35.374391 initrd-setup-root[855]: cut: /sysroot/etc/passwd: No such file or directory Mar 14 00:14:35.379258 initrd-setup-root[862]: cut: /sysroot/etc/group: No such file or directory Mar 14 00:14:35.383385 initrd-setup-root[869]: cut: /sysroot/etc/shadow: No such file or directory Mar 14 00:14:35.387178 initrd-setup-root[876]: cut: /sysroot/etc/gshadow: No such file or directory Mar 14 00:14:35.471879 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 14 00:14:35.476427 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 14 00:14:35.479468 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Mar 14 00:14:35.488385 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:14:35.511089 ignition[943]: INFO : Ignition 2.19.0 Mar 14 00:14:35.511089 ignition[943]: INFO : Stage: mount Mar 14 00:14:35.511089 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:14:35.511089 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:14:35.511089 ignition[943]: INFO : mount: mount passed Mar 14 00:14:35.511089 ignition[943]: INFO : Ignition finished successfully Mar 14 00:14:35.510375 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 14 00:14:35.514769 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 14 00:14:35.524447 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 14 00:14:35.610345 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 14 00:14:35.615444 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 14 00:14:35.641410 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (955) Mar 14 00:14:35.650866 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:14:35.650934 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:14:35.656406 kernel: BTRFS info (device sda6): using free space tree Mar 14 00:14:35.665365 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 14 00:14:35.665415 kernel: BTRFS info (device sda6): auto enabling async discard Mar 14 00:14:35.669580 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 14 00:14:35.700560 ignition[971]: INFO : Ignition 2.19.0 Mar 14 00:14:35.700560 ignition[971]: INFO : Stage: files Mar 14 00:14:35.702696 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:14:35.702696 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:14:35.702696 ignition[971]: DEBUG : files: compiled without relabeling support, skipping Mar 14 00:14:35.702696 ignition[971]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 14 00:14:35.702696 ignition[971]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 14 00:14:35.706855 ignition[971]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 14 00:14:35.706855 ignition[971]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 14 00:14:35.706855 ignition[971]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 14 00:14:35.706303 unknown[971]: wrote ssh authorized keys file for user: core Mar 14 00:14:35.710396 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 14 00:14:35.710396 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 14 00:14:35.909533 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 14 00:14:36.142879 systemd-networkd[793]: eth0: Gained IPv6LL Mar 14 00:14:36.221283 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 14 00:14:36.221283 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 14 00:14:36.221283 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 14 00:14:36.654625 systemd-networkd[793]: eth1: Gained IPv6LL Mar 14 00:14:36.688300 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 14 00:14:37.095272 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 14 00:14:37.095272 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 14 00:14:37.097825 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 14 00:14:37.097825 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 14 00:14:37.097825 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 14 00:14:37.097825 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 14 00:14:37.097825 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 14 00:14:37.097825 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 14 00:14:37.097825 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 14 00:14:37.097825 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 14 00:14:37.097825 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 14 00:14:37.097825 ignition[971]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 14 00:14:37.097825 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 14 00:14:37.097825 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 14 00:14:37.097825 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 14 00:14:37.476086 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 14 00:14:37.725676 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 14 00:14:37.725676 ignition[971]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 14 00:14:37.730621 ignition[971]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 14 00:14:37.730621 ignition[971]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 14 00:14:37.730621 ignition[971]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 14 00:14:37.730621 ignition[971]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 14 00:14:37.730621 ignition[971]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 14 00:14:37.730621 ignition[971]: INFO : files: 
op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 14 00:14:37.730621 ignition[971]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 14 00:14:37.730621 ignition[971]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 14 00:14:37.730621 ignition[971]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Mar 14 00:14:37.730621 ignition[971]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 14 00:14:37.730621 ignition[971]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 14 00:14:37.730621 ignition[971]: INFO : files: files passed Mar 14 00:14:37.730621 ignition[971]: INFO : Ignition finished successfully Mar 14 00:14:37.730126 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 14 00:14:37.735507 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 14 00:14:37.741486 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 14 00:14:37.742633 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 14 00:14:37.742719 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 14 00:14:37.756302 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:14:37.756302 initrd-setup-root-after-ignition[1001]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:14:37.757946 initrd-setup-root-after-ignition[1005]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:14:37.759720 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Mar 14 00:14:37.760585 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 14 00:14:37.765445 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 14 00:14:37.784309 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 14 00:14:37.784442 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 14 00:14:37.785682 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 14 00:14:37.786503 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 14 00:14:37.787116 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 14 00:14:37.793506 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 14 00:14:37.804511 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:14:37.809483 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 14 00:14:37.817676 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:14:37.818168 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:14:37.818624 systemd[1]: Stopped target timers.target - Timer Units. Mar 14 00:14:37.819025 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 14 00:14:37.819111 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:14:37.819968 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 14 00:14:37.820731 systemd[1]: Stopped target basic.target - Basic System. Mar 14 00:14:37.821425 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 14 00:14:37.822141 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:14:37.822821 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Mar 14 00:14:37.823521 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 14 00:14:37.824189 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:14:37.824893 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 14 00:14:37.825592 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 14 00:14:37.826287 systemd[1]: Stopped target swap.target - Swaps. Mar 14 00:14:37.826996 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 14 00:14:37.827075 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:14:37.828097 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:14:37.828811 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:14:37.829484 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 14 00:14:37.829585 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:14:37.830229 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 14 00:14:37.830343 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 14 00:14:37.831246 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 14 00:14:37.831361 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:14:37.831977 systemd[1]: ignition-files.service: Deactivated successfully. Mar 14 00:14:37.832053 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 14 00:14:37.832665 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 14 00:14:37.832749 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 14 00:14:37.837507 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 14 00:14:37.838408 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Mar 14 00:14:37.839212 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 14 00:14:37.839654 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:14:37.842460 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 14 00:14:37.842900 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:14:37.846821 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 14 00:14:37.847315 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 14 00:14:37.850660 ignition[1025]: INFO : Ignition 2.19.0 Mar 14 00:14:37.850660 ignition[1025]: INFO : Stage: umount Mar 14 00:14:37.855632 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:14:37.855632 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:14:37.855632 ignition[1025]: INFO : umount: umount passed Mar 14 00:14:37.855632 ignition[1025]: INFO : Ignition finished successfully Mar 14 00:14:37.853655 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 14 00:14:37.853754 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 14 00:14:37.854547 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 14 00:14:37.854616 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 14 00:14:37.856990 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 14 00:14:37.857032 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 14 00:14:37.857487 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 14 00:14:37.857523 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 14 00:14:37.859414 systemd[1]: Stopped target network.target - Network. Mar 14 00:14:37.859984 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 14 00:14:37.860025 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 14 00:14:37.860429 systemd[1]: Stopped target paths.target - Path Units. Mar 14 00:14:37.860733 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 14 00:14:37.864394 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:14:37.864720 systemd[1]: Stopped target slices.target - Slice Units. Mar 14 00:14:37.865032 systemd[1]: Stopped target sockets.target - Socket Units. Mar 14 00:14:37.865378 systemd[1]: iscsid.socket: Deactivated successfully. Mar 14 00:14:37.865423 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:14:37.865739 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 14 00:14:37.865775 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:14:37.866074 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 14 00:14:37.866112 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 14 00:14:37.868408 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 14 00:14:37.868455 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 14 00:14:37.869139 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 14 00:14:37.869536 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 14 00:14:37.872822 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 14 00:14:37.882134 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 14 00:14:37.882248 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 14 00:14:37.884571 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 14 00:14:37.884898 systemd-networkd[793]: eth1: DHCPv6 lease lost Mar 14 00:14:37.885411 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Mar 14 00:14:37.889614 systemd-networkd[793]: eth0: DHCPv6 lease lost Mar 14 00:14:37.891284 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 14 00:14:37.891419 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 14 00:14:37.892158 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 14 00:14:37.892192 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:14:37.899427 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 14 00:14:37.899741 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 14 00:14:37.899788 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:14:37.900126 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:14:37.900161 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:14:37.900505 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 14 00:14:37.900539 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 14 00:14:37.900998 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:14:37.914943 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 14 00:14:37.915052 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 14 00:14:37.915973 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 14 00:14:37.916103 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:14:37.917020 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 14 00:14:37.917110 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 14 00:14:37.918786 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 14 00:14:37.918838 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Mar 14 00:14:37.919575 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 14 00:14:37.919606 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:14:37.920151 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 14 00:14:37.920187 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:14:37.921165 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 14 00:14:37.921201 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 14 00:14:37.922130 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:14:37.922169 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:14:37.923106 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 14 00:14:37.923141 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 14 00:14:37.934458 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 14 00:14:37.935239 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 14 00:14:37.935295 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:14:37.935834 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 14 00:14:37.935880 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:14:37.936258 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 14 00:14:37.936294 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:14:37.936705 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:14:37.936744 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:14:37.940743 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Mar 14 00:14:37.941251 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 14 00:14:37.941881 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 14 00:14:37.946442 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 14 00:14:37.952232 systemd[1]: Switching root. Mar 14 00:14:37.983364 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Mar 14 00:14:37.983436 systemd-journald[187]: Journal stopped Mar 14 00:14:39.031690 kernel: SELinux: policy capability network_peer_controls=1 Mar 14 00:14:39.031760 kernel: SELinux: policy capability open_perms=1 Mar 14 00:14:39.031776 kernel: SELinux: policy capability extended_socket_class=1 Mar 14 00:14:39.031789 kernel: SELinux: policy capability always_check_network=0 Mar 14 00:14:39.031798 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 14 00:14:39.031806 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 14 00:14:39.031817 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 14 00:14:39.031830 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 14 00:14:39.031838 kernel: audit: type=1403 audit(1773447278.127:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 14 00:14:39.031853 systemd[1]: Successfully loaded SELinux policy in 45.673ms. Mar 14 00:14:39.031873 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.096ms. Mar 14 00:14:39.031882 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:14:39.031892 systemd[1]: Detected virtualization kvm. Mar 14 00:14:39.031901 systemd[1]: Detected architecture x86-64. Mar 14 00:14:39.031911 systemd[1]: Detected first boot. 
Mar 14 00:14:39.031922 systemd[1]: Hostname set to .
Mar 14 00:14:39.031931 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:14:39.031940 zram_generator::config[1069]: No configuration found.
Mar 14 00:14:39.031950 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:14:39.031959 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:14:39.031968 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:14:39.031976 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:14:39.031986 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:14:39.031997 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:14:39.032006 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:14:39.032014 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:14:39.032025 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:14:39.032034 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:14:39.032043 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:14:39.032051 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:14:39.032060 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:14:39.032072 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:14:39.032081 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:14:39.032090 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:14:39.032098 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:14:39.032107 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:14:39.032116 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:14:39.032130 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:14:39.032143 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:14:39.032152 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:14:39.032163 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:14:39.032172 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:14:39.032181 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:14:39.032192 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:14:39.032202 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:14:39.032210 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:14:39.032222 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:14:39.032234 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:14:39.032243 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:14:39.032252 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:14:39.032260 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:14:39.032269 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:14:39.032278 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:14:39.032287 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:14:39.032295 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:14:39.032305 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:14:39.032316 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:14:39.034357 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:14:39.034369 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:14:39.034379 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:14:39.034389 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:14:39.034398 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:14:39.034406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:14:39.034416 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:14:39.034429 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:14:39.034438 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:14:39.034446 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:14:39.034456 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:14:39.034465 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:14:39.034474 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:14:39.034483 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:14:39.034494 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:14:39.034503 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:14:39.034512 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:14:39.034521 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:14:39.034529 kernel: fuse: init (API version 7.39)
Mar 14 00:14:39.034538 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:14:39.034547 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:14:39.034558 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:14:39.034567 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:14:39.034579 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:14:39.034588 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:14:39.034597 systemd[1]: Stopped verity-setup.service.
Mar 14 00:14:39.034605 kernel: ACPI: bus type drm_connector registered
Mar 14 00:14:39.034614 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:14:39.034623 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:14:39.034632 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:14:39.034643 kernel: loop: module loaded
Mar 14 00:14:39.034655 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:14:39.034664 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:14:39.034675 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:14:39.034684 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:14:39.034710 systemd-journald[1149]: Collecting audit messages is disabled.
Mar 14 00:14:39.034728 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:14:39.034738 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:14:39.034747 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:14:39.034756 systemd-journald[1149]: Journal started
Mar 14 00:14:39.034773 systemd-journald[1149]: Runtime Journal (/run/log/journal/d543067c4661475b97c4cc6ab451550b) is 8.0M, max 76.3M, 68.3M free.
Mar 14 00:14:38.694006 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:14:38.715402 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 14 00:14:38.715839 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:14:39.035441 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:14:39.040657 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:14:39.040488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:14:39.040733 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:14:39.041731 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:14:39.041924 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:14:39.042702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:14:39.042887 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:14:39.043610 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:14:39.043785 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:14:39.044447 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:14:39.044626 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:14:39.045260 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:14:39.045953 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:14:39.046728 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:14:39.058296 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:14:39.065150 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:14:39.070552 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:14:39.071039 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:14:39.071123 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:14:39.072452 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:14:39.077468 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:14:39.082698 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:14:39.083564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:14:39.085496 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:14:39.104992 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:14:39.105631 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:14:39.107618 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:14:39.107993 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:14:39.112825 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:14:39.116427 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:14:39.122524 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:14:39.125638 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:14:39.126212 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:14:39.127497 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:14:39.132416 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:14:39.135189 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:14:39.141841 systemd-journald[1149]: Time spent on flushing to /var/log/journal/d543067c4661475b97c4cc6ab451550b is 86.909ms for 1191 entries.
Mar 14 00:14:39.141841 systemd-journald[1149]: System Journal (/var/log/journal/d543067c4661475b97c4cc6ab451550b) is 8.0M, max 584.8M, 576.8M free.
Mar 14 00:14:39.257518 systemd-journald[1149]: Received client request to flush runtime journal.
Mar 14 00:14:39.257550 kernel: loop0: detected capacity change from 0 to 140768
Mar 14 00:14:39.257575 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:14:39.257585 kernel: loop1: detected capacity change from 0 to 8
Mar 14 00:14:39.257598 kernel: loop2: detected capacity change from 0 to 228704
Mar 14 00:14:39.143780 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:14:39.207557 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:14:39.219254 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:14:39.221024 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:14:39.246272 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Mar 14 00:14:39.246491 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Mar 14 00:14:39.260971 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:14:39.263597 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:14:39.273460 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:14:39.274937 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:14:39.287009 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:14:39.304471 kernel: loop3: detected capacity change from 0 to 142488
Mar 14 00:14:39.307943 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 14 00:14:39.324636 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:14:39.336511 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:14:39.358347 kernel: loop4: detected capacity change from 0 to 140768
Mar 14 00:14:39.356629 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Mar 14 00:14:39.356641 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Mar 14 00:14:39.363869 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:14:39.375362 kernel: loop5: detected capacity change from 0 to 8
Mar 14 00:14:39.380348 kernel: loop6: detected capacity change from 0 to 228704
Mar 14 00:14:39.398357 kernel: loop7: detected capacity change from 0 to 142488
Mar 14 00:14:39.421589 (sd-merge)[1217]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 14 00:14:39.422165 (sd-merge)[1217]: Merged extensions into '/usr'.
Mar 14 00:14:39.426970 systemd[1]: Reloading requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:14:39.427073 systemd[1]: Reloading...
Mar 14 00:14:39.496365 zram_generator::config[1244]: No configuration found.
Mar 14 00:14:39.605869 ldconfig[1184]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:14:39.621291 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:14:39.669546 systemd[1]: Reloading finished in 241 ms.
Mar 14 00:14:39.695298 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:14:39.696259 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:14:39.697052 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:14:39.708567 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:14:39.710061 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:14:39.712669 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:14:39.719420 systemd[1]: Reloading requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:14:39.719431 systemd[1]: Reloading...
Mar 14 00:14:39.730996 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:14:39.731276 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:14:39.732372 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:14:39.732648 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Mar 14 00:14:39.732754 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Mar 14 00:14:39.735781 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:14:39.735791 systemd-tmpfiles[1289]: Skipping /boot
Mar 14 00:14:39.745099 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:14:39.745155 systemd-udevd[1290]: Using default interface naming scheme 'v255'.
Mar 14 00:14:39.745160 systemd-tmpfiles[1289]: Skipping /boot
Mar 14 00:14:39.807395 zram_generator::config[1330]: No configuration found.
Mar 14 00:14:39.918371 kernel: mousedev: PS/2 mouse device common for all mice
Mar 14 00:14:39.921344 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 14 00:14:39.933349 kernel: ACPI: button: Power Button [PWRF]
Mar 14 00:14:39.956310 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:14:40.024500 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 14 00:14:40.024666 systemd[1]: Reloading finished in 304 ms.
Mar 14 00:14:40.026354 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 14 00:14:40.031480 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 14 00:14:40.052207 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Mar 14 00:14:40.052227 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1332)
Mar 14 00:14:40.052242 kernel: Console: switching to colour dummy device 80x25
Mar 14 00:14:40.052255 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Mar 14 00:14:40.066902 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 14 00:14:40.066917 kernel: [drm] features: -context_init
Mar 14 00:14:40.066927 kernel: [drm] number of scanouts: 1
Mar 14 00:14:40.066943 kernel: [drm] number of cap sets: 0
Mar 14 00:14:40.066952 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 14 00:14:40.093952 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 14 00:14:40.094080 kernel: EDAC MC: Ver: 3.0.0
Mar 14 00:14:40.094091 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 14 00:14:40.097389 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 14 00:14:40.117574 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:14:40.120196 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 14 00:14:40.120240 kernel: Console: switching to colour frame buffer device 160x50
Mar 14 00:14:40.125357 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 14 00:14:40.131951 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:14:40.153681 systemd[1]: Finished ensure-sysext.service.
Mar 14 00:14:40.156271 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 14 00:14:40.173070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 14 00:14:40.174258 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:14:40.179453 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:14:40.182454 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 00:14:40.184442 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:14:40.185482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:14:40.190681 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:14:40.193476 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:14:40.196305 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:14:40.197882 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:14:40.199847 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:14:40.201823 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:14:40.208464 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:14:40.213257 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:14:40.216123 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 14 00:14:40.220450 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:14:40.227490 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:14:40.228250 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:14:40.229685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:14:40.229835 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:14:40.230180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:14:40.230310 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:14:40.235089 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:14:40.247501 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 00:14:40.249120 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 14 00:14:40.250187 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:14:40.250380 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:14:40.253866 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:14:40.258478 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 14 00:14:40.268741 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:14:40.268938 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:14:40.275491 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 00:14:40.297922 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 14 00:14:40.304260 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:14:40.317592 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 00:14:40.319495 augenrules[1446]: No rules
Mar 14 00:14:40.325593 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:14:40.326290 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 14 00:14:40.338578 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:14:40.339930 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 14 00:14:40.349194 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 00:14:40.352394 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 00:14:40.381113 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 14 00:14:40.382975 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:14:40.390469 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 14 00:14:40.403962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:14:40.419343 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:14:40.446229 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 14 00:14:40.449848 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 14 00:14:40.451107 systemd[1]: Reached target time-set.target - System Time Set.
Mar 14 00:14:40.468572 systemd-resolved[1420]: Positive Trust Anchors:
Mar 14 00:14:40.468622 systemd-networkd[1419]: lo: Link UP
Mar 14 00:14:40.468628 systemd-networkd[1419]: lo: Gained carrier
Mar 14 00:14:40.469091 systemd-resolved[1420]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:14:40.469157 systemd-resolved[1420]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:14:40.472032 systemd-networkd[1419]: Enumeration completed
Mar 14 00:14:40.472434 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:14:40.473373 systemd-resolved[1420]: Using system hostname 'ci-4081-3-6-n-e97f419eb8'.
Mar 14 00:14:40.474217 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:14:40.474229 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:14:40.476019 systemd-networkd[1419]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:14:40.476077 systemd-networkd[1419]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:14:40.477232 systemd-networkd[1419]: eth0: Link UP
Mar 14 00:14:40.477372 systemd-networkd[1419]: eth0: Gained carrier
Mar 14 00:14:40.477435 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:14:40.480461 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 14 00:14:40.480619 systemd-networkd[1419]: eth1: Link UP
Mar 14 00:14:40.480670 systemd-networkd[1419]: eth1: Gained carrier
Mar 14 00:14:40.480707 systemd-networkd[1419]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:14:40.482347 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:14:40.482802 systemd[1]: Reached target network.target - Network.
Mar 14 00:14:40.483122 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:14:40.484962 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:14:40.485639 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 14 00:14:40.486380 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 14 00:14:40.486897 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 14 00:14:40.487301 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 14 00:14:40.488240 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 14 00:14:40.488605 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 00:14:40.488629 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:14:40.488938 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:14:40.490747 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 14 00:14:40.492621 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 14 00:14:40.498454 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 14 00:14:40.500478 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 14 00:14:40.501218 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:14:40.503828 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:14:40.504396 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:14:40.504432 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:14:40.506011 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 14 00:14:40.509385 systemd-networkd[1419]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 14 00:14:40.511212 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection.
Mar 14 00:14:40.516516 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 14 00:14:40.520816 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 14 00:14:40.523334 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 14 00:14:40.533578 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 14 00:14:40.535469 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 14 00:14:40.537302 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 14 00:14:40.540072 jq[1477]: false
Mar 14 00:14:40.541128 coreos-metadata[1473]: Mar 14 00:14:40.540 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 14 00:14:40.541431 coreos-metadata[1473]: Mar 14 00:14:40.541 INFO Failed to fetch: error sending request for url (http://169.254.169.254/hetzner/v1/metadata)
Mar 14 00:14:40.543437 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 14 00:14:40.547407 systemd-networkd[1419]: eth0: DHCPv4 address 204.168.141.220/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 14 00:14:40.547776 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection.
Mar 14 00:14:40.551647 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection.
Mar 14 00:14:40.555630 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 14 00:14:40.565616 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 14 00:14:40.568760 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 14 00:14:40.580793 dbus-daemon[1474]: [system] SELinux support is enabled
Mar 14 00:14:40.585309 extend-filesystems[1478]: Found loop4
Mar 14 00:14:40.585309 extend-filesystems[1478]: Found loop5
Mar 14 00:14:40.585309 extend-filesystems[1478]: Found loop6
Mar 14 00:14:40.585309 extend-filesystems[1478]: Found loop7
Mar 14 00:14:40.585309 extend-filesystems[1478]: Found sda
Mar 14 00:14:40.585309 extend-filesystems[1478]: Found sda1
Mar 14 00:14:40.585309 extend-filesystems[1478]: Found sda2
Mar 14 00:14:40.585309 extend-filesystems[1478]: Found sda3
Mar 14 00:14:40.585309 extend-filesystems[1478]: Found usr
Mar 14 00:14:40.585309 extend-filesystems[1478]: Found sda4
Mar 14 00:14:40.585309 extend-filesystems[1478]: Found sda6
Mar 14 00:14:40.585309 extend-filesystems[1478]: Found sda7
Mar 14 00:14:40.585309 extend-filesystems[1478]: Found sda9
Mar 14 00:14:40.585309 extend-filesystems[1478]: Checking size of /dev/sda9
Mar 14 00:14:40.583608 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 14 00:14:40.623297 extend-filesystems[1478]: Resized partition /dev/sda9
Mar 14 00:14:40.588247 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 14 00:14:40.623792 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024)
Mar 14 00:14:40.633297 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks
Mar 14 00:14:40.588829 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 14 00:14:40.590274 systemd[1]: Starting update-engine.service - Update Engine...
Mar 14 00:14:40.604527 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 14 00:14:40.618811 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 14 00:14:40.633792 jq[1488]: true
Mar 14 00:14:40.634886 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 14 00:14:40.635074 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 14 00:14:40.637686 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 14 00:14:40.637845 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 00:14:40.655296 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 14 00:14:40.655378 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 14 00:14:40.657889 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 14 00:14:40.657921 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 14 00:14:40.661445 systemd[1]: motdgen.service: Deactivated successfully.
Mar 14 00:14:40.662475 update_engine[1487]: I20260314 00:14:40.662379 1487 main.cc:92] Flatcar Update Engine starting
Mar 14 00:14:40.664933 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 14 00:14:40.677711 (ntainerd)[1515]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 14 00:14:40.680098 jq[1505]: true
Mar 14 00:14:40.682723 systemd[1]: Started update-engine.service - Update Engine.
Mar 14 00:14:40.687768 update_engine[1487]: I20260314 00:14:40.683907 1487 update_check_scheduler.cc:74] Next update check in 10m50s
Mar 14 00:14:40.691229 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 14 00:14:40.712435 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1338)
Mar 14 00:14:40.719830 tar[1503]: linux-amd64/LICENSE
Mar 14 00:14:40.719830 tar[1503]: linux-amd64/helm
Mar 14 00:14:40.742540 systemd-logind[1485]: New seat seat0.
Mar 14 00:14:40.745016 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 14 00:14:40.745043 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 14 00:14:40.745701 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 14 00:14:40.840445 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:14:40.844007 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 14 00:14:40.861626 systemd[1]: Starting sshkeys.service...
Mar 14 00:14:40.894040 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 14 00:14:40.902735 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 14 00:14:40.916019 locksmithd[1518]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 14 00:14:40.933152 sshd_keygen[1506]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 14 00:14:40.943354 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Mar 14 00:14:40.948960 coreos-metadata[1546]: Mar 14 00:14:40.948 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Mar 14 00:14:40.959613 coreos-metadata[1546]: Mar 14 00:14:40.950 INFO Fetch successful
Mar 14 00:14:40.960285 containerd[1515]: time="2026-03-14T00:14:40.960204560Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 14 00:14:40.962373 unknown[1546]: wrote ssh authorized keys file for user: core
Mar 14 00:14:40.962646 extend-filesystems[1494]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 14 00:14:40.962646 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 10
Mar 14 00:14:40.962646 extend-filesystems[1494]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Mar 14 00:14:40.963855 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 14 00:14:40.971537 extend-filesystems[1478]: Resized filesystem in /dev/sda9
Mar 14 00:14:40.971537 extend-filesystems[1478]: Found sr0
Mar 14 00:14:40.964045 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 14 00:14:40.980228 containerd[1515]: time="2026-03-14T00:14:40.977340728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:40.980228 containerd[1515]: time="2026-03-14T00:14:40.978821100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:14:40.980228 containerd[1515]: time="2026-03-14T00:14:40.978839538Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 14 00:14:40.980228 containerd[1515]: time="2026-03-14T00:14:40.978852006Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 14 00:14:40.980228 containerd[1515]: time="2026-03-14T00:14:40.978998126Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 14 00:14:40.980228 containerd[1515]: time="2026-03-14T00:14:40.979008271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:40.980228 containerd[1515]: time="2026-03-14T00:14:40.979061050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:14:40.980228 containerd[1515]: time="2026-03-14T00:14:40.979069673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:40.980228 containerd[1515]: time="2026-03-14T00:14:40.979214410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:14:40.980228 containerd[1515]: time="2026-03-14T00:14:40.979224636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:40.980228 containerd[1515]: time="2026-03-14T00:14:40.979233960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:14:40.981091 containerd[1515]: time="2026-03-14T00:14:40.979240730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:40.981091 containerd[1515]: time="2026-03-14T00:14:40.979301962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:40.981091 containerd[1515]: time="2026-03-14T00:14:40.979507791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:40.981091 containerd[1515]: time="2026-03-14T00:14:40.979592257Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:14:40.981091 containerd[1515]: time="2026-03-14T00:14:40.979606319Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 14 00:14:40.981091 containerd[1515]: time="2026-03-14T00:14:40.979680390Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 14 00:14:40.981091 containerd[1515]: time="2026-03-14T00:14:40.979716414Z" level=info msg="metadata content store policy set" policy=shared
Mar 14 00:14:40.984465 containerd[1515]: time="2026-03-14T00:14:40.984350319Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 14 00:14:40.984465 containerd[1515]: time="2026-03-14T00:14:40.984383498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 14 00:14:40.984465 containerd[1515]: time="2026-03-14T00:14:40.984395306Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 14 00:14:40.984465 containerd[1515]: time="2026-03-14T00:14:40.984411210Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 14 00:14:40.984465 containerd[1515]: time="2026-03-14T00:14:40.984423028Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 14 00:14:40.984545 containerd[1515]: time="2026-03-14T00:14:40.984538341Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 14 00:14:40.984786 containerd[1515]: time="2026-03-14T00:14:40.984698070Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 14 00:14:40.984802 containerd[1515]: time="2026-03-14T00:14:40.984786663Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 14 00:14:40.984802 containerd[1515]: time="2026-03-14T00:14:40.984797069Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 14 00:14:40.984827 containerd[1515]: time="2026-03-14T00:14:40.984806123Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 14 00:14:40.984827 containerd[1515]: time="2026-03-14T00:14:40.984816378Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 14 00:14:40.984851 containerd[1515]: time="2026-03-14T00:14:40.984826103Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 14 00:14:40.984851 containerd[1515]: time="2026-03-14T00:14:40.984834445Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 14 00:14:40.984851 containerd[1515]: time="2026-03-14T00:14:40.984843969Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 14 00:14:40.984891 containerd[1515]: time="2026-03-14T00:14:40.984853704Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 14 00:14:40.984891 containerd[1515]: time="2026-03-14T00:14:40.984863919Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 14 00:14:40.984891 containerd[1515]: time="2026-03-14T00:14:40.984872282Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 14 00:14:40.984891 containerd[1515]: time="2026-03-14T00:14:40.984880785Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 14 00:14:40.984940 containerd[1515]: time="2026-03-14T00:14:40.984905131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.984940 containerd[1515]: time="2026-03-14T00:14:40.984914756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.984940 containerd[1515]: time="2026-03-14T00:14:40.984923819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.984940 containerd[1515]: time="2026-03-14T00:14:40.984932663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.985023 containerd[1515]: time="2026-03-14T00:14:40.984941476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.985023 containerd[1515]: time="2026-03-14T00:14:40.984951160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.985023 containerd[1515]: time="2026-03-14T00:14:40.984959663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.985023 containerd[1515]: time="2026-03-14T00:14:40.984969528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.985023 containerd[1515]: time="2026-03-14T00:14:40.984982738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.985023 containerd[1515]: time="2026-03-14T00:14:40.984993023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.985023 containerd[1515]: time="2026-03-14T00:14:40.985001155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.985023 containerd[1515]: time="2026-03-14T00:14:40.985011911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.985023 containerd[1515]: time="2026-03-14T00:14:40.985024430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.985126 containerd[1515]: time="2026-03-14T00:14:40.985035006Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 14 00:14:40.985126 containerd[1515]: time="2026-03-14T00:14:40.985048236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.985126 containerd[1515]: time="2026-03-14T00:14:40.985056358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.985126 containerd[1515]: time="2026-03-14T00:14:40.985063509Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 14 00:14:40.985126 containerd[1515]: time="2026-03-14T00:14:40.985107335Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 14 00:14:40.985126 containerd[1515]: time="2026-03-14T00:14:40.985119092Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 14 00:14:40.985126 containerd[1515]: time="2026-03-14T00:14:40.985126483Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 14 00:14:40.985209 containerd[1515]: time="2026-03-14T00:14:40.985135086Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 14 00:14:40.985209 containerd[1515]: time="2026-03-14T00:14:40.985142267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.985209 containerd[1515]: time="2026-03-14T00:14:40.985150740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 14 00:14:40.985209 containerd[1515]: time="2026-03-14T00:14:40.985158121Z" level=info msg="NRI interface is disabled by configuration."
Mar 14 00:14:40.985209 containerd[1515]: time="2026-03-14T00:14:40.985165182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 14 00:14:40.987803 containerd[1515]: time="2026-03-14T00:14:40.987295268Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 14 00:14:40.987803 containerd[1515]: time="2026-03-14T00:14:40.987378373Z" level=info msg="Connect containerd service"
Mar 14 00:14:40.987803 containerd[1515]: time="2026-03-14T00:14:40.987412424Z" level=info msg="using legacy CRI server"
Mar 14 00:14:40.987803 containerd[1515]: time="2026-03-14T00:14:40.987418213Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 14 00:14:40.987803 containerd[1515]: time="2026-03-14T00:14:40.987498323Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 14 00:14:40.988166 containerd[1515]: time="2026-03-14T00:14:40.987997603Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:14:40.989067 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 14 00:14:40.992707 containerd[1515]: time="2026-03-14T00:14:40.992660090Z" level=info msg="Start subscribing containerd event"
Mar 14 00:14:40.992763 containerd[1515]: time="2026-03-14T00:14:40.992724006Z" level=info msg="Start recovering state"
Mar 14 00:14:40.993112 containerd[1515]: time="2026-03-14T00:14:40.992792629Z" level=info msg="Start event monitor"
Mar 14 00:14:40.993112 containerd[1515]: time="2026-03-14T00:14:40.992807762Z" level=info msg="Start snapshots syncer"
Mar 14 00:14:40.993112 containerd[1515]: time="2026-03-14T00:14:40.992815794Z" level=info msg="Start cni network conf syncer for default"
Mar 14 00:14:40.993112 containerd[1515]: time="2026-03-14T00:14:40.992824527Z" level=info msg="Start streaming server"
Mar 14 00:14:40.994514 containerd[1515]: time="2026-03-14T00:14:40.993480662Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 14 00:14:40.994514 containerd[1515]: time="2026-03-14T00:14:40.993529505Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 14 00:14:40.994514 containerd[1515]: time="2026-03-14T00:14:40.993567602Z" level=info msg="containerd successfully booted in 0.036535s"
Mar 14 00:14:40.995552 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 14 00:14:40.996343 systemd[1]: Started containerd.service - containerd container runtime.
Mar 14 00:14:40.998346 update-ssh-keys[1558]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:14:41.000184 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 14 00:14:41.008925 systemd[1]: Finished sshkeys.service.
Mar 14 00:14:41.021910 systemd[1]: issuegen.service: Deactivated successfully.
Mar 14 00:14:41.022148 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 14 00:14:41.033000 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 14 00:14:41.043398 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 14 00:14:41.051890 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 14 00:14:41.060716 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 14 00:14:41.062733 systemd[1]: Reached target getty.target - Login Prompts.
Mar 14 00:14:41.254052 tar[1503]: linux-amd64/README.md
Mar 14 00:14:41.264063 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 14 00:14:41.541460 coreos-metadata[1473]: Mar 14 00:14:41.541 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #2
Mar 14 00:14:41.542454 coreos-metadata[1473]: Mar 14 00:14:41.542 INFO Fetch successful
Mar 14 00:14:41.542566 coreos-metadata[1473]: Mar 14 00:14:41.542 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 14 00:14:41.542921 coreos-metadata[1473]: Mar 14 00:14:41.542 INFO Fetch successful
Mar 14 00:14:41.582513 systemd-networkd[1419]: eth0: Gained IPv6LL
Mar 14 00:14:41.583413 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection.
Mar 14 00:14:41.592044 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 14 00:14:41.596796 systemd[1]: Reached target network-online.target - Network is Online.
Mar 14 00:14:41.606632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:41.621477 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 14 00:14:41.666700 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 14 00:14:41.670070 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 14 00:14:41.670678 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 14 00:14:42.030887 systemd-networkd[1419]: eth1: Gained IPv6LL
Mar 14 00:14:42.031831 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection.
Mar 14 00:14:42.394625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:42.398118 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 14 00:14:42.402370 systemd[1]: Startup finished in 1.576s (kernel) + 6.382s (initrd) + 4.319s (userspace) = 12.279s.
Mar 14 00:14:42.403626 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:14:42.950353 kubelet[1603]: E0314 00:14:42.950218 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:14:42.955910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:14:42.956096 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:14:46.016930 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 14 00:14:46.028794 systemd[1]: Started sshd@0-204.168.141.220:22-68.220.241.50:55886.service - OpenSSH per-connection server daemon (68.220.241.50:55886).
Mar 14 00:14:46.766763 sshd[1616]: Accepted publickey for core from 68.220.241.50 port 55886 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:14:46.770377 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:46.784338 systemd-logind[1485]: New session 1 of user core.
Mar 14 00:14:46.787240 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 14 00:14:46.794044 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 14 00:14:46.820438 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 14 00:14:46.828857 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 14 00:14:46.841193 (systemd)[1620]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 14 00:14:46.958164 systemd[1620]: Queued start job for default target default.target.
Mar 14 00:14:46.962608 systemd[1620]: Created slice app.slice - User Application Slice.
Mar 14 00:14:46.962633 systemd[1620]: Reached target paths.target - Paths.
Mar 14 00:14:46.962644 systemd[1620]: Reached target timers.target - Timers.
Mar 14 00:14:46.963970 systemd[1620]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 14 00:14:46.975009 systemd[1620]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 14 00:14:46.975122 systemd[1620]: Reached target sockets.target - Sockets.
Mar 14 00:14:46.975139 systemd[1620]: Reached target basic.target - Basic System.
Mar 14 00:14:46.975172 systemd[1620]: Reached target default.target - Main User Target.
Mar 14 00:14:46.975203 systemd[1620]: Startup finished in 124ms.
Mar 14 00:14:46.975332 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 14 00:14:46.980424 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 14 00:14:47.513685 systemd[1]: Started sshd@1-204.168.141.220:22-68.220.241.50:55900.service - OpenSSH per-connection server daemon (68.220.241.50:55900).
Mar 14 00:14:48.273219 sshd[1631]: Accepted publickey for core from 68.220.241.50 port 55900 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:14:48.276840 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:48.285456 systemd-logind[1485]: New session 2 of user core.
Mar 14 00:14:48.296615 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 14 00:14:48.802080 sshd[1631]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:48.809867 systemd[1]: sshd@1-204.168.141.220:22-68.220.241.50:55900.service: Deactivated successfully.
Mar 14 00:14:48.814597 systemd[1]: session-2.scope: Deactivated successfully.
Mar 14 00:14:48.815820 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit.
Mar 14 00:14:48.817737 systemd-logind[1485]: Removed session 2.
Mar 14 00:14:48.938835 systemd[1]: Started sshd@2-204.168.141.220:22-68.220.241.50:55904.service - OpenSSH per-connection server daemon (68.220.241.50:55904).
Mar 14 00:14:49.696729 sshd[1638]: Accepted publickey for core from 68.220.241.50 port 55904 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:14:49.699933 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:49.708824 systemd-logind[1485]: New session 3 of user core.
Mar 14 00:14:49.719665 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 14 00:14:50.214799 sshd[1638]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:50.219492 systemd[1]: sshd@2-204.168.141.220:22-68.220.241.50:55904.service: Deactivated successfully.
Mar 14 00:14:50.223297 systemd[1]: session-3.scope: Deactivated successfully.
Mar 14 00:14:50.225564 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit.
Mar 14 00:14:50.227745 systemd-logind[1485]: Removed session 3.
Mar 14 00:14:50.353761 systemd[1]: Started sshd@3-204.168.141.220:22-68.220.241.50:55914.service - OpenSSH per-connection server daemon (68.220.241.50:55914).
Mar 14 00:14:51.099778 sshd[1645]: Accepted publickey for core from 68.220.241.50 port 55914 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:14:51.102591 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:51.110535 systemd-logind[1485]: New session 4 of user core.
Mar 14 00:14:51.117618 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:14:51.627262 sshd[1645]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:51.634262 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:14:51.635712 systemd[1]: sshd@3-204.168.141.220:22-68.220.241.50:55914.service: Deactivated successfully. Mar 14 00:14:51.639216 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:14:51.640925 systemd-logind[1485]: Removed session 4. Mar 14 00:14:51.763771 systemd[1]: Started sshd@4-204.168.141.220:22-68.220.241.50:59914.service - OpenSSH per-connection server daemon (68.220.241.50:59914). Mar 14 00:14:52.526500 sshd[1652]: Accepted publickey for core from 68.220.241.50 port 59914 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:14:52.527784 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:52.533816 systemd-logind[1485]: New session 5 of user core. Mar 14 00:14:52.542912 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:14:52.942275 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:14:52.942663 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:14:52.961275 sudo[1655]: pam_unix(sudo:session): session closed for user root Mar 14 00:14:53.083279 sshd[1652]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:53.088464 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 14 00:14:53.089346 systemd[1]: sshd@4-204.168.141.220:22-68.220.241.50:59914.service: Deactivated successfully. Mar 14 00:14:53.091055 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:14:53.091960 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:14:53.096691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 14 00:14:53.098319 systemd-logind[1485]: Removed session 5. Mar 14 00:14:53.223904 systemd[1]: Started sshd@5-204.168.141.220:22-68.220.241.50:59922.service - OpenSSH per-connection server daemon (68.220.241.50:59922). Mar 14 00:14:53.268352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:53.273812 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:53.307633 kubelet[1670]: E0314 00:14:53.307462 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:53.313764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:53.314005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:53.963479 sshd[1663]: Accepted publickey for core from 68.220.241.50 port 59922 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:14:53.966276 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:53.975424 systemd-logind[1485]: New session 6 of user core. Mar 14 00:14:53.985566 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 14 00:14:54.377090 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:14:54.377881 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:14:54.385176 sudo[1680]: pam_unix(sudo:session): session closed for user root Mar 14 00:14:54.398091 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:14:54.399013 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:14:54.424848 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:14:54.441146 auditctl[1683]: No rules Mar 14 00:14:54.443505 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:14:54.443963 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:14:54.450813 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:14:54.519590 augenrules[1701]: No rules Mar 14 00:14:54.522204 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:14:54.523834 sudo[1679]: pam_unix(sudo:session): session closed for user root Mar 14 00:14:54.645531 sshd[1663]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:54.655164 systemd[1]: sshd@5-204.168.141.220:22-68.220.241.50:59922.service: Deactivated successfully. Mar 14 00:14:54.659479 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:14:54.660809 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:14:54.663285 systemd-logind[1485]: Removed session 6. Mar 14 00:14:54.784791 systemd[1]: Started sshd@6-204.168.141.220:22-68.220.241.50:59930.service - OpenSSH per-connection server daemon (68.220.241.50:59930). 
Mar 14 00:14:55.537932 sshd[1709]: Accepted publickey for core from 68.220.241.50 port 59930 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:14:55.540721 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:55.549413 systemd-logind[1485]: New session 7 of user core. Mar 14 00:14:55.555560 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:14:55.949855 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:14:55.950576 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:14:56.257508 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 14 00:14:56.268042 (dockerd)[1728]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:14:56.512872 dockerd[1728]: time="2026-03-14T00:14:56.512719482Z" level=info msg="Starting up" Mar 14 00:14:56.569500 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1518167895-merged.mount: Deactivated successfully. Mar 14 00:14:56.598874 dockerd[1728]: time="2026-03-14T00:14:56.598836549Z" level=info msg="Loading containers: start." Mar 14 00:14:56.691359 kernel: Initializing XFRM netlink socket Mar 14 00:14:56.718146 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Mar 14 00:14:56.761458 systemd-networkd[1419]: docker0: Link UP Mar 14 00:14:56.773684 dockerd[1728]: time="2026-03-14T00:14:56.773553540Z" level=info msg="Loading containers: done." 
Mar 14 00:14:56.791876 dockerd[1728]: time="2026-03-14T00:14:56.791821738Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:14:56.792019 dockerd[1728]: time="2026-03-14T00:14:56.791913105Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:14:56.792019 dockerd[1728]: time="2026-03-14T00:14:56.791997221Z" level=info msg="Daemon has completed initialization" Mar 14 00:14:56.818401 dockerd[1728]: time="2026-03-14T00:14:56.818354499Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:14:56.818676 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 14 00:14:58.013029 systemd-timesyncd[1421]: Contacted time server 141.82.25.203:123 (2.flatcar.pool.ntp.org). Mar 14 00:14:58.013135 systemd-timesyncd[1421]: Initial clock synchronization to Sat 2026-03-14 00:14:58.012552 UTC. Mar 14 00:14:58.013695 systemd-resolved[1420]: Clock change detected. Flushing caches. Mar 14 00:14:58.206815 containerd[1515]: time="2026-03-14T00:14:58.206772656Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 14 00:14:58.816380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274188802.mount: Deactivated successfully. 
Mar 14 00:14:59.943595 containerd[1515]: time="2026-03-14T00:14:59.943549829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:59.944474 containerd[1515]: time="2026-03-14T00:14:59.944372914Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116286" Mar 14 00:14:59.946271 containerd[1515]: time="2026-03-14T00:14:59.945040426Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:59.947816 containerd[1515]: time="2026-03-14T00:14:59.946896081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:59.947816 containerd[1515]: time="2026-03-14T00:14:59.947646807Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 1.740842074s" Mar 14 00:14:59.947816 containerd[1515]: time="2026-03-14T00:14:59.947670453Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 14 00:14:59.948223 containerd[1515]: time="2026-03-14T00:14:59.948203102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 14 00:15:01.121446 containerd[1515]: time="2026-03-14T00:15:01.121397121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:01.122235 containerd[1515]: time="2026-03-14T00:15:01.122209620Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021832" Mar 14 00:15:01.123557 containerd[1515]: time="2026-03-14T00:15:01.123539657Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:01.126971 containerd[1515]: time="2026-03-14T00:15:01.126953931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:01.127652 containerd[1515]: time="2026-03-14T00:15:01.127632880Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.179409727s" Mar 14 00:15:01.127736 containerd[1515]: time="2026-03-14T00:15:01.127720692Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 14 00:15:01.128396 containerd[1515]: time="2026-03-14T00:15:01.128381173Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 14 00:15:02.103037 containerd[1515]: time="2026-03-14T00:15:02.102985431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:02.103893 containerd[1515]: time="2026-03-14T00:15:02.103858362Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162768" Mar 14 00:15:02.105584 containerd[1515]: time="2026-03-14T00:15:02.105339715Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:02.107540 containerd[1515]: time="2026-03-14T00:15:02.107523282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:02.108220 containerd[1515]: time="2026-03-14T00:15:02.108202191Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 979.800056ms" Mar 14 00:15:02.108279 containerd[1515]: time="2026-03-14T00:15:02.108268430Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 14 00:15:02.108847 containerd[1515]: time="2026-03-14T00:15:02.108820719Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 14 00:15:03.097111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267472283.mount: Deactivated successfully. 
Mar 14 00:15:03.438566 containerd[1515]: time="2026-03-14T00:15:03.438502049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:03.440345 containerd[1515]: time="2026-03-14T00:15:03.440298866Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828675" Mar 14 00:15:03.443088 containerd[1515]: time="2026-03-14T00:15:03.442253229Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:03.444707 containerd[1515]: time="2026-03-14T00:15:03.444101793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:03.444707 containerd[1515]: time="2026-03-14T00:15:03.444523767Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.335678681s" Mar 14 00:15:03.444707 containerd[1515]: time="2026-03-14T00:15:03.444548374Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 14 00:15:03.445533 containerd[1515]: time="2026-03-14T00:15:03.445500132Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 14 00:15:03.965372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount28610741.mount: Deactivated successfully. Mar 14 00:15:04.438537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 14 00:15:04.448230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:15:04.587850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:15:04.588878 (kubelet)[2001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:15:04.620898 kubelet[2001]: E0314 00:15:04.620807 2001 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:15:04.624619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:15:04.624855 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:15:04.806916 containerd[1515]: time="2026-03-14T00:15:04.806796932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:04.808872 containerd[1515]: time="2026-03-14T00:15:04.808638095Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332" Mar 14 00:15:04.810862 containerd[1515]: time="2026-03-14T00:15:04.809757134Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:04.812216 containerd[1515]: time="2026-03-14T00:15:04.812165419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:04.812949 containerd[1515]: time="2026-03-14T00:15:04.812915144Z" level=info msg="Pulled 
image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.367381863s" Mar 14 00:15:04.812986 containerd[1515]: time="2026-03-14T00:15:04.812952570Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 14 00:15:04.813887 containerd[1515]: time="2026-03-14T00:15:04.813819902Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 14 00:15:05.301786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2168283133.mount: Deactivated successfully. Mar 14 00:15:05.312877 containerd[1515]: time="2026-03-14T00:15:05.312783473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:05.314189 containerd[1515]: time="2026-03-14T00:15:05.314118687Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Mar 14 00:15:05.315430 containerd[1515]: time="2026-03-14T00:15:05.315292939Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:05.320140 containerd[1515]: time="2026-03-14T00:15:05.320085442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:05.321852 containerd[1515]: time="2026-03-14T00:15:05.321764973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 507.883999ms" Mar 14 00:15:05.321852 containerd[1515]: time="2026-03-14T00:15:05.321817101Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 14 00:15:05.323386 containerd[1515]: time="2026-03-14T00:15:05.323319987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 14 00:15:05.848897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2769689954.mount: Deactivated successfully. Mar 14 00:15:06.689721 containerd[1515]: time="2026-03-14T00:15:06.689655879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:06.690978 containerd[1515]: time="2026-03-14T00:15:06.690941799Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718940" Mar 14 00:15:06.691879 containerd[1515]: time="2026-03-14T00:15:06.691832716Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:06.696718 containerd[1515]: time="2026-03-14T00:15:06.694493029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:06.696718 containerd[1515]: time="2026-03-14T00:15:06.696248223Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.372888567s" Mar 14 00:15:06.696718 containerd[1515]: time="2026-03-14T00:15:06.696276205Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 14 00:15:08.609005 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:15:08.616142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:15:08.654215 systemd[1]: Reloading requested from client PID 2104 ('systemctl') (unit session-7.scope)... Mar 14 00:15:08.654244 systemd[1]: Reloading... Mar 14 00:15:08.754751 zram_generator::config[2144]: No configuration found. Mar 14 00:15:08.845540 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:15:08.906010 systemd[1]: Reloading finished in 250 ms. Mar 14 00:15:08.950099 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 14 00:15:08.950189 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 14 00:15:08.950476 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:15:08.958920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:15:09.112853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:15:09.116987 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:15:09.144164 kubelet[2195]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:15:09.144164 kubelet[2195]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:15:09.144164 kubelet[2195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:15:09.144164 kubelet[2195]: I0314 00:15:09.144073 2195 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:15:09.838468 kubelet[2195]: I0314 00:15:09.838418 2195 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 14 00:15:09.838734 kubelet[2195]: I0314 00:15:09.838688 2195 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:15:09.839002 kubelet[2195]: I0314 00:15:09.838976 2195 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:15:09.863638 kubelet[2195]: E0314 00:15:09.863598 2195 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://204.168.141.220:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 204.168.141.220:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:15:09.865404 kubelet[2195]: I0314 00:15:09.865322 2195 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:15:09.869716 kubelet[2195]: E0314 00:15:09.868376 2195 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service 
runtime.v1.RuntimeService" Mar 14 00:15:09.869716 kubelet[2195]: I0314 00:15:09.868403 2195 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 14 00:15:09.871556 kubelet[2195]: I0314 00:15:09.871544 2195 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 14 00:15:09.872259 kubelet[2195]: I0314 00:15:09.872230 2195 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:15:09.872416 kubelet[2195]: I0314 00:15:09.872304 2195 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-e97f419eb8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"Topol
ogyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:15:09.872506 kubelet[2195]: I0314 00:15:09.872499 2195 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 00:15:09.872544 kubelet[2195]: I0314 00:15:09.872538 2195 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 00:15:09.872718 kubelet[2195]: I0314 00:15:09.872690 2195 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:15:09.876343 kubelet[2195]: I0314 00:15:09.876326 2195 kubelet.go:480] "Attempting to sync node with API server" Mar 14 00:15:09.876412 kubelet[2195]: I0314 00:15:09.876402 2195 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:15:09.876483 kubelet[2195]: I0314 00:15:09.876458 2195 kubelet.go:386] "Adding apiserver pod source" Mar 14 00:15:09.876544 kubelet[2195]: I0314 00:15:09.876537 2195 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:15:09.884242 kubelet[2195]: E0314 00:15:09.884222 2195 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://204.168.141.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 204.168.141.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:15:09.884370 kubelet[2195]: E0314 00:15:09.884283 2195 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://204.168.141.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-e97f419eb8&limit=500&resourceVersion=0\": dial tcp 204.168.141.220:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:15:09.884602 kubelet[2195]: I0314 00:15:09.884586 2195 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:15:09.885277 kubelet[2195]: I0314 00:15:09.885202 2195 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:15:09.885762 kubelet[2195]: W0314 00:15:09.885751 2195 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 14 00:15:09.889979 kubelet[2195]: I0314 00:15:09.889890 2195 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:15:09.889979 kubelet[2195]: I0314 00:15:09.889930 2195 server.go:1289] "Started kubelet" Mar 14 00:15:09.891102 kubelet[2195]: I0314 00:15:09.890063 2195 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:15:09.891102 kubelet[2195]: I0314 00:15:09.890669 2195 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:15:09.892861 kubelet[2195]: I0314 00:15:09.892830 2195 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:15:09.893165 kubelet[2195]: I0314 00:15:09.893148 2195 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:15:09.894324 kubelet[2195]: E0314 00:15:09.893274 2195 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://204.168.141.220:6443/api/v1/namespaces/default/events\": dial tcp 204.168.141.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-e97f419eb8.189c8ceea5dc0d7e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-e97f419eb8,UID:ci-4081-3-6-n-e97f419eb8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-e97f419eb8,},FirstTimestamp:2026-03-14 00:15:09.889899902 +0000 UTC m=+0.768364168,LastTimestamp:2026-03-14 00:15:09.889899902 +0000 UTC m=+0.768364168,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-e97f419eb8,}" Mar 14 00:15:09.896725 kubelet[2195]: I0314 00:15:09.895994 2195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:15:09.898016 kubelet[2195]: I0314 00:15:09.896066 2195 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:15:09.899280 kubelet[2195]: I0314 00:15:09.899269 2195 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 00:15:09.899606 kubelet[2195]: I0314 00:15:09.899594 2195 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:15:09.899693 kubelet[2195]: I0314 00:15:09.899685 2195 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:15:09.899977 kubelet[2195]: E0314 00:15:09.899964 2195 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://204.168.141.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 204.168.141.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:15:09.900336 kubelet[2195]: I0314 00:15:09.900324 2195 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:15:09.900444 kubelet[2195]: I0314 00:15:09.900433 2195 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:15:09.901530 kubelet[2195]: I0314 00:15:09.901520 2195 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:15:09.905393 kubelet[2195]: E0314 00:15:09.905361 2195 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e97f419eb8\" not found" Mar 14 00:15:09.915967 kubelet[2195]: I0314 00:15:09.915932 2195 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:15:09.917040 kubelet[2195]: I0314 00:15:09.917028 2195 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 14 00:15:09.917107 kubelet[2195]: I0314 00:15:09.917099 2195 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:15:09.917152 kubelet[2195]: I0314 00:15:09.917146 2195 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:15:09.917181 kubelet[2195]: I0314 00:15:09.917176 2195 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:15:09.917241 kubelet[2195]: E0314 00:15:09.917230 2195 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:15:09.923118 kubelet[2195]: E0314 00:15:09.923071 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.141.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e97f419eb8?timeout=10s\": dial tcp 204.168.141.220:6443: connect: connection refused" interval="200ms" Mar 14 00:15:09.923593 kubelet[2195]: E0314 00:15:09.923563 2195 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://204.168.141.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 204.168.141.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:15:09.926614 kubelet[2195]: I0314 00:15:09.926603 2195 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:15:09.926763 kubelet[2195]: I0314 00:15:09.926687 2195 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:15:09.926763 kubelet[2195]: I0314 00:15:09.926711 2195 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:15:09.929148 kubelet[2195]: I0314 00:15:09.929139 2195 policy_none.go:49] "None policy: Start" Mar 14 00:15:09.929203 kubelet[2195]: I0314 00:15:09.929196 2195 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:15:09.929267 kubelet[2195]: I0314 00:15:09.929237 2195 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:15:09.934360 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 14 00:15:09.945556 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 14 00:15:09.948481 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 14 00:15:09.958531 kubelet[2195]: E0314 00:15:09.958515 2195 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:15:09.958789 kubelet[2195]: I0314 00:15:09.958682 2195 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:15:09.958789 kubelet[2195]: I0314 00:15:09.958708 2195 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:15:09.958916 kubelet[2195]: I0314 00:15:09.958904 2195 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:15:09.960686 kubelet[2195]: E0314 00:15:09.960633 2195 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:15:09.960686 kubelet[2195]: E0314 00:15:09.960667 2195 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-e97f419eb8\" not found" Mar 14 00:15:10.033775 systemd[1]: Created slice kubepods-burstable-pod2fe4d689dc8c4ddd6bef0a0fdfce8ef1.slice - libcontainer container kubepods-burstable-pod2fe4d689dc8c4ddd6bef0a0fdfce8ef1.slice. Mar 14 00:15:10.043649 kubelet[2195]: E0314 00:15:10.042942 2195 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e97f419eb8\" not found" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.045914 systemd[1]: Created slice kubepods-burstable-podd5615dfe2020c5e8240543da7c14c645.slice - libcontainer container kubepods-burstable-podd5615dfe2020c5e8240543da7c14c645.slice. 
Mar 14 00:15:10.053755 kubelet[2195]: E0314 00:15:10.051806 2195 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e97f419eb8\" not found" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.057193 systemd[1]: Created slice kubepods-burstable-pod4ebcc93cc265a8c29434196759f63f72.slice - libcontainer container kubepods-burstable-pod4ebcc93cc265a8c29434196759f63f72.slice. Mar 14 00:15:10.060590 kubelet[2195]: E0314 00:15:10.059830 2195 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e97f419eb8\" not found" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.062024 kubelet[2195]: I0314 00:15:10.061988 2195 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.062425 kubelet[2195]: E0314 00:15:10.062367 2195 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.141.220:6443/api/v1/nodes\": dial tcp 204.168.141.220:6443: connect: connection refused" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.101375 kubelet[2195]: I0314 00:15:10.100972 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ebcc93cc265a8c29434196759f63f72-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-e97f419eb8\" (UID: \"4ebcc93cc265a8c29434196759f63f72\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.101375 kubelet[2195]: I0314 00:15:10.101034 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ebcc93cc265a8c29434196759f63f72-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-e97f419eb8\" (UID: \"4ebcc93cc265a8c29434196759f63f72\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.101375 
kubelet[2195]: I0314 00:15:10.101069 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2fe4d689dc8c4ddd6bef0a0fdfce8ef1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-e97f419eb8\" (UID: \"2fe4d689dc8c4ddd6bef0a0fdfce8ef1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.101375 kubelet[2195]: I0314 00:15:10.101095 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fe4d689dc8c4ddd6bef0a0fdfce8ef1-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-e97f419eb8\" (UID: \"2fe4d689dc8c4ddd6bef0a0fdfce8ef1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.101375 kubelet[2195]: I0314 00:15:10.101120 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fe4d689dc8c4ddd6bef0a0fdfce8ef1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-e97f419eb8\" (UID: \"2fe4d689dc8c4ddd6bef0a0fdfce8ef1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.101686 kubelet[2195]: I0314 00:15:10.101147 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5615dfe2020c5e8240543da7c14c645-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-e97f419eb8\" (UID: \"d5615dfe2020c5e8240543da7c14c645\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.101686 kubelet[2195]: I0314 00:15:10.101171 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ebcc93cc265a8c29434196759f63f72-k8s-certs\") pod 
\"kube-apiserver-ci-4081-3-6-n-e97f419eb8\" (UID: \"4ebcc93cc265a8c29434196759f63f72\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.101686 kubelet[2195]: I0314 00:15:10.101202 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2fe4d689dc8c4ddd6bef0a0fdfce8ef1-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-e97f419eb8\" (UID: \"2fe4d689dc8c4ddd6bef0a0fdfce8ef1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.101686 kubelet[2195]: I0314 00:15:10.101224 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fe4d689dc8c4ddd6bef0a0fdfce8ef1-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-e97f419eb8\" (UID: \"2fe4d689dc8c4ddd6bef0a0fdfce8ef1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.124041 kubelet[2195]: E0314 00:15:10.123992 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.141.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e97f419eb8?timeout=10s\": dial tcp 204.168.141.220:6443: connect: connection refused" interval="400ms" Mar 14 00:15:10.266644 kubelet[2195]: I0314 00:15:10.266463 2195 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.267451 kubelet[2195]: E0314 00:15:10.267060 2195 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.141.220:6443/api/v1/nodes\": dial tcp 204.168.141.220:6443: connect: connection refused" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.344858 containerd[1515]: time="2026-03-14T00:15:10.344801263Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-e97f419eb8,Uid:2fe4d689dc8c4ddd6bef0a0fdfce8ef1,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:10.353770 containerd[1515]: time="2026-03-14T00:15:10.353478806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-e97f419eb8,Uid:d5615dfe2020c5e8240543da7c14c645,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:10.362003 containerd[1515]: time="2026-03-14T00:15:10.361935098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-e97f419eb8,Uid:4ebcc93cc265a8c29434196759f63f72,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:10.525740 kubelet[2195]: E0314 00:15:10.525629 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.141.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e97f419eb8?timeout=10s\": dial tcp 204.168.141.220:6443: connect: connection refused" interval="800ms" Mar 14 00:15:10.671252 kubelet[2195]: I0314 00:15:10.670746 2195 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.671416 kubelet[2195]: E0314 00:15:10.671252 2195 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.141.220:6443/api/v1/nodes\": dial tcp 204.168.141.220:6443: connect: connection refused" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:10.830993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4188230011.mount: Deactivated successfully. 
Mar 14 00:15:10.839102 containerd[1515]: time="2026-03-14T00:15:10.839010669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:15:10.844142 containerd[1515]: time="2026-03-14T00:15:10.844047248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Mar 14 00:15:10.846749 containerd[1515]: time="2026-03-14T00:15:10.845333849Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:15:10.847117 containerd[1515]: time="2026-03-14T00:15:10.847079729Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:15:10.850128 containerd[1515]: time="2026-03-14T00:15:10.849969867Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:15:10.851333 containerd[1515]: time="2026-03-14T00:15:10.851243709Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:15:10.852631 containerd[1515]: time="2026-03-14T00:15:10.852512623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:15:10.857418 containerd[1515]: time="2026-03-14T00:15:10.857268671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:15:10.861789 
containerd[1515]: time="2026-03-14T00:15:10.860024016Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 515.123104ms" Mar 14 00:15:10.861789 containerd[1515]: time="2026-03-14T00:15:10.861760873Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 508.148046ms" Mar 14 00:15:10.864235 containerd[1515]: time="2026-03-14T00:15:10.863923509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 501.900038ms" Mar 14 00:15:10.945269 kubelet[2195]: E0314 00:15:10.945210 2195 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://204.168.141.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 204.168.141.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:15:10.959096 kubelet[2195]: E0314 00:15:10.959056 2195 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://204.168.141.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 204.168.141.220:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:15:10.977240 containerd[1515]: time="2026-03-14T00:15:10.977063143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:10.977240 containerd[1515]: time="2026-03-14T00:15:10.977062411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:10.977240 containerd[1515]: time="2026-03-14T00:15:10.977108551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:10.977240 containerd[1515]: time="2026-03-14T00:15:10.977119257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:10.977240 containerd[1515]: time="2026-03-14T00:15:10.977200749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:10.977418 containerd[1515]: time="2026-03-14T00:15:10.977264435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:10.978890 containerd[1515]: time="2026-03-14T00:15:10.977199016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:10.978890 containerd[1515]: time="2026-03-14T00:15:10.977470664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:10.983536 containerd[1515]: time="2026-03-14T00:15:10.979267000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:10.986138 containerd[1515]: time="2026-03-14T00:15:10.986003801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:10.986216 containerd[1515]: time="2026-03-14T00:15:10.986063841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:10.986332 containerd[1515]: time="2026-03-14T00:15:10.986313095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:11.004151 systemd[1]: Started cri-containerd-81caff926f2dcb73a54e1d9ece7c9937096438576d74fd43b2e922ea7c295179.scope - libcontainer container 81caff926f2dcb73a54e1d9ece7c9937096438576d74fd43b2e922ea7c295179. Mar 14 00:15:11.008491 systemd[1]: Started cri-containerd-cf5789a3b058f59e34e14bd74fb612eeaefb0e8f45e1c1796db7cfea63b12718.scope - libcontainer container cf5789a3b058f59e34e14bd74fb612eeaefb0e8f45e1c1796db7cfea63b12718. Mar 14 00:15:11.011960 systemd[1]: Started cri-containerd-03607c1e925ca496555ebb250299141731ed4767c31f5ebb86df0b0c428b9948.scope - libcontainer container 03607c1e925ca496555ebb250299141731ed4767c31f5ebb86df0b0c428b9948. 
Mar 14 00:15:11.070431 containerd[1515]: time="2026-03-14T00:15:11.070386656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-e97f419eb8,Uid:4ebcc93cc265a8c29434196759f63f72,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf5789a3b058f59e34e14bd74fb612eeaefb0e8f45e1c1796db7cfea63b12718\"" Mar 14 00:15:11.073312 containerd[1515]: time="2026-03-14T00:15:11.073278626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-e97f419eb8,Uid:d5615dfe2020c5e8240543da7c14c645,Namespace:kube-system,Attempt:0,} returns sandbox id \"81caff926f2dcb73a54e1d9ece7c9937096438576d74fd43b2e922ea7c295179\"" Mar 14 00:15:11.079824 containerd[1515]: time="2026-03-14T00:15:11.079685231Z" level=info msg="CreateContainer within sandbox \"cf5789a3b058f59e34e14bd74fb612eeaefb0e8f45e1c1796db7cfea63b12718\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:15:11.080654 containerd[1515]: time="2026-03-14T00:15:11.080598412Z" level=info msg="CreateContainer within sandbox \"81caff926f2dcb73a54e1d9ece7c9937096438576d74fd43b2e922ea7c295179\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:15:11.089030 containerd[1515]: time="2026-03-14T00:15:11.088946811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-e97f419eb8,Uid:2fe4d689dc8c4ddd6bef0a0fdfce8ef1,Namespace:kube-system,Attempt:0,} returns sandbox id \"03607c1e925ca496555ebb250299141731ed4767c31f5ebb86df0b0c428b9948\"" Mar 14 00:15:11.092125 containerd[1515]: time="2026-03-14T00:15:11.092080875Z" level=info msg="CreateContainer within sandbox \"03607c1e925ca496555ebb250299141731ed4767c31f5ebb86df0b0c428b9948\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:15:11.094524 containerd[1515]: time="2026-03-14T00:15:11.094476300Z" level=info msg="CreateContainer within sandbox 
\"cf5789a3b058f59e34e14bd74fb612eeaefb0e8f45e1c1796db7cfea63b12718\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"41ce78d4ef3368ba4aee50396e3bf0031911604b38e1a516ff892c2dfb5e208e\"" Mar 14 00:15:11.095018 containerd[1515]: time="2026-03-14T00:15:11.094996381Z" level=info msg="CreateContainer within sandbox \"81caff926f2dcb73a54e1d9ece7c9937096438576d74fd43b2e922ea7c295179\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"28015c9224340603cf9895f0326159a85bf9f18cd67cf6574a5a97738d4013bc\"" Mar 14 00:15:11.095352 containerd[1515]: time="2026-03-14T00:15:11.095337823Z" level=info msg="StartContainer for \"28015c9224340603cf9895f0326159a85bf9f18cd67cf6574a5a97738d4013bc\"" Mar 14 00:15:11.097458 containerd[1515]: time="2026-03-14T00:15:11.096437774Z" level=info msg="StartContainer for \"41ce78d4ef3368ba4aee50396e3bf0031911604b38e1a516ff892c2dfb5e208e\"" Mar 14 00:15:11.119901 containerd[1515]: time="2026-03-14T00:15:11.119860948Z" level=info msg="CreateContainer within sandbox \"03607c1e925ca496555ebb250299141731ed4767c31f5ebb86df0b0c428b9948\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a761efe2c76ecd68e7bbc34b2ec63f862752ca27feb3a26a3814e2d26e304456\"" Mar 14 00:15:11.120410 containerd[1515]: time="2026-03-14T00:15:11.120387529Z" level=info msg="StartContainer for \"a761efe2c76ecd68e7bbc34b2ec63f862752ca27feb3a26a3814e2d26e304456\"" Mar 14 00:15:11.127876 systemd[1]: Started cri-containerd-28015c9224340603cf9895f0326159a85bf9f18cd67cf6574a5a97738d4013bc.scope - libcontainer container 28015c9224340603cf9895f0326159a85bf9f18cd67cf6574a5a97738d4013bc. Mar 14 00:15:11.138047 systemd[1]: Started cri-containerd-41ce78d4ef3368ba4aee50396e3bf0031911604b38e1a516ff892c2dfb5e208e.scope - libcontainer container 41ce78d4ef3368ba4aee50396e3bf0031911604b38e1a516ff892c2dfb5e208e. 
Mar 14 00:15:11.162964 systemd[1]: Started cri-containerd-a761efe2c76ecd68e7bbc34b2ec63f862752ca27feb3a26a3814e2d26e304456.scope - libcontainer container a761efe2c76ecd68e7bbc34b2ec63f862752ca27feb3a26a3814e2d26e304456. Mar 14 00:15:11.189063 containerd[1515]: time="2026-03-14T00:15:11.189018853Z" level=info msg="StartContainer for \"28015c9224340603cf9895f0326159a85bf9f18cd67cf6574a5a97738d4013bc\" returns successfully" Mar 14 00:15:11.194230 containerd[1515]: time="2026-03-14T00:15:11.194201491Z" level=info msg="StartContainer for \"41ce78d4ef3368ba4aee50396e3bf0031911604b38e1a516ff892c2dfb5e208e\" returns successfully" Mar 14 00:15:11.221017 containerd[1515]: time="2026-03-14T00:15:11.220797708Z" level=info msg="StartContainer for \"a761efe2c76ecd68e7bbc34b2ec63f862752ca27feb3a26a3814e2d26e304456\" returns successfully" Mar 14 00:15:11.474775 kubelet[2195]: I0314 00:15:11.473357 2195 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:11.938568 kubelet[2195]: E0314 00:15:11.938537 2195 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e97f419eb8\" not found" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:11.940182 kubelet[2195]: E0314 00:15:11.940163 2195 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e97f419eb8\" not found" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:11.941537 kubelet[2195]: E0314 00:15:11.941518 2195 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e97f419eb8\" not found" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:12.885008 kubelet[2195]: I0314 00:15:12.884863 2195 apiserver.go:52] "Watching apiserver" Mar 14 00:15:12.890888 kubelet[2195]: E0314 00:15:12.890834 2195 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"ci-4081-3-6-n-e97f419eb8\" not found" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:12.900022 kubelet[2195]: I0314 00:15:12.899974 2195 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:15:12.944190 kubelet[2195]: E0314 00:15:12.944029 2195 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e97f419eb8\" not found" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:12.944466 kubelet[2195]: E0314 00:15:12.944354 2195 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e97f419eb8\" not found" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:12.975685 kubelet[2195]: I0314 00:15:12.975619 2195 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:13.005717 kubelet[2195]: I0314 00:15:13.005669 2195 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:13.025838 kubelet[2195]: E0314 00:15:13.025623 2195 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-e97f419eb8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:13.025838 kubelet[2195]: I0314 00:15:13.025654 2195 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:13.031004 kubelet[2195]: E0314 00:15:13.030848 2195 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-e97f419eb8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:13.031004 kubelet[2195]: I0314 00:15:13.030869 2195 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:13.034987 kubelet[2195]: E0314 00:15:13.034960 2195 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-e97f419eb8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.083208 systemd[1]: Reloading requested from client PID 2484 ('systemctl') (unit session-7.scope)... Mar 14 00:15:15.083223 systemd[1]: Reloading... Mar 14 00:15:15.170729 zram_generator::config[2524]: No configuration found. Mar 14 00:15:15.270761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:15:15.354879 systemd[1]: Reloading finished in 271 ms. Mar 14 00:15:15.397226 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:15:15.420266 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:15:15.420583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:15:15.420673 systemd[1]: kubelet.service: Consumed 1.172s CPU time, 132.7M memory peak, 0B memory swap peak. Mar 14 00:15:15.426355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:15:15.561321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:15:15.565429 (kubelet)[2575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:15:15.600831 kubelet[2575]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 00:15:15.602022 kubelet[2575]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:15:15.602022 kubelet[2575]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:15:15.602022 kubelet[2575]: I0314 00:15:15.600926 2575 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:15:15.605655 kubelet[2575]: I0314 00:15:15.605354 2575 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 14 00:15:15.605655 kubelet[2575]: I0314 00:15:15.605369 2575 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:15:15.605655 kubelet[2575]: I0314 00:15:15.605477 2575 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:15:15.606388 kubelet[2575]: I0314 00:15:15.606372 2575 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:15:15.610712 kubelet[2575]: I0314 00:15:15.609564 2575 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:15:15.612483 kubelet[2575]: E0314 00:15:15.612466 2575 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:15:15.612578 kubelet[2575]: I0314 00:15:15.612570 2575 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Mar 14 00:15:15.615873 kubelet[2575]: I0314 00:15:15.615850 2575 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 14 00:15:15.616060 kubelet[2575]: I0314 00:15:15.616039 2575 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:15:15.616163 kubelet[2575]: I0314 00:15:15.616056 2575 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-e97f419eb8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} 
Mar 14 00:15:15.616228 kubelet[2575]: I0314 00:15:15.616165 2575 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 00:15:15.616228 kubelet[2575]: I0314 00:15:15.616172 2575 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 00:15:15.616228 kubelet[2575]: I0314 00:15:15.616212 2575 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:15:15.616845 kubelet[2575]: I0314 00:15:15.616342 2575 kubelet.go:480] "Attempting to sync node with API server" Mar 14 00:15:15.616845 kubelet[2575]: I0314 00:15:15.616353 2575 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:15:15.616845 kubelet[2575]: I0314 00:15:15.616373 2575 kubelet.go:386] "Adding apiserver pod source" Mar 14 00:15:15.616845 kubelet[2575]: I0314 00:15:15.616388 2575 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:15:15.619718 kubelet[2575]: I0314 00:15:15.619025 2575 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:15:15.620166 kubelet[2575]: I0314 00:15:15.620143 2575 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:15:15.625382 kubelet[2575]: I0314 00:15:15.625366 2575 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:15:15.625494 kubelet[2575]: I0314 00:15:15.625488 2575 server.go:1289] "Started kubelet" Mar 14 00:15:15.626017 kubelet[2575]: I0314 00:15:15.625825 2575 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:15:15.626447 kubelet[2575]: I0314 00:15:15.626430 2575 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:15:15.626956 kubelet[2575]: I0314 00:15:15.626643 2575 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:15:15.627324 kubelet[2575]: I0314 
00:15:15.627230 2575 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:15:15.630967 kubelet[2575]: I0314 00:15:15.630952 2575 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:15:15.632376 kubelet[2575]: E0314 00:15:15.632341 2575 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:15:15.633499 kubelet[2575]: I0314 00:15:15.633478 2575 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:15:15.636562 kubelet[2575]: I0314 00:15:15.636529 2575 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 00:15:15.636634 kubelet[2575]: I0314 00:15:15.636626 2575 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:15:15.637445 kubelet[2575]: I0314 00:15:15.636738 2575 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:15:15.637957 kubelet[2575]: I0314 00:15:15.637934 2575 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:15:15.643079 kubelet[2575]: I0314 00:15:15.642885 2575 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:15:15.643079 kubelet[2575]: I0314 00:15:15.642902 2575 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:15:15.647228 kubelet[2575]: I0314 00:15:15.647194 2575 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:15:15.648360 kubelet[2575]: I0314 00:15:15.648345 2575 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 14 00:15:15.648430 kubelet[2575]: I0314 00:15:15.648424 2575 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:15:15.648475 kubelet[2575]: I0314 00:15:15.648469 2575 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:15:15.648518 kubelet[2575]: I0314 00:15:15.648512 2575 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:15:15.648593 kubelet[2575]: E0314 00:15:15.648582 2575 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:15:15.682126 kubelet[2575]: I0314 00:15:15.682100 2575 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:15:15.682126 kubelet[2575]: I0314 00:15:15.682113 2575 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:15:15.682126 kubelet[2575]: I0314 00:15:15.682129 2575 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:15:15.682264 kubelet[2575]: I0314 00:15:15.682257 2575 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 14 00:15:15.682283 kubelet[2575]: I0314 00:15:15.682265 2575 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 14 00:15:15.682283 kubelet[2575]: I0314 00:15:15.682277 2575 policy_none.go:49] "None policy: Start" Mar 14 00:15:15.682324 kubelet[2575]: I0314 00:15:15.682286 2575 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:15:15.682324 kubelet[2575]: I0314 00:15:15.682294 2575 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:15:15.682366 kubelet[2575]: I0314 00:15:15.682355 2575 state_mem.go:75] "Updated machine memory state" Mar 14 00:15:15.685741 kubelet[2575]: E0314 00:15:15.685572 2575 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:15:15.686039 kubelet[2575]: I0314 
00:15:15.685866 2575 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:15:15.686155 kubelet[2575]: I0314 00:15:15.686101 2575 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:15:15.687176 kubelet[2575]: I0314 00:15:15.687120 2575 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:15:15.687791 kubelet[2575]: E0314 00:15:15.687498 2575 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:15:15.750392 kubelet[2575]: I0314 00:15:15.750338 2575 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.753719 kubelet[2575]: I0314 00:15:15.751188 2575 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.753719 kubelet[2575]: I0314 00:15:15.751463 2575 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.792636 kubelet[2575]: I0314 00:15:15.792565 2575 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.800252 kubelet[2575]: I0314 00:15:15.800192 2575 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.800363 kubelet[2575]: I0314 00:15:15.800272 2575 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.938781 kubelet[2575]: I0314 00:15:15.938283 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2fe4d689dc8c4ddd6bef0a0fdfce8ef1-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-e97f419eb8\" (UID: \"2fe4d689dc8c4ddd6bef0a0fdfce8ef1\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.938781 kubelet[2575]: I0314 00:15:15.938338 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2fe4d689dc8c4ddd6bef0a0fdfce8ef1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-e97f419eb8\" (UID: \"2fe4d689dc8c4ddd6bef0a0fdfce8ef1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.938781 kubelet[2575]: I0314 00:15:15.938368 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fe4d689dc8c4ddd6bef0a0fdfce8ef1-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-e97f419eb8\" (UID: \"2fe4d689dc8c4ddd6bef0a0fdfce8ef1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.938781 kubelet[2575]: I0314 00:15:15.938393 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fe4d689dc8c4ddd6bef0a0fdfce8ef1-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-e97f419eb8\" (UID: \"2fe4d689dc8c4ddd6bef0a0fdfce8ef1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.938781 kubelet[2575]: I0314 00:15:15.938420 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fe4d689dc8c4ddd6bef0a0fdfce8ef1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-e97f419eb8\" (UID: \"2fe4d689dc8c4ddd6bef0a0fdfce8ef1\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.939292 kubelet[2575]: I0314 00:15:15.938448 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5615dfe2020c5e8240543da7c14c645-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-e97f419eb8\" (UID: \"d5615dfe2020c5e8240543da7c14c645\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.939292 kubelet[2575]: I0314 00:15:15.938488 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ebcc93cc265a8c29434196759f63f72-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-e97f419eb8\" (UID: \"4ebcc93cc265a8c29434196759f63f72\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.939292 kubelet[2575]: I0314 00:15:15.938512 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ebcc93cc265a8c29434196759f63f72-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-e97f419eb8\" (UID: \"4ebcc93cc265a8c29434196759f63f72\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:15.939292 kubelet[2575]: I0314 00:15:15.938566 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ebcc93cc265a8c29434196759f63f72-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-e97f419eb8\" (UID: \"4ebcc93cc265a8c29434196759f63f72\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:16.101377 sudo[2614]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 14 00:15:16.102189 sudo[2614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 14 00:15:16.524647 sudo[2614]: pam_unix(sudo:session): session closed for user root Mar 14 00:15:16.617522 kubelet[2575]: I0314 00:15:16.617117 2575 apiserver.go:52] "Watching apiserver" Mar 14 00:15:16.636854 kubelet[2575]: I0314 
00:15:16.636796 2575 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:15:16.671473 kubelet[2575]: I0314 00:15:16.670622 2575 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:16.681863 kubelet[2575]: E0314 00:15:16.681829 2575 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-e97f419eb8\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e97f419eb8" Mar 14 00:15:16.704743 kubelet[2575]: I0314 00:15:16.704264 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" podStartSLOduration=1.704246129 podStartE2EDuration="1.704246129s" podCreationTimestamp="2026-03-14 00:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:16.69703026 +0000 UTC m=+1.126975894" watchObservedRunningTime="2026-03-14 00:15:16.704246129 +0000 UTC m=+1.134191753" Mar 14 00:15:16.718312 kubelet[2575]: I0314 00:15:16.717891 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e97f419eb8" podStartSLOduration=1.717873392 podStartE2EDuration="1.717873392s" podCreationTimestamp="2026-03-14 00:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:16.704905799 +0000 UTC m=+1.134851423" watchObservedRunningTime="2026-03-14 00:15:16.717873392 +0000 UTC m=+1.147819026" Mar 14 00:15:16.727460 kubelet[2575]: I0314 00:15:16.727300 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e97f419eb8" podStartSLOduration=1.7272847869999999 podStartE2EDuration="1.727284787s" podCreationTimestamp="2026-03-14 00:15:15 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:16.718675636 +0000 UTC m=+1.148621260" watchObservedRunningTime="2026-03-14 00:15:16.727284787 +0000 UTC m=+1.157230421" Mar 14 00:15:17.834095 sudo[1712]: pam_unix(sudo:session): session closed for user root Mar 14 00:15:17.954501 sshd[1709]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:17.962273 systemd[1]: sshd@6-204.168.141.220:22-68.220.241.50:59930.service: Deactivated successfully. Mar 14 00:15:17.965671 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:15:17.966229 systemd[1]: session-7.scope: Consumed 3.900s CPU time, 160.8M memory peak, 0B memory swap peak. Mar 14 00:15:17.967292 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:15:17.969226 systemd-logind[1485]: Removed session 7. Mar 14 00:15:20.910331 kubelet[2575]: I0314 00:15:20.910290 2575 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:15:20.911374 containerd[1515]: time="2026-03-14T00:15:20.911253957Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:15:20.911790 kubelet[2575]: I0314 00:15:20.911597 2575 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:15:22.059774 systemd[1]: Created slice kubepods-besteffort-pod4da8f17a_f185_4565_adbb_38957d966591.slice - libcontainer container kubepods-besteffort-pod4da8f17a_f185_4565_adbb_38957d966591.slice. Mar 14 00:15:22.062533 systemd[1]: Created slice kubepods-burstable-podb8472b5a_90cc_4011_b9f9_579b6f6b71e9.slice - libcontainer container kubepods-burstable-podb8472b5a_90cc_4011_b9f9_579b6f6b71e9.slice. 
Mar 14 00:15:22.084885 kubelet[2575]: I0314 00:15:22.084845 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-hostproc\") pod \"cilium-tvmpt\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.084885 kubelet[2575]: I0314 00:15:22.084881 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cilium-cgroup\") pod \"cilium-tvmpt\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.084885 kubelet[2575]: I0314 00:15:22.084895 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-xtables-lock\") pod \"cilium-tvmpt\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.085907 kubelet[2575]: I0314 00:15:22.085887 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4da8f17a-f185-4565-adbb-38957d966591-lib-modules\") pod \"kube-proxy-8plww\" (UID: \"4da8f17a-f185-4565-adbb-38957d966591\") " pod="kube-system/kube-proxy-8plww" Mar 14 00:15:22.085947 kubelet[2575]: I0314 00:15:22.085935 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-clustermesh-secrets\") pod \"cilium-tvmpt\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.085975 kubelet[2575]: I0314 00:15:22.085951 2575 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cilium-config-path\") pod \"cilium-tvmpt\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.085975 kubelet[2575]: I0314 00:15:22.085963 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-host-proc-sys-net\") pod \"cilium-tvmpt\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.086032 kubelet[2575]: I0314 00:15:22.085976 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-host-proc-sys-kernel\") pod \"cilium-tvmpt\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.086050 kubelet[2575]: I0314 00:15:22.086034 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-hubble-tls\") pod \"cilium-tvmpt\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.086050 kubelet[2575]: I0314 00:15:22.086044 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhj9z\" (UniqueName: \"kubernetes.io/projected/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-kube-api-access-qhj9z\") pod \"cilium-tvmpt\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.086105 kubelet[2575]: I0314 00:15:22.086094 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-bpf-maps\") pod \"cilium-tvmpt\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.086124 kubelet[2575]: I0314 00:15:22.086108 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-etc-cni-netd\") pod \"cilium-tvmpt\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.086139 kubelet[2575]: I0314 00:15:22.086124 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-lib-modules\") pod \"cilium-tvmpt\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.086157 kubelet[2575]: I0314 00:15:22.086135 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4da8f17a-f185-4565-adbb-38957d966591-kube-proxy\") pod \"kube-proxy-8plww\" (UID: \"4da8f17a-f185-4565-adbb-38957d966591\") " pod="kube-system/kube-proxy-8plww" Mar 14 00:15:22.086199 kubelet[2575]: I0314 00:15:22.086186 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4da8f17a-f185-4565-adbb-38957d966591-xtables-lock\") pod \"kube-proxy-8plww\" (UID: \"4da8f17a-f185-4565-adbb-38957d966591\") " pod="kube-system/kube-proxy-8plww" Mar 14 00:15:22.086221 kubelet[2575]: I0314 00:15:22.086200 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cni-path\") pod \"cilium-tvmpt\" (UID: 
\"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.086221 kubelet[2575]: I0314 00:15:22.086212 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrn7p\" (UniqueName: \"kubernetes.io/projected/4da8f17a-f185-4565-adbb-38957d966591-kube-api-access-qrn7p\") pod \"kube-proxy-8plww\" (UID: \"4da8f17a-f185-4565-adbb-38957d966591\") " pod="kube-system/kube-proxy-8plww" Mar 14 00:15:22.086254 kubelet[2575]: I0314 00:15:22.086221 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cilium-run\") pod \"cilium-tvmpt\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") " pod="kube-system/cilium-tvmpt" Mar 14 00:15:22.109636 systemd[1]: Created slice kubepods-besteffort-pod3f93cf3e_e6ef_4e01_a82a_c60413993d66.slice - libcontainer container kubepods-besteffort-pod3f93cf3e_e6ef_4e01_a82a_c60413993d66.slice. 
Mar 14 00:15:22.188403 kubelet[2575]: I0314 00:15:22.186667 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmths\" (UniqueName: \"kubernetes.io/projected/3f93cf3e-e6ef-4e01-a82a-c60413993d66-kube-api-access-pmths\") pod \"cilium-operator-6c4d7847fc-h9vhj\" (UID: \"3f93cf3e-e6ef-4e01-a82a-c60413993d66\") " pod="kube-system/cilium-operator-6c4d7847fc-h9vhj" Mar 14 00:15:22.188403 kubelet[2575]: I0314 00:15:22.186794 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f93cf3e-e6ef-4e01-a82a-c60413993d66-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-h9vhj\" (UID: \"3f93cf3e-e6ef-4e01-a82a-c60413993d66\") " pod="kube-system/cilium-operator-6c4d7847fc-h9vhj" Mar 14 00:15:22.373321 containerd[1515]: time="2026-03-14T00:15:22.371613693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8plww,Uid:4da8f17a-f185-4565-adbb-38957d966591,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:22.373321 containerd[1515]: time="2026-03-14T00:15:22.373078882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tvmpt,Uid:b8472b5a-90cc-4011-b9f9-579b6f6b71e9,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:22.413496 containerd[1515]: time="2026-03-14T00:15:22.413274870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h9vhj,Uid:3f93cf3e-e6ef-4e01-a82a-c60413993d66,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:22.424414 containerd[1515]: time="2026-03-14T00:15:22.424284904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:22.424761 containerd[1515]: time="2026-03-14T00:15:22.424387458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:22.426273 containerd[1515]: time="2026-03-14T00:15:22.424672115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:22.426273 containerd[1515]: time="2026-03-14T00:15:22.425442912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:22.426273 containerd[1515]: time="2026-03-14T00:15:22.425622411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:22.426273 containerd[1515]: time="2026-03-14T00:15:22.425727589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:22.426273 containerd[1515]: time="2026-03-14T00:15:22.425893118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:22.431101 containerd[1515]: time="2026-03-14T00:15:22.428039779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:22.446862 systemd[1]: Started cri-containerd-ceeef625dc035899f6504c2c0cffb3a1071b58712b2133bf9e0509379375e673.scope - libcontainer container ceeef625dc035899f6504c2c0cffb3a1071b58712b2133bf9e0509379375e673. Mar 14 00:15:22.451907 systemd[1]: Started cri-containerd-718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe.scope - libcontainer container 718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe. Mar 14 00:15:22.457903 containerd[1515]: time="2026-03-14T00:15:22.457614536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:22.458178 containerd[1515]: time="2026-03-14T00:15:22.458114316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:22.458433 containerd[1515]: time="2026-03-14T00:15:22.458151692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:22.458660 containerd[1515]: time="2026-03-14T00:15:22.458334387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:22.483812 systemd[1]: Started cri-containerd-3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5.scope - libcontainer container 3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5. Mar 14 00:15:22.492828 containerd[1515]: time="2026-03-14T00:15:22.492792302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8plww,Uid:4da8f17a-f185-4565-adbb-38957d966591,Namespace:kube-system,Attempt:0,} returns sandbox id \"ceeef625dc035899f6504c2c0cffb3a1071b58712b2133bf9e0509379375e673\"" Mar 14 00:15:22.494013 containerd[1515]: time="2026-03-14T00:15:22.493959334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tvmpt,Uid:b8472b5a-90cc-4011-b9f9-579b6f6b71e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\"" Mar 14 00:15:22.500036 containerd[1515]: time="2026-03-14T00:15:22.499866730Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 14 00:15:22.500565 containerd[1515]: time="2026-03-14T00:15:22.500544927Z" level=info msg="CreateContainer within sandbox \"ceeef625dc035899f6504c2c0cffb3a1071b58712b2133bf9e0509379375e673\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:15:22.513991 containerd[1515]: time="2026-03-14T00:15:22.513955495Z" level=info msg="CreateContainer within sandbox \"ceeef625dc035899f6504c2c0cffb3a1071b58712b2133bf9e0509379375e673\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b8b0e5235b0578b71f6a0e49eabd1f7e34127618217aa67874d65ce5fd97cf1d\"" Mar 14 00:15:22.514819 containerd[1515]: time="2026-03-14T00:15:22.514614173Z" level=info msg="StartContainer for \"b8b0e5235b0578b71f6a0e49eabd1f7e34127618217aa67874d65ce5fd97cf1d\"" Mar 14 00:15:22.529690 containerd[1515]: time="2026-03-14T00:15:22.529654466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h9vhj,Uid:3f93cf3e-e6ef-4e01-a82a-c60413993d66,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\"" Mar 14 00:15:22.538826 systemd[1]: Started cri-containerd-b8b0e5235b0578b71f6a0e49eabd1f7e34127618217aa67874d65ce5fd97cf1d.scope - libcontainer container b8b0e5235b0578b71f6a0e49eabd1f7e34127618217aa67874d65ce5fd97cf1d. Mar 14 00:15:22.561827 containerd[1515]: time="2026-03-14T00:15:22.561756796Z" level=info msg="StartContainer for \"b8b0e5235b0578b71f6a0e49eabd1f7e34127618217aa67874d65ce5fd97cf1d\" returns successfully" Mar 14 00:15:22.689111 kubelet[2575]: I0314 00:15:22.689071 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8plww" podStartSLOduration=0.68894832 podStartE2EDuration="688.94832ms" podCreationTimestamp="2026-03-14 00:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:22.688360577 +0000 UTC m=+7.118306202" watchObservedRunningTime="2026-03-14 00:15:22.68894832 +0000 UTC m=+7.118893944" Mar 14 00:15:27.096748 update_engine[1487]: I20260314 00:15:27.096586 1487 update_attempter.cc:509] Updating boot flags... 
Mar 14 00:15:27.174775 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2957)
Mar 14 00:15:27.240751 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2959)
Mar 14 00:15:27.302959 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2959)
Mar 14 00:15:30.514142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684389225.mount: Deactivated successfully.
Mar 14 00:15:32.507572 containerd[1515]: time="2026-03-14T00:15:32.507505856Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:32.508533 containerd[1515]: time="2026-03-14T00:15:32.508427572Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 14 00:15:32.509436 containerd[1515]: time="2026-03-14T00:15:32.509373242Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:32.511239 containerd[1515]: time="2026-03-14T00:15:32.510430758Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.010537999s"
Mar 14 00:15:32.511239 containerd[1515]: time="2026-03-14T00:15:32.510477188Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 14 00:15:32.512476 containerd[1515]: time="2026-03-14T00:15:32.511916614Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 14 00:15:32.514693 containerd[1515]: time="2026-03-14T00:15:32.514539746Z" level=info msg="CreateContainer within sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 14 00:15:32.537006 containerd[1515]: time="2026-03-14T00:15:32.536950897Z" level=info msg="CreateContainer within sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7\""
Mar 14 00:15:32.537991 containerd[1515]: time="2026-03-14T00:15:32.537854724Z" level=info msg="StartContainer for \"90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7\""
Mar 14 00:15:32.564992 systemd[1]: Started cri-containerd-90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7.scope - libcontainer container 90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7.
Mar 14 00:15:32.590599 containerd[1515]: time="2026-03-14T00:15:32.590227193Z" level=info msg="StartContainer for \"90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7\" returns successfully"
Mar 14 00:15:32.598166 systemd[1]: cri-containerd-90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7.scope: Deactivated successfully.
Mar 14 00:15:32.724714 containerd[1515]: time="2026-03-14T00:15:32.724625434Z" level=info msg="shim disconnected" id=90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7 namespace=k8s.io
Mar 14 00:15:32.724714 containerd[1515]: time="2026-03-14T00:15:32.724689228Z" level=warning msg="cleaning up after shim disconnected" id=90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7 namespace=k8s.io
Mar 14 00:15:32.724714 containerd[1515]: time="2026-03-14T00:15:32.724717741Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:15:33.527944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7-rootfs.mount: Deactivated successfully.
Mar 14 00:15:33.721624 containerd[1515]: time="2026-03-14T00:15:33.720106562Z" level=info msg="CreateContainer within sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 14 00:15:33.748960 containerd[1515]: time="2026-03-14T00:15:33.748911084Z" level=info msg="CreateContainer within sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1\""
Mar 14 00:15:33.751200 containerd[1515]: time="2026-03-14T00:15:33.751147460Z" level=info msg="StartContainer for \"0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1\""
Mar 14 00:15:33.789827 systemd[1]: Started cri-containerd-0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1.scope - libcontainer container 0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1.
Mar 14 00:15:33.813230 containerd[1515]: time="2026-03-14T00:15:33.812715399Z" level=info msg="StartContainer for \"0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1\" returns successfully"
Mar 14 00:15:33.823654 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:15:33.824080 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:15:33.824151 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:15:33.830857 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:15:33.831036 systemd[1]: cri-containerd-0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1.scope: Deactivated successfully.
Mar 14 00:15:33.850607 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:15:33.852345 containerd[1515]: time="2026-03-14T00:15:33.852257620Z" level=info msg="shim disconnected" id=0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1 namespace=k8s.io
Mar 14 00:15:33.852345 containerd[1515]: time="2026-03-14T00:15:33.852320944Z" level=warning msg="cleaning up after shim disconnected" id=0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1 namespace=k8s.io
Mar 14 00:15:33.852345 containerd[1515]: time="2026-03-14T00:15:33.852328155Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:15:34.524060 systemd[1]: run-containerd-runc-k8s.io-0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1-runc.DCCkMx.mount: Deactivated successfully.
Mar 14 00:15:34.524169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1-rootfs.mount: Deactivated successfully.
Mar 14 00:15:34.727364 containerd[1515]: time="2026-03-14T00:15:34.727308170Z" level=info msg="CreateContainer within sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 14 00:15:34.753914 containerd[1515]: time="2026-03-14T00:15:34.753858634Z" level=info msg="CreateContainer within sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37\""
Mar 14 00:15:34.755751 containerd[1515]: time="2026-03-14T00:15:34.755690548Z" level=info msg="StartContainer for \"b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37\""
Mar 14 00:15:34.796946 systemd[1]: Started cri-containerd-b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37.scope - libcontainer container b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37.
Mar 14 00:15:34.823381 systemd[1]: cri-containerd-b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37.scope: Deactivated successfully.
Mar 14 00:15:34.824480 containerd[1515]: time="2026-03-14T00:15:34.824334572Z" level=info msg="StartContainer for \"b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37\" returns successfully"
Mar 14 00:15:34.854082 containerd[1515]: time="2026-03-14T00:15:34.854014527Z" level=info msg="shim disconnected" id=b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37 namespace=k8s.io
Mar 14 00:15:34.854082 containerd[1515]: time="2026-03-14T00:15:34.854061857Z" level=warning msg="cleaning up after shim disconnected" id=b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37 namespace=k8s.io
Mar 14 00:15:34.854082 containerd[1515]: time="2026-03-14T00:15:34.854069048Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:15:35.525807 systemd[1]: run-containerd-runc-k8s.io-b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37-runc.epaIQ3.mount: Deactivated successfully.
Mar 14 00:15:35.526059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37-rootfs.mount: Deactivated successfully.
Mar 14 00:15:35.750892 containerd[1515]: time="2026-03-14T00:15:35.749899629Z" level=info msg="CreateContainer within sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:15:35.770656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2642431950.mount: Deactivated successfully.
Mar 14 00:15:35.781366 containerd[1515]: time="2026-03-14T00:15:35.781248791Z" level=info msg="CreateContainer within sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826\""
Mar 14 00:15:35.782495 containerd[1515]: time="2026-03-14T00:15:35.782134152Z" level=info msg="StartContainer for \"9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826\""
Mar 14 00:15:35.807830 systemd[1]: Started cri-containerd-9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826.scope - libcontainer container 9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826.
Mar 14 00:15:35.829502 systemd[1]: cri-containerd-9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826.scope: Deactivated successfully.
Mar 14 00:15:35.830613 containerd[1515]: time="2026-03-14T00:15:35.830530404Z" level=info msg="StartContainer for \"9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826\" returns successfully"
Mar 14 00:15:35.853452 containerd[1515]: time="2026-03-14T00:15:35.853373387Z" level=info msg="shim disconnected" id=9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826 namespace=k8s.io
Mar 14 00:15:35.853452 containerd[1515]: time="2026-03-14T00:15:35.853427017Z" level=warning msg="cleaning up after shim disconnected" id=9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826 namespace=k8s.io
Mar 14 00:15:35.853452 containerd[1515]: time="2026-03-14T00:15:35.853433978Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:15:36.525726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826-rootfs.mount: Deactivated successfully.
Mar 14 00:15:36.738035 containerd[1515]: time="2026-03-14T00:15:36.737955878Z" level=info msg="CreateContainer within sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:15:36.761551 containerd[1515]: time="2026-03-14T00:15:36.761504536Z" level=info msg="CreateContainer within sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\""
Mar 14 00:15:36.762015 containerd[1515]: time="2026-03-14T00:15:36.761950692Z" level=info msg="StartContainer for \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\""
Mar 14 00:15:36.797823 systemd[1]: Started cri-containerd-15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e.scope - libcontainer container 15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e.
Mar 14 00:15:36.822534 containerd[1515]: time="2026-03-14T00:15:36.822416042Z" level=info msg="StartContainer for \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\" returns successfully"
Mar 14 00:15:36.994040 kubelet[2575]: I0314 00:15:36.994003 2575 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 14 00:15:37.033007 systemd[1]: Created slice kubepods-burstable-pod4088751f_ad66_4eb2_adb3_dab5eacb9316.slice - libcontainer container kubepods-burstable-pod4088751f_ad66_4eb2_adb3_dab5eacb9316.slice.
Mar 14 00:15:37.041629 systemd[1]: Created slice kubepods-burstable-podb8d8fdce_bf2d_47d0_a780_eacab63901e8.slice - libcontainer container kubepods-burstable-podb8d8fdce_bf2d_47d0_a780_eacab63901e8.slice.
Mar 14 00:15:37.083626 kubelet[2575]: I0314 00:15:37.083511 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4088751f-ad66-4eb2-adb3-dab5eacb9316-config-volume\") pod \"coredns-674b8bbfcf-kx9jk\" (UID: \"4088751f-ad66-4eb2-adb3-dab5eacb9316\") " pod="kube-system/coredns-674b8bbfcf-kx9jk"
Mar 14 00:15:37.083626 kubelet[2575]: I0314 00:15:37.083547 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qptn\" (UniqueName: \"kubernetes.io/projected/4088751f-ad66-4eb2-adb3-dab5eacb9316-kube-api-access-7qptn\") pod \"coredns-674b8bbfcf-kx9jk\" (UID: \"4088751f-ad66-4eb2-adb3-dab5eacb9316\") " pod="kube-system/coredns-674b8bbfcf-kx9jk"
Mar 14 00:15:37.083626 kubelet[2575]: I0314 00:15:37.083565 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8d8fdce-bf2d-47d0-a780-eacab63901e8-config-volume\") pod \"coredns-674b8bbfcf-lrzl6\" (UID: \"b8d8fdce-bf2d-47d0-a780-eacab63901e8\") " pod="kube-system/coredns-674b8bbfcf-lrzl6"
Mar 14 00:15:37.083626 kubelet[2575]: I0314 00:15:37.083578 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8q67\" (UniqueName: \"kubernetes.io/projected/b8d8fdce-bf2d-47d0-a780-eacab63901e8-kube-api-access-x8q67\") pod \"coredns-674b8bbfcf-lrzl6\" (UID: \"b8d8fdce-bf2d-47d0-a780-eacab63901e8\") " pod="kube-system/coredns-674b8bbfcf-lrzl6"
Mar 14 00:15:37.341013 containerd[1515]: time="2026-03-14T00:15:37.340628398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kx9jk,Uid:4088751f-ad66-4eb2-adb3-dab5eacb9316,Namespace:kube-system,Attempt:0,}"
Mar 14 00:15:37.346080 containerd[1515]: time="2026-03-14T00:15:37.346060433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lrzl6,Uid:b8d8fdce-bf2d-47d0-a780-eacab63901e8,Namespace:kube-system,Attempt:0,}"
Mar 14 00:15:37.478462 containerd[1515]: time="2026-03-14T00:15:37.478414424Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:37.479247 containerd[1515]: time="2026-03-14T00:15:37.479091946Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 14 00:15:37.480213 containerd[1515]: time="2026-03-14T00:15:37.480040474Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:37.481023 containerd[1515]: time="2026-03-14T00:15:37.481002210Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.969061079s"
Mar 14 00:15:37.481066 containerd[1515]: time="2026-03-14T00:15:37.481026547Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 14 00:15:37.484168 containerd[1515]: time="2026-03-14T00:15:37.484144845Z" level=info msg="CreateContainer within sandbox \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 14 00:15:37.506727 containerd[1515]: time="2026-03-14T00:15:37.506680301Z" level=info msg="CreateContainer within sandbox \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\""
Mar 14 00:15:37.508187 containerd[1515]: time="2026-03-14T00:15:37.507111836Z" level=info msg="StartContainer for \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\""
Mar 14 00:15:37.527813 systemd[1]: run-containerd-runc-k8s.io-15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e-runc.SXNmfN.mount: Deactivated successfully.
Mar 14 00:15:37.539834 systemd[1]: Started cri-containerd-16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd.scope - libcontainer container 16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd.
Mar 14 00:15:37.561837 containerd[1515]: time="2026-03-14T00:15:37.561785732Z" level=info msg="StartContainer for \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\" returns successfully"
Mar 14 00:15:37.758986 kubelet[2575]: I0314 00:15:37.758929 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tvmpt" podStartSLOduration=5.7466045690000005 podStartE2EDuration="15.758916653s" podCreationTimestamp="2026-03-14 00:15:22 +0000 UTC" firstStartedPulling="2026-03-14 00:15:22.499005407 +0000 UTC m=+6.928951031" lastFinishedPulling="2026-03-14 00:15:32.511317491 +0000 UTC m=+16.941263115" observedRunningTime="2026-03-14 00:15:37.758337447 +0000 UTC m=+22.188283091" watchObservedRunningTime="2026-03-14 00:15:37.758916653 +0000 UTC m=+22.188862287"
Mar 14 00:15:37.759251 kubelet[2575]: I0314 00:15:37.759064 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-h9vhj" podStartSLOduration=0.80850118 podStartE2EDuration="15.759045976s" podCreationTimestamp="2026-03-14 00:15:22 +0000 UTC" firstStartedPulling="2026-03-14 00:15:22.531067917 +0000 UTC m=+6.961013551" lastFinishedPulling="2026-03-14 00:15:37.481612713 +0000 UTC m=+21.911558347" observedRunningTime="2026-03-14 00:15:37.745471617 +0000 UTC m=+22.175417241" watchObservedRunningTime="2026-03-14 00:15:37.759045976 +0000 UTC m=+22.188991600"
Mar 14 00:15:40.992944 systemd-networkd[1419]: cilium_host: Link UP
Mar 14 00:15:40.993244 systemd-networkd[1419]: cilium_net: Link UP
Mar 14 00:15:40.993527 systemd-networkd[1419]: cilium_net: Gained carrier
Mar 14 00:15:40.996425 systemd-networkd[1419]: cilium_host: Gained carrier
Mar 14 00:15:41.118492 systemd-networkd[1419]: cilium_vxlan: Link UP
Mar 14 00:15:41.118612 systemd-networkd[1419]: cilium_vxlan: Gained carrier
Mar 14 00:15:41.293729 kernel: NET: Registered PF_ALG protocol family
Mar 14 00:15:41.687868 systemd-networkd[1419]: cilium_host: Gained IPv6LL
Mar 14 00:15:41.752581 systemd-networkd[1419]: cilium_net: Gained IPv6LL
Mar 14 00:15:41.855742 systemd-networkd[1419]: lxc_health: Link UP
Mar 14 00:15:41.864671 systemd-networkd[1419]: lxc_health: Gained carrier
Mar 14 00:15:42.427885 systemd-networkd[1419]: lxc45c9fb3cdf18: Link UP
Mar 14 00:15:42.437434 systemd-networkd[1419]: lxc82cc6e817472: Link UP
Mar 14 00:15:42.446771 kernel: eth0: renamed from tmp62a03
Mar 14 00:15:42.452772 kernel: eth0: renamed from tmpa6c1a
Mar 14 00:15:42.459937 systemd-networkd[1419]: lxc45c9fb3cdf18: Gained carrier
Mar 14 00:15:42.460202 systemd-networkd[1419]: lxc82cc6e817472: Gained carrier
Mar 14 00:15:42.712039 systemd-networkd[1419]: cilium_vxlan: Gained IPv6LL
Mar 14 00:15:43.096090 systemd-networkd[1419]: lxc_health: Gained IPv6LL
Mar 14 00:15:43.480035 systemd-networkd[1419]: lxc45c9fb3cdf18: Gained IPv6LL
Mar 14 00:15:43.673050 systemd-networkd[1419]: lxc82cc6e817472: Gained IPv6LL
Mar 14 00:15:45.066620 containerd[1515]: time="2026-03-14T00:15:45.066445887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:15:45.066620 containerd[1515]: time="2026-03-14T00:15:45.066547309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:15:45.066620 containerd[1515]: time="2026-03-14T00:15:45.066570554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:45.067984 containerd[1515]: time="2026-03-14T00:15:45.066676342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:45.106888 systemd[1]: Started cri-containerd-a6c1a61e89b3be3ad8e80f81c681dc68f76a7fad436fef5db8022a7f62ff9cd5.scope - libcontainer container a6c1a61e89b3be3ad8e80f81c681dc68f76a7fad436fef5db8022a7f62ff9cd5.
Mar 14 00:15:45.113412 containerd[1515]: time="2026-03-14T00:15:45.113198310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:15:45.113412 containerd[1515]: time="2026-03-14T00:15:45.113273683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:15:45.113412 containerd[1515]: time="2026-03-14T00:15:45.113293443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:45.114898 containerd[1515]: time="2026-03-14T00:15:45.114177646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:45.137887 systemd[1]: Started cri-containerd-62a03221c49709ded37bd4e9b55a08d793c3855c27a8c1d42c3ab5f54e1d8104.scope - libcontainer container 62a03221c49709ded37bd4e9b55a08d793c3855c27a8c1d42c3ab5f54e1d8104.
Mar 14 00:15:45.177095 containerd[1515]: time="2026-03-14T00:15:45.176949487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kx9jk,Uid:4088751f-ad66-4eb2-adb3-dab5eacb9316,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6c1a61e89b3be3ad8e80f81c681dc68f76a7fad436fef5db8022a7f62ff9cd5\""
Mar 14 00:15:45.184363 containerd[1515]: time="2026-03-14T00:15:45.183839859Z" level=info msg="CreateContainer within sandbox \"a6c1a61e89b3be3ad8e80f81c681dc68f76a7fad436fef5db8022a7f62ff9cd5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:15:45.207645 containerd[1515]: time="2026-03-14T00:15:45.207608633Z" level=info msg="CreateContainer within sandbox \"a6c1a61e89b3be3ad8e80f81c681dc68f76a7fad436fef5db8022a7f62ff9cd5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0913d4e5add3aea0e2fa62266e1d95c2ce2e36ab6f8f13f87c0e767c5c7dee76\""
Mar 14 00:15:45.209928 containerd[1515]: time="2026-03-14T00:15:45.209908095Z" level=info msg="StartContainer for \"0913d4e5add3aea0e2fa62266e1d95c2ce2e36ab6f8f13f87c0e767c5c7dee76\""
Mar 14 00:15:45.212899 containerd[1515]: time="2026-03-14T00:15:45.212879362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lrzl6,Uid:b8d8fdce-bf2d-47d0-a780-eacab63901e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"62a03221c49709ded37bd4e9b55a08d793c3855c27a8c1d42c3ab5f54e1d8104\""
Mar 14 00:15:45.216818 containerd[1515]: time="2026-03-14T00:15:45.216798177Z" level=info msg="CreateContainer within sandbox \"62a03221c49709ded37bd4e9b55a08d793c3855c27a8c1d42c3ab5f54e1d8104\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:15:45.234843 containerd[1515]: time="2026-03-14T00:15:45.234810045Z" level=info msg="CreateContainer within sandbox \"62a03221c49709ded37bd4e9b55a08d793c3855c27a8c1d42c3ab5f54e1d8104\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8fa56fe6758a2d92f6fbcd0b1d0b004072846eb4abbefc4e6d02f8d550e51e6b\""
Mar 14 00:15:45.236220 containerd[1515]: time="2026-03-14T00:15:45.236122961Z" level=info msg="StartContainer for \"8fa56fe6758a2d92f6fbcd0b1d0b004072846eb4abbefc4e6d02f8d550e51e6b\""
Mar 14 00:15:45.248826 systemd[1]: Started cri-containerd-0913d4e5add3aea0e2fa62266e1d95c2ce2e36ab6f8f13f87c0e767c5c7dee76.scope - libcontainer container 0913d4e5add3aea0e2fa62266e1d95c2ce2e36ab6f8f13f87c0e767c5c7dee76.
Mar 14 00:15:45.269824 systemd[1]: Started cri-containerd-8fa56fe6758a2d92f6fbcd0b1d0b004072846eb4abbefc4e6d02f8d550e51e6b.scope - libcontainer container 8fa56fe6758a2d92f6fbcd0b1d0b004072846eb4abbefc4e6d02f8d550e51e6b.
Mar 14 00:15:45.275279 containerd[1515]: time="2026-03-14T00:15:45.275251744Z" level=info msg="StartContainer for \"0913d4e5add3aea0e2fa62266e1d95c2ce2e36ab6f8f13f87c0e767c5c7dee76\" returns successfully"
Mar 14 00:15:45.299450 containerd[1515]: time="2026-03-14T00:15:45.299407608Z" level=info msg="StartContainer for \"8fa56fe6758a2d92f6fbcd0b1d0b004072846eb4abbefc4e6d02f8d550e51e6b\" returns successfully"
Mar 14 00:15:45.776156 kubelet[2575]: I0314 00:15:45.773903 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lrzl6" podStartSLOduration=23.773884631 podStartE2EDuration="23.773884631s" podCreationTimestamp="2026-03-14 00:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:45.770961156 +0000 UTC m=+30.200906830" watchObservedRunningTime="2026-03-14 00:15:45.773884631 +0000 UTC m=+30.203830296"
Mar 14 00:17:35.680690 systemd[1]: Started sshd@7-204.168.141.220:22-68.220.241.50:44034.service - OpenSSH per-connection server daemon (68.220.241.50:44034).
Mar 14 00:17:36.426981 sshd[3984]: Accepted publickey for core from 68.220.241.50 port 44034 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:17:36.428865 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:36.433447 systemd-logind[1485]: New session 8 of user core.
Mar 14 00:17:36.441961 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 00:17:37.026512 sshd[3984]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:37.034246 systemd[1]: sshd@7-204.168.141.220:22-68.220.241.50:44034.service: Deactivated successfully.
Mar 14 00:17:37.039368 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 00:17:37.041550 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit.
Mar 14 00:17:37.044225 systemd-logind[1485]: Removed session 8.
Mar 14 00:17:42.154620 systemd[1]: Started sshd@8-204.168.141.220:22-68.220.241.50:44036.service - OpenSSH per-connection server daemon (68.220.241.50:44036).
Mar 14 00:17:42.909945 sshd[3999]: Accepted publickey for core from 68.220.241.50 port 44036 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:17:42.912838 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:42.918870 systemd-logind[1485]: New session 9 of user core.
Mar 14 00:17:42.924918 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:17:43.502011 sshd[3999]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:43.510503 systemd[1]: sshd@8-204.168.141.220:22-68.220.241.50:44036.service: Deactivated successfully.
Mar 14 00:17:43.515382 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:17:43.519797 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:17:43.521484 systemd-logind[1485]: Removed session 9.
Mar 14 00:17:48.638119 systemd[1]: Started sshd@9-204.168.141.220:22-68.220.241.50:34386.service - OpenSSH per-connection server daemon (68.220.241.50:34386).
Mar 14 00:17:49.401647 sshd[4014]: Accepted publickey for core from 68.220.241.50 port 34386 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:17:49.402220 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:49.408447 systemd-logind[1485]: New session 10 of user core.
Mar 14 00:17:49.414851 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:17:50.021060 sshd[4014]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:50.026217 systemd[1]: sshd@9-204.168.141.220:22-68.220.241.50:34386.service: Deactivated successfully.
Mar 14 00:17:50.030141 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:17:50.032860 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:17:50.035173 systemd-logind[1485]: Removed session 10.
Mar 14 00:17:50.150631 systemd[1]: Started sshd@10-204.168.141.220:22-68.220.241.50:34388.service - OpenSSH per-connection server daemon (68.220.241.50:34388).
Mar 14 00:17:50.888528 sshd[4027]: Accepted publickey for core from 68.220.241.50 port 34388 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:17:50.891759 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:50.899153 systemd-logind[1485]: New session 11 of user core.
Mar 14 00:17:50.905930 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:17:51.521884 sshd[4027]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:51.529964 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:17:51.531147 systemd[1]: sshd@10-204.168.141.220:22-68.220.241.50:34388.service: Deactivated successfully.
Mar 14 00:17:51.535141 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:17:51.537221 systemd-logind[1485]: Removed session 11.
Mar 14 00:17:51.660015 systemd[1]: Started sshd@11-204.168.141.220:22-68.220.241.50:34396.service - OpenSSH per-connection server daemon (68.220.241.50:34396).
Mar 14 00:17:52.401003 sshd[4038]: Accepted publickey for core from 68.220.241.50 port 34396 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:17:52.404090 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:52.412417 systemd-logind[1485]: New session 12 of user core.
Mar 14 00:17:52.417090 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:17:53.014465 sshd[4038]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:53.021592 systemd[1]: sshd@11-204.168.141.220:22-68.220.241.50:34396.service: Deactivated successfully.
Mar 14 00:17:53.025540 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:17:53.028682 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:17:53.031410 systemd-logind[1485]: Removed session 12.
Mar 14 00:17:58.150385 systemd[1]: Started sshd@12-204.168.141.220:22-68.220.241.50:45390.service - OpenSSH per-connection server daemon (68.220.241.50:45390).
Mar 14 00:17:58.894749 sshd[4053]: Accepted publickey for core from 68.220.241.50 port 45390 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:17:58.897852 sshd[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:58.905804 systemd-logind[1485]: New session 13 of user core.
Mar 14 00:17:58.913995 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:17:59.493275 sshd[4053]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:59.501188 systemd[1]: sshd@12-204.168.141.220:22-68.220.241.50:45390.service: Deactivated successfully.
Mar 14 00:17:59.505291 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:17:59.506597 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:17:59.508691 systemd-logind[1485]: Removed session 13.
Mar 14 00:17:59.631124 systemd[1]: Started sshd@13-204.168.141.220:22-68.220.241.50:45404.service - OpenSSH per-connection server daemon (68.220.241.50:45404).
Mar 14 00:18:00.391378 sshd[4066]: Accepted publickey for core from 68.220.241.50 port 45404 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:18:00.394383 sshd[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:00.402829 systemd-logind[1485]: New session 14 of user core.
Mar 14 00:18:00.407920 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:18:01.008076 sshd[4066]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:01.013677 systemd[1]: sshd@13-204.168.141.220:22-68.220.241.50:45404.service: Deactivated successfully.
Mar 14 00:18:01.017198 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:18:01.020917 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:18:01.023230 systemd-logind[1485]: Removed session 14.
Mar 14 00:18:01.147167 systemd[1]: Started sshd@14-204.168.141.220:22-68.220.241.50:45418.service - OpenSSH per-connection server daemon (68.220.241.50:45418).
Mar 14 00:18:01.908744 sshd[4077]: Accepted publickey for core from 68.220.241.50 port 45418 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:18:01.909952 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:01.916634 systemd-logind[1485]: New session 15 of user core.
Mar 14 00:18:01.919890 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:18:02.905572 sshd[4077]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:02.911679 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:18:02.913259 systemd[1]: sshd@14-204.168.141.220:22-68.220.241.50:45418.service: Deactivated successfully.
Mar 14 00:18:02.917174 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:18:02.919433 systemd-logind[1485]: Removed session 15.
Mar 14 00:18:03.042117 systemd[1]: Started sshd@15-204.168.141.220:22-68.220.241.50:60352.service - OpenSSH per-connection server daemon (68.220.241.50:60352).
Mar 14 00:18:03.800982 sshd[4095]: Accepted publickey for core from 68.220.241.50 port 60352 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:18:03.803476 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:03.810694 systemd-logind[1485]: New session 16 of user core.
Mar 14 00:18:03.813845 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:18:04.478748 sshd[4095]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:04.482374 systemd[1]: sshd@15-204.168.141.220:22-68.220.241.50:60352.service: Deactivated successfully.
Mar 14 00:18:04.484795 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:18:04.486673 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:18:04.488336 systemd-logind[1485]: Removed session 16.
Mar 14 00:18:04.615239 systemd[1]: Started sshd@16-204.168.141.220:22-68.220.241.50:60360.service - OpenSSH per-connection server daemon (68.220.241.50:60360).
Mar 14 00:18:05.373933 sshd[4106]: Accepted publickey for core from 68.220.241.50 port 60360 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:18:05.376181 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:05.382058 systemd-logind[1485]: New session 17 of user core.
Mar 14 00:18:05.395937 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:18:05.978575 sshd[4106]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:05.983513 systemd[1]: sshd@16-204.168.141.220:22-68.220.241.50:60360.service: Deactivated successfully.
Mar 14 00:18:05.985576 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:18:05.987230 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:18:05.989078 systemd-logind[1485]: Removed session 17.
Mar 14 00:18:11.118212 systemd[1]: Started sshd@17-204.168.141.220:22-68.220.241.50:60364.service - OpenSSH per-connection server daemon (68.220.241.50:60364).
Mar 14 00:18:11.868829 sshd[4121]: Accepted publickey for core from 68.220.241.50 port 60364 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:18:11.871369 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:11.878074 systemd-logind[1485]: New session 18 of user core.
Mar 14 00:18:11.887965 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:18:12.463038 sshd[4121]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:12.468465 systemd[1]: sshd@17-204.168.141.220:22-68.220.241.50:60364.service: Deactivated successfully.
Mar 14 00:18:12.472261 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:18:12.473840 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:18:12.476150 systemd-logind[1485]: Removed session 18.
Mar 14 00:18:17.605206 systemd[1]: Started sshd@18-204.168.141.220:22-68.220.241.50:39994.service - OpenSSH per-connection server daemon (68.220.241.50:39994).
Mar 14 00:18:18.369476 sshd[4137]: Accepted publickey for core from 68.220.241.50 port 39994 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:18:18.372580 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:18.380485 systemd-logind[1485]: New session 19 of user core.
Mar 14 00:18:18.384924 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:18:18.975033 sshd[4137]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:18.982503 systemd[1]: sshd@18-204.168.141.220:22-68.220.241.50:39994.service: Deactivated successfully.
Mar 14 00:18:18.986559 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:18:18.987886 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:18:18.989951 systemd-logind[1485]: Removed session 19.
Mar 14 00:18:19.114132 systemd[1]: Started sshd@19-204.168.141.220:22-68.220.241.50:40002.service - OpenSSH per-connection server daemon (68.220.241.50:40002).
Mar 14 00:18:19.865043 sshd[4150]: Accepted publickey for core from 68.220.241.50 port 40002 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c
Mar 14 00:18:19.867877 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:19.876458 systemd-logind[1485]: New session 20 of user core.
Mar 14 00:18:19.885971 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:18:21.568322 kubelet[2575]: I0314 00:18:21.566889 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kx9jk" podStartSLOduration=179.566867372 podStartE2EDuration="2m59.566867372s" podCreationTimestamp="2026-03-14 00:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:45.825629317 +0000 UTC m=+30.255574951" watchObservedRunningTime="2026-03-14 00:18:21.566867372 +0000 UTC m=+185.996813036"
Mar 14 00:18:21.595312 containerd[1515]: time="2026-03-14T00:18:21.595093768Z" level=info msg="StopContainer for \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\" with timeout 30 (s)"
Mar 14 00:18:21.597511 containerd[1515]: time="2026-03-14T00:18:21.597034511Z" level=info msg="Stop container \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\" with signal terminated"
Mar 14 00:18:21.616383 systemd[1]: cri-containerd-16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd.scope: Deactivated successfully.
Mar 14 00:18:21.634918 containerd[1515]: time="2026-03-14T00:18:21.634880213Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:18:21.642195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd-rootfs.mount: Deactivated successfully.
Mar 14 00:18:21.644437 containerd[1515]: time="2026-03-14T00:18:21.644392089Z" level=info msg="StopContainer for \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\" with timeout 2 (s)"
Mar 14 00:18:21.644657 containerd[1515]: time="2026-03-14T00:18:21.644632668Z" level=info msg="Stop container \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\" with signal terminated"
Mar 14 00:18:21.655436 systemd-networkd[1419]: lxc_health: Link DOWN
Mar 14 00:18:21.655443 systemd-networkd[1419]: lxc_health: Lost carrier
Mar 14 00:18:21.665977 containerd[1515]: time="2026-03-14T00:18:21.665841059Z" level=info msg="shim disconnected" id=16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd namespace=k8s.io
Mar 14 00:18:21.665977 containerd[1515]: time="2026-03-14T00:18:21.665912064Z" level=warning msg="cleaning up after shim disconnected" id=16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd namespace=k8s.io
Mar 14 00:18:21.665977 containerd[1515]: time="2026-03-14T00:18:21.665923892Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:18:21.675688 systemd[1]: cri-containerd-15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e.scope: Deactivated successfully.
Mar 14 00:18:21.675957 systemd[1]: cri-containerd-15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e.scope: Consumed 5.793s CPU time.
Mar 14 00:18:21.682876 containerd[1515]: time="2026-03-14T00:18:21.682831710Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:18:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:18:21.686057 containerd[1515]: time="2026-03-14T00:18:21.686018175Z" level=info msg="StopContainer for \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\" returns successfully"
Mar 14 00:18:21.686865 containerd[1515]: time="2026-03-14T00:18:21.686842438Z" level=info msg="StopPodSandbox for \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\""
Mar 14 00:18:21.687572 containerd[1515]: time="2026-03-14T00:18:21.687454775Z" level=info msg="Container to stop \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:18:21.689043 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5-shm.mount: Deactivated successfully.
Mar 14 00:18:21.699078 systemd[1]: cri-containerd-3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5.scope: Deactivated successfully.
Mar 14 00:18:21.712278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e-rootfs.mount: Deactivated successfully.
Mar 14 00:18:21.720567 containerd[1515]: time="2026-03-14T00:18:21.720505255Z" level=info msg="shim disconnected" id=15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e namespace=k8s.io
Mar 14 00:18:21.720833 containerd[1515]: time="2026-03-14T00:18:21.720814407Z" level=warning msg="cleaning up after shim disconnected" id=15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e namespace=k8s.io
Mar 14 00:18:21.720885 containerd[1515]: time="2026-03-14T00:18:21.720873856Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:18:21.736036 containerd[1515]: time="2026-03-14T00:18:21.735955422Z" level=info msg="shim disconnected" id=3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5 namespace=k8s.io
Mar 14 00:18:21.736036 containerd[1515]: time="2026-03-14T00:18:21.736026137Z" level=warning msg="cleaning up after shim disconnected" id=3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5 namespace=k8s.io
Mar 14 00:18:21.736036 containerd[1515]: time="2026-03-14T00:18:21.736033789Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:18:21.746468 containerd[1515]: time="2026-03-14T00:18:21.746352011Z" level=info msg="StopContainer for \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\" returns successfully"
Mar 14 00:18:21.746959 containerd[1515]: time="2026-03-14T00:18:21.746794022Z" level=info msg="StopPodSandbox for \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\""
Mar 14 00:18:21.746959 containerd[1515]: time="2026-03-14T00:18:21.746822425Z" level=info msg="Container to stop \"90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:18:21.746959 containerd[1515]: time="2026-03-14T00:18:21.746834052Z" level=info msg="Container to stop \"b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:18:21.746959 containerd[1515]: time="2026-03-14T00:18:21.746843887Z" level=info msg="Container to stop \"9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:18:21.746959 containerd[1515]: time="2026-03-14T00:18:21.746853702Z" level=info msg="Container to stop \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:18:21.746959 containerd[1515]: time="2026-03-14T00:18:21.746865890Z" level=info msg="Container to stop \"0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:18:21.752747 containerd[1515]: time="2026-03-14T00:18:21.752629790Z" level=info msg="TearDown network for sandbox \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\" successfully"
Mar 14 00:18:21.752747 containerd[1515]: time="2026-03-14T00:18:21.752650932Z" level=info msg="StopPodSandbox for \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\" returns successfully"
Mar 14 00:18:21.756063 systemd[1]: cri-containerd-718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe.scope: Deactivated successfully.
Mar 14 00:18:21.779225 containerd[1515]: time="2026-03-14T00:18:21.779119919Z" level=info msg="shim disconnected" id=718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe namespace=k8s.io
Mar 14 00:18:21.779225 containerd[1515]: time="2026-03-14T00:18:21.779172749Z" level=warning msg="cleaning up after shim disconnected" id=718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe namespace=k8s.io
Mar 14 00:18:21.779225 containerd[1515]: time="2026-03-14T00:18:21.779181051Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:18:21.790615 containerd[1515]: time="2026-03-14T00:18:21.790571116Z" level=info msg="TearDown network for sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" successfully"
Mar 14 00:18:21.790615 containerd[1515]: time="2026-03-14T00:18:21.790601151Z" level=info msg="StopPodSandbox for \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" returns successfully"
Mar 14 00:18:21.908774 kubelet[2575]: I0314 00:18:21.907180 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhj9z\" (UniqueName: \"kubernetes.io/projected/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-kube-api-access-qhj9z\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.908774 kubelet[2575]: I0314 00:18:21.907246 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cilium-cgroup\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.908774 kubelet[2575]: I0314 00:18:21.907275 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmths\" (UniqueName: \"kubernetes.io/projected/3f93cf3e-e6ef-4e01-a82a-c60413993d66-kube-api-access-pmths\") pod \"3f93cf3e-e6ef-4e01-a82a-c60413993d66\" (UID: \"3f93cf3e-e6ef-4e01-a82a-c60413993d66\") "
Mar 14 00:18:21.908774 kubelet[2575]: I0314 00:18:21.907296 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cilium-run\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.908774 kubelet[2575]: I0314 00:18:21.907318 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-bpf-maps\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.908774 kubelet[2575]: I0314 00:18:21.907340 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cni-path\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.909290 kubelet[2575]: I0314 00:18:21.907365 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f93cf3e-e6ef-4e01-a82a-c60413993d66-cilium-config-path\") pod \"3f93cf3e-e6ef-4e01-a82a-c60413993d66\" (UID: \"3f93cf3e-e6ef-4e01-a82a-c60413993d66\") "
Mar 14 00:18:21.909290 kubelet[2575]: I0314 00:18:21.907388 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-xtables-lock\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.909290 kubelet[2575]: I0314 00:18:21.907411 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cilium-config-path\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.909290 kubelet[2575]: I0314 00:18:21.907434 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-hubble-tls\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.909290 kubelet[2575]: I0314 00:18:21.907460 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-hostproc\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.909290 kubelet[2575]: I0314 00:18:21.907481 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-lib-modules\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.909527 kubelet[2575]: I0314 00:18:21.907510 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-host-proc-sys-net\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.909527 kubelet[2575]: I0314 00:18:21.907537 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-host-proc-sys-kernel\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.909527 kubelet[2575]: I0314 00:18:21.907559 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-etc-cni-netd\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.909527 kubelet[2575]: I0314 00:18:21.907586 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-clustermesh-secrets\") pod \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\" (UID: \"b8472b5a-90cc-4011-b9f9-579b6f6b71e9\") "
Mar 14 00:18:21.909527 kubelet[2575]: I0314 00:18:21.908317 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:21.910994 kubelet[2575]: I0314 00:18:21.910477 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:21.914948 kubelet[2575]: I0314 00:18:21.914912 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:21.915234 kubelet[2575]: I0314 00:18:21.915158 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:21.915402 kubelet[2575]: I0314 00:18:21.915334 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cni-path" (OuterVolumeSpecName: "cni-path") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:21.917607 kubelet[2575]: I0314 00:18:21.916816 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-hostproc" (OuterVolumeSpecName: "hostproc") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:21.917607 kubelet[2575]: I0314 00:18:21.916916 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:21.917607 kubelet[2575]: I0314 00:18:21.916955 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:21.917607 kubelet[2575]: I0314 00:18:21.916985 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:21.917607 kubelet[2575]: I0314 00:18:21.917015 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:21.925852 kubelet[2575]: I0314 00:18:21.925000 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-kube-api-access-qhj9z" (OuterVolumeSpecName: "kube-api-access-qhj9z") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "kube-api-access-qhj9z". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:18:21.925852 kubelet[2575]: I0314 00:18:21.925163 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f93cf3e-e6ef-4e01-a82a-c60413993d66-kube-api-access-pmths" (OuterVolumeSpecName: "kube-api-access-pmths") pod "3f93cf3e-e6ef-4e01-a82a-c60413993d66" (UID: "3f93cf3e-e6ef-4e01-a82a-c60413993d66"). InnerVolumeSpecName "kube-api-access-pmths". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:18:21.925852 kubelet[2575]: I0314 00:18:21.925251 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 14 00:18:21.929042 kubelet[2575]: I0314 00:18:21.928985 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f93cf3e-e6ef-4e01-a82a-c60413993d66-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3f93cf3e-e6ef-4e01-a82a-c60413993d66" (UID: "3f93cf3e-e6ef-4e01-a82a-c60413993d66"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:18:21.931105 kubelet[2575]: I0314 00:18:21.930881 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:18:21.931476 kubelet[2575]: I0314 00:18:21.931434 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b8472b5a-90cc-4011-b9f9-579b6f6b71e9" (UID: "b8472b5a-90cc-4011-b9f9-579b6f6b71e9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:18:22.008879 kubelet[2575]: I0314 00:18:22.008810 2575 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cilium-cgroup\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009267 kubelet[2575]: I0314 00:18:22.008877 2575 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pmths\" (UniqueName: \"kubernetes.io/projected/3f93cf3e-e6ef-4e01-a82a-c60413993d66-kube-api-access-pmths\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009267 kubelet[2575]: I0314 00:18:22.008923 2575 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cilium-run\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009267 kubelet[2575]: I0314 00:18:22.008947 2575 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-bpf-maps\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009267 kubelet[2575]: I0314 00:18:22.008971 2575 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cni-path\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009267 kubelet[2575]: I0314 00:18:22.008992 2575 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f93cf3e-e6ef-4e01-a82a-c60413993d66-cilium-config-path\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009267 kubelet[2575]: I0314 00:18:22.009013 2575 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-xtables-lock\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009267 kubelet[2575]: I0314 00:18:22.009037 2575 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-cilium-config-path\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009267 kubelet[2575]: I0314 00:18:22.009061 2575 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-hubble-tls\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009649 kubelet[2575]: I0314 00:18:22.009231 2575 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-hostproc\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009649 kubelet[2575]: I0314 00:18:22.009252 2575 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-lib-modules\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009649 kubelet[2575]: I0314 00:18:22.009275 2575 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-host-proc-sys-net\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009649 kubelet[2575]: I0314 00:18:22.009302 2575 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009649 kubelet[2575]: I0314 00:18:22.009332 2575 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-etc-cni-netd\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009649 kubelet[2575]: I0314 00:18:22.009354 2575 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-clustermesh-secrets\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.009649 kubelet[2575]: I0314 00:18:22.009378 2575 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qhj9z\" (UniqueName: \"kubernetes.io/projected/b8472b5a-90cc-4011-b9f9-579b6f6b71e9-kube-api-access-qhj9z\") on node \"ci-4081-3-6-n-e97f419eb8\" DevicePath \"\""
Mar 14 00:18:22.082411 kubelet[2575]: I0314 00:18:22.081910 2575 scope.go:117] "RemoveContainer" containerID="16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd"
Mar 14 00:18:22.085603 containerd[1515]: time="2026-03-14T00:18:22.085562780Z" level=info msg="RemoveContainer for \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\""
Mar 14 00:18:22.094842 containerd[1515]: time="2026-03-14T00:18:22.093991888Z" level=info msg="RemoveContainer for \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\" returns successfully"
Mar 14 00:18:22.098257 kubelet[2575]: I0314 00:18:22.097050 2575 scope.go:117] "RemoveContainer" containerID="16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd"
Mar 14 00:18:22.098377 containerd[1515]: time="2026-03-14T00:18:22.098079932Z" level=error msg="ContainerStatus for \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\": not found"
Mar 14 00:18:22.100036 kubelet[2575]: E0314 00:18:22.099975 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\": not found" containerID="16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd"
Mar 14 00:18:22.100169 kubelet[2575]: I0314 00:18:22.100065 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd"} err="failed to get container status \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\": rpc error: code = NotFound desc = an error occurred when try to find container \"16b5af9080fb6e52279a4233e2f629b5116fc836477bb268443a7015eaff2efd\": not found"
Mar 14 00:18:22.100169 kubelet[2575]: I0314 00:18:22.100107 2575 scope.go:117] "RemoveContainer" containerID="15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e"
Mar 14 00:18:22.103254 containerd[1515]: time="2026-03-14T00:18:22.102834684Z" level=info msg="RemoveContainer for \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\""
Mar 14 00:18:22.104347 systemd[1]: Removed slice kubepods-besteffort-pod3f93cf3e_e6ef_4e01_a82a_c60413993d66.slice - libcontainer container kubepods-besteffort-pod3f93cf3e_e6ef_4e01_a82a_c60413993d66.slice.
Mar 14 00:18:22.114328 containerd[1515]: time="2026-03-14T00:18:22.114259961Z" level=info msg="RemoveContainer for \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\" returns successfully" Mar 14 00:18:22.118353 kubelet[2575]: I0314 00:18:22.115974 2575 scope.go:117] "RemoveContainer" containerID="9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826" Mar 14 00:18:22.117877 systemd[1]: Removed slice kubepods-burstable-podb8472b5a_90cc_4011_b9f9_579b6f6b71e9.slice - libcontainer container kubepods-burstable-podb8472b5a_90cc_4011_b9f9_579b6f6b71e9.slice. Mar 14 00:18:22.120108 containerd[1515]: time="2026-03-14T00:18:22.117603272Z" level=info msg="RemoveContainer for \"9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826\"" Mar 14 00:18:22.118018 systemd[1]: kubepods-burstable-podb8472b5a_90cc_4011_b9f9_579b6f6b71e9.slice: Consumed 5.866s CPU time. Mar 14 00:18:22.127630 containerd[1515]: time="2026-03-14T00:18:22.127564661Z" level=info msg="RemoveContainer for \"9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826\" returns successfully" Mar 14 00:18:22.128608 kubelet[2575]: I0314 00:18:22.128562 2575 scope.go:117] "RemoveContainer" containerID="b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37" Mar 14 00:18:22.131581 containerd[1515]: time="2026-03-14T00:18:22.131390663Z" level=info msg="RemoveContainer for \"b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37\"" Mar 14 00:18:22.136039 containerd[1515]: time="2026-03-14T00:18:22.135986556Z" level=info msg="RemoveContainer for \"b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37\" returns successfully" Mar 14 00:18:22.136363 kubelet[2575]: I0314 00:18:22.136327 2575 scope.go:117] "RemoveContainer" containerID="0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1" Mar 14 00:18:22.140468 containerd[1515]: time="2026-03-14T00:18:22.140396302Z" level=info msg="RemoveContainer for 
\"0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1\"" Mar 14 00:18:22.146422 containerd[1515]: time="2026-03-14T00:18:22.146351098Z" level=info msg="RemoveContainer for \"0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1\" returns successfully" Mar 14 00:18:22.146642 kubelet[2575]: I0314 00:18:22.146558 2575 scope.go:117] "RemoveContainer" containerID="90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7" Mar 14 00:18:22.147903 containerd[1515]: time="2026-03-14T00:18:22.147864172Z" level=info msg="RemoveContainer for \"90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7\"" Mar 14 00:18:22.156255 containerd[1515]: time="2026-03-14T00:18:22.155827041Z" level=info msg="RemoveContainer for \"90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7\" returns successfully" Mar 14 00:18:22.157931 kubelet[2575]: I0314 00:18:22.157810 2575 scope.go:117] "RemoveContainer" containerID="15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e" Mar 14 00:18:22.158662 containerd[1515]: time="2026-03-14T00:18:22.158325698Z" level=error msg="ContainerStatus for \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\": not found" Mar 14 00:18:22.160837 kubelet[2575]: E0314 00:18:22.158534 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\": not found" containerID="15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e" Mar 14 00:18:22.160837 kubelet[2575]: I0314 00:18:22.158561 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e"} err="failed to get 
container status \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\": rpc error: code = NotFound desc = an error occurred when try to find container \"15b36f9980aef25d51c352b2725a2b03f5062af8e044d355cda44c224fd2a36e\": not found" Mar 14 00:18:22.160837 kubelet[2575]: I0314 00:18:22.158584 2575 scope.go:117] "RemoveContainer" containerID="9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826" Mar 14 00:18:22.161082 containerd[1515]: time="2026-03-14T00:18:22.159418413Z" level=error msg="ContainerStatus for \"9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826\": not found" Mar 14 00:18:22.161936 kubelet[2575]: E0314 00:18:22.161819 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826\": not found" containerID="9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826" Mar 14 00:18:22.161936 kubelet[2575]: I0314 00:18:22.161852 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826"} err="failed to get container status \"9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826\": rpc error: code = NotFound desc = an error occurred when try to find container \"9742e0d5bc69fc6c7af663b2d5d8761f42b6c2ecdc460aac0fe74f93bdf7a826\": not found" Mar 14 00:18:22.161936 kubelet[2575]: I0314 00:18:22.161872 2575 scope.go:117] "RemoveContainer" containerID="b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37" Mar 14 00:18:22.164590 containerd[1515]: time="2026-03-14T00:18:22.163594268Z" level=error msg="ContainerStatus for 
\"b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37\": not found" Mar 14 00:18:22.164804 kubelet[2575]: E0314 00:18:22.164726 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37\": not found" containerID="b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37" Mar 14 00:18:22.164804 kubelet[2575]: I0314 00:18:22.164746 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37"} err="failed to get container status \"b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9c7e3f734c6cf00c4a3221f0961fad871bba59b87a16856f1f46fbbf7d28b37\": not found" Mar 14 00:18:22.164804 kubelet[2575]: I0314 00:18:22.164764 2575 scope.go:117] "RemoveContainer" containerID="0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1" Mar 14 00:18:22.164976 containerd[1515]: time="2026-03-14T00:18:22.164928534Z" level=error msg="ContainerStatus for \"0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1\": not found" Mar 14 00:18:22.165046 kubelet[2575]: E0314 00:18:22.165029 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1\": not found" 
containerID="0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1" Mar 14 00:18:22.165067 kubelet[2575]: I0314 00:18:22.165049 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1"} err="failed to get container status \"0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"0565441a93e8daae42945a7c134640f25c9270485eb0dfd8af478a8058a8f0d1\": not found" Mar 14 00:18:22.165067 kubelet[2575]: I0314 00:18:22.165059 2575 scope.go:117] "RemoveContainer" containerID="90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7" Mar 14 00:18:22.165291 containerd[1515]: time="2026-03-14T00:18:22.165272748Z" level=error msg="ContainerStatus for \"90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7\": not found" Mar 14 00:18:22.165408 kubelet[2575]: E0314 00:18:22.165396 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7\": not found" containerID="90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7" Mar 14 00:18:22.165514 kubelet[2575]: I0314 00:18:22.165501 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7"} err="failed to get container status \"90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7\": rpc error: code = NotFound desc = an error occurred when try to find container \"90b10bcfba7e04c9a9b04ac1ac4efc117a6343fea76e24279293a75e98055ca7\": not found" Mar 14 
00:18:22.615619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5-rootfs.mount: Deactivated successfully. Mar 14 00:18:22.615846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe-rootfs.mount: Deactivated successfully. Mar 14 00:18:22.615978 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe-shm.mount: Deactivated successfully. Mar 14 00:18:22.616112 systemd[1]: var-lib-kubelet-pods-3f93cf3e\x2de6ef\x2d4e01\x2da82a\x2dc60413993d66-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpmths.mount: Deactivated successfully. Mar 14 00:18:22.616271 systemd[1]: var-lib-kubelet-pods-b8472b5a\x2d90cc\x2d4011\x2db9f9\x2d579b6f6b71e9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 14 00:18:22.616417 systemd[1]: var-lib-kubelet-pods-b8472b5a\x2d90cc\x2d4011\x2db9f9\x2d579b6f6b71e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqhj9z.mount: Deactivated successfully. Mar 14 00:18:22.616548 systemd[1]: var-lib-kubelet-pods-b8472b5a\x2d90cc\x2d4011\x2db9f9\x2d579b6f6b71e9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 14 00:18:23.645378 sshd[4150]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:23.656863 kubelet[2575]: I0314 00:18:23.656808 2575 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f93cf3e-e6ef-4e01-a82a-c60413993d66" path="/var/lib/kubelet/pods/3f93cf3e-e6ef-4e01-a82a-c60413993d66/volumes" Mar 14 00:18:23.658334 kubelet[2575]: I0314 00:18:23.657951 2575 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8472b5a-90cc-4011-b9f9-579b6f6b71e9" path="/var/lib/kubelet/pods/b8472b5a-90cc-4011-b9f9-579b6f6b71e9/volumes" Mar 14 00:18:23.659249 systemd[1]: sshd@19-204.168.141.220:22-68.220.241.50:40002.service: Deactivated successfully. Mar 14 00:18:23.663310 systemd[1]: session-20.scope: Deactivated successfully. Mar 14 00:18:23.664905 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit. Mar 14 00:18:23.666892 systemd-logind[1485]: Removed session 20. Mar 14 00:18:23.781652 systemd[1]: Started sshd@20-204.168.141.220:22-68.220.241.50:44244.service - OpenSSH per-connection server daemon (68.220.241.50:44244). Mar 14 00:18:24.536767 sshd[4321]: Accepted publickey for core from 68.220.241.50 port 44244 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:24.539005 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:24.547304 systemd-logind[1485]: New session 21 of user core. Mar 14 00:18:24.553939 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 14 00:18:25.559617 systemd[1]: Created slice kubepods-burstable-pod6041e46d_baed_40a6_93f9_ee1d8cbeebec.slice - libcontainer container kubepods-burstable-pod6041e46d_baed_40a6_93f9_ee1d8cbeebec.slice. Mar 14 00:18:25.684544 sshd[4321]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:25.687905 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit. 
Mar 14 00:18:25.690854 systemd[1]: sshd@20-204.168.141.220:22-68.220.241.50:44244.service: Deactivated successfully. Mar 14 00:18:25.693601 systemd[1]: session-21.scope: Deactivated successfully. Mar 14 00:18:25.694666 systemd-logind[1485]: Removed session 21. Mar 14 00:18:25.731267 kubelet[2575]: I0314 00:18:25.731162 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6041e46d-baed-40a6-93f9-ee1d8cbeebec-bpf-maps\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.731267 kubelet[2575]: I0314 00:18:25.731236 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6041e46d-baed-40a6-93f9-ee1d8cbeebec-lib-modules\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.731267 kubelet[2575]: I0314 00:18:25.731270 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6041e46d-baed-40a6-93f9-ee1d8cbeebec-cilium-ipsec-secrets\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.732059 kubelet[2575]: I0314 00:18:25.731295 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6041e46d-baed-40a6-93f9-ee1d8cbeebec-host-proc-sys-net\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.732059 kubelet[2575]: I0314 00:18:25.731321 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/6041e46d-baed-40a6-93f9-ee1d8cbeebec-clustermesh-secrets\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.732059 kubelet[2575]: I0314 00:18:25.731345 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x99d\" (UniqueName: \"kubernetes.io/projected/6041e46d-baed-40a6-93f9-ee1d8cbeebec-kube-api-access-8x99d\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.732059 kubelet[2575]: I0314 00:18:25.731369 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6041e46d-baed-40a6-93f9-ee1d8cbeebec-etc-cni-netd\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.732059 kubelet[2575]: I0314 00:18:25.731391 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6041e46d-baed-40a6-93f9-ee1d8cbeebec-hubble-tls\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.732059 kubelet[2575]: I0314 00:18:25.731412 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6041e46d-baed-40a6-93f9-ee1d8cbeebec-hostproc\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.732310 kubelet[2575]: I0314 00:18:25.731432 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6041e46d-baed-40a6-93f9-ee1d8cbeebec-cilium-cgroup\") pod \"cilium-67xjb\" (UID: 
\"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.732310 kubelet[2575]: I0314 00:18:25.731454 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6041e46d-baed-40a6-93f9-ee1d8cbeebec-xtables-lock\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.732310 kubelet[2575]: I0314 00:18:25.731477 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6041e46d-baed-40a6-93f9-ee1d8cbeebec-cilium-config-path\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.732310 kubelet[2575]: I0314 00:18:25.731499 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6041e46d-baed-40a6-93f9-ee1d8cbeebec-cilium-run\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.732310 kubelet[2575]: I0314 00:18:25.731521 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6041e46d-baed-40a6-93f9-ee1d8cbeebec-cni-path\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.732310 kubelet[2575]: I0314 00:18:25.731546 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6041e46d-baed-40a6-93f9-ee1d8cbeebec-host-proc-sys-kernel\") pod \"cilium-67xjb\" (UID: \"6041e46d-baed-40a6-93f9-ee1d8cbeebec\") " pod="kube-system/cilium-67xjb" Mar 14 00:18:25.740077 kubelet[2575]: E0314 
00:18:25.740025 2575 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 14 00:18:25.821642 systemd[1]: Started sshd@21-204.168.141.220:22-68.220.241.50:44252.service - OpenSSH per-connection server daemon (68.220.241.50:44252). Mar 14 00:18:26.168008 containerd[1515]: time="2026-03-14T00:18:26.167834237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-67xjb,Uid:6041e46d-baed-40a6-93f9-ee1d8cbeebec,Namespace:kube-system,Attempt:0,}" Mar 14 00:18:26.215767 containerd[1515]: time="2026-03-14T00:18:26.215285313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:18:26.215767 containerd[1515]: time="2026-03-14T00:18:26.215356699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:18:26.215767 containerd[1515]: time="2026-03-14T00:18:26.215375267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:26.215767 containerd[1515]: time="2026-03-14T00:18:26.215505943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:18:26.243926 systemd[1]: Started cri-containerd-df47730e1444ef86df12d32d6cc119bd18a124f7f909f03c4ae838390f61f192.scope - libcontainer container df47730e1444ef86df12d32d6cc119bd18a124f7f909f03c4ae838390f61f192. 
Mar 14 00:18:26.283639 containerd[1515]: time="2026-03-14T00:18:26.283593407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-67xjb,Uid:6041e46d-baed-40a6-93f9-ee1d8cbeebec,Namespace:kube-system,Attempt:0,} returns sandbox id \"df47730e1444ef86df12d32d6cc119bd18a124f7f909f03c4ae838390f61f192\"" Mar 14 00:18:26.288141 containerd[1515]: time="2026-03-14T00:18:26.288034549Z" level=info msg="CreateContainer within sandbox \"df47730e1444ef86df12d32d6cc119bd18a124f7f909f03c4ae838390f61f192\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:18:26.301247 containerd[1515]: time="2026-03-14T00:18:26.301194573Z" level=info msg="CreateContainer within sandbox \"df47730e1444ef86df12d32d6cc119bd18a124f7f909f03c4ae838390f61f192\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"387d9c6cc12844be3ae988d191da320be0d974ee6f70949a335722db5e0cd4f2\"" Mar 14 00:18:26.302657 containerd[1515]: time="2026-03-14T00:18:26.302042532Z" level=info msg="StartContainer for \"387d9c6cc12844be3ae988d191da320be0d974ee6f70949a335722db5e0cd4f2\"" Mar 14 00:18:26.331834 systemd[1]: Started cri-containerd-387d9c6cc12844be3ae988d191da320be0d974ee6f70949a335722db5e0cd4f2.scope - libcontainer container 387d9c6cc12844be3ae988d191da320be0d974ee6f70949a335722db5e0cd4f2. Mar 14 00:18:26.354551 containerd[1515]: time="2026-03-14T00:18:26.354515870Z" level=info msg="StartContainer for \"387d9c6cc12844be3ae988d191da320be0d974ee6f70949a335722db5e0cd4f2\" returns successfully" Mar 14 00:18:26.363281 systemd[1]: cri-containerd-387d9c6cc12844be3ae988d191da320be0d974ee6f70949a335722db5e0cd4f2.scope: Deactivated successfully. 
Mar 14 00:18:26.396323 containerd[1515]: time="2026-03-14T00:18:26.396261923Z" level=info msg="shim disconnected" id=387d9c6cc12844be3ae988d191da320be0d974ee6f70949a335722db5e0cd4f2 namespace=k8s.io Mar 14 00:18:26.396323 containerd[1515]: time="2026-03-14T00:18:26.396307812Z" level=warning msg="cleaning up after shim disconnected" id=387d9c6cc12844be3ae988d191da320be0d974ee6f70949a335722db5e0cd4f2 namespace=k8s.io Mar 14 00:18:26.396323 containerd[1515]: time="2026-03-14T00:18:26.396333650Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:26.587525 sshd[4333]: Accepted publickey for core from 68.220.241.50 port 44252 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:26.590408 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:26.598279 systemd-logind[1485]: New session 22 of user core. Mar 14 00:18:26.606034 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 14 00:18:27.107823 sshd[4333]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:27.119630 systemd[1]: sshd@21-204.168.141.220:22-68.220.241.50:44252.service: Deactivated successfully. Mar 14 00:18:27.120157 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit. Mar 14 00:18:27.124919 containerd[1515]: time="2026-03-14T00:18:27.124861347Z" level=info msg="CreateContainer within sandbox \"df47730e1444ef86df12d32d6cc119bd18a124f7f909f03c4ae838390f61f192\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:18:27.127936 systemd[1]: session-22.scope: Deactivated successfully. Mar 14 00:18:27.130659 systemd-logind[1485]: Removed session 22. 
Mar 14 00:18:27.154569 containerd[1515]: time="2026-03-14T00:18:27.154060884Z" level=info msg="CreateContainer within sandbox \"df47730e1444ef86df12d32d6cc119bd18a124f7f909f03c4ae838390f61f192\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"06fce2868b3e19688e7ceadf69c4ec423eb2f43c756ce2a734b2ac3510311c09\"" Mar 14 00:18:27.156908 containerd[1515]: time="2026-03-14T00:18:27.156864207Z" level=info msg="StartContainer for \"06fce2868b3e19688e7ceadf69c4ec423eb2f43c756ce2a734b2ac3510311c09\"" Mar 14 00:18:27.199843 systemd[1]: Started cri-containerd-06fce2868b3e19688e7ceadf69c4ec423eb2f43c756ce2a734b2ac3510311c09.scope - libcontainer container 06fce2868b3e19688e7ceadf69c4ec423eb2f43c756ce2a734b2ac3510311c09. Mar 14 00:18:27.221510 containerd[1515]: time="2026-03-14T00:18:27.221378778Z" level=info msg="StartContainer for \"06fce2868b3e19688e7ceadf69c4ec423eb2f43c756ce2a734b2ac3510311c09\" returns successfully" Mar 14 00:18:27.234916 systemd[1]: Started sshd@22-204.168.141.220:22-68.220.241.50:44258.service - OpenSSH per-connection server daemon (68.220.241.50:44258). Mar 14 00:18:27.235144 systemd[1]: cri-containerd-06fce2868b3e19688e7ceadf69c4ec423eb2f43c756ce2a734b2ac3510311c09.scope: Deactivated successfully. Mar 14 00:18:27.258369 containerd[1515]: time="2026-03-14T00:18:27.258298012Z" level=info msg="shim disconnected" id=06fce2868b3e19688e7ceadf69c4ec423eb2f43c756ce2a734b2ac3510311c09 namespace=k8s.io Mar 14 00:18:27.258369 containerd[1515]: time="2026-03-14T00:18:27.258343861Z" level=warning msg="cleaning up after shim disconnected" id=06fce2868b3e19688e7ceadf69c4ec423eb2f43c756ce2a734b2ac3510311c09 namespace=k8s.io Mar 14 00:18:27.258369 containerd[1515]: time="2026-03-14T00:18:27.258350521Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:27.841333 systemd[1]: run-containerd-runc-k8s.io-06fce2868b3e19688e7ceadf69c4ec423eb2f43c756ce2a734b2ac3510311c09-runc.iqJodP.mount: Deactivated successfully. 
Mar 14 00:18:27.841592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06fce2868b3e19688e7ceadf69c4ec423eb2f43c756ce2a734b2ac3510311c09-rootfs.mount: Deactivated successfully. Mar 14 00:18:27.981771 sshd[4484]: Accepted publickey for core from 68.220.241.50 port 44258 ssh2: RSA SHA256:9TjGOiuuK3bgtHruXDF4BkalQosgWCHmLGp/LJXoy9c Mar 14 00:18:27.983971 sshd[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:18:27.993400 systemd-logind[1485]: New session 23 of user core. Mar 14 00:18:28.002961 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 14 00:18:28.125960 containerd[1515]: time="2026-03-14T00:18:28.125547260Z" level=info msg="CreateContainer within sandbox \"df47730e1444ef86df12d32d6cc119bd18a124f7f909f03c4ae838390f61f192\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 14 00:18:28.161639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1996385043.mount: Deactivated successfully. Mar 14 00:18:28.163210 containerd[1515]: time="2026-03-14T00:18:28.163127765Z" level=info msg="CreateContainer within sandbox \"df47730e1444ef86df12d32d6cc119bd18a124f7f909f03c4ae838390f61f192\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5635dc9df425e3c6251dc1f6b055a45ba70f325f18fb9096e85ce443c23ecb98\"" Mar 14 00:18:28.165780 containerd[1515]: time="2026-03-14T00:18:28.164490533Z" level=info msg="StartContainer for \"5635dc9df425e3c6251dc1f6b055a45ba70f325f18fb9096e85ce443c23ecb98\"" Mar 14 00:18:28.227041 systemd[1]: Started cri-containerd-5635dc9df425e3c6251dc1f6b055a45ba70f325f18fb9096e85ce443c23ecb98.scope - libcontainer container 5635dc9df425e3c6251dc1f6b055a45ba70f325f18fb9096e85ce443c23ecb98. 
Mar 14 00:18:28.292800 containerd[1515]: time="2026-03-14T00:18:28.292766102Z" level=info msg="StartContainer for \"5635dc9df425e3c6251dc1f6b055a45ba70f325f18fb9096e85ce443c23ecb98\" returns successfully" Mar 14 00:18:28.296930 systemd[1]: cri-containerd-5635dc9df425e3c6251dc1f6b055a45ba70f325f18fb9096e85ce443c23ecb98.scope: Deactivated successfully. Mar 14 00:18:28.324841 containerd[1515]: time="2026-03-14T00:18:28.324771137Z" level=info msg="shim disconnected" id=5635dc9df425e3c6251dc1f6b055a45ba70f325f18fb9096e85ce443c23ecb98 namespace=k8s.io Mar 14 00:18:28.324841 containerd[1515]: time="2026-03-14T00:18:28.324833751Z" level=warning msg="cleaning up after shim disconnected" id=5635dc9df425e3c6251dc1f6b055a45ba70f325f18fb9096e85ce443c23ecb98 namespace=k8s.io Mar 14 00:18:28.324841 containerd[1515]: time="2026-03-14T00:18:28.324841853Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:28.841363 systemd[1]: run-containerd-runc-k8s.io-5635dc9df425e3c6251dc1f6b055a45ba70f325f18fb9096e85ce443c23ecb98-runc.4ZKFpD.mount: Deactivated successfully. Mar 14 00:18:28.841556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5635dc9df425e3c6251dc1f6b055a45ba70f325f18fb9096e85ce443c23ecb98-rootfs.mount: Deactivated successfully. 
Mar 14 00:18:29.064314 kubelet[2575]: I0314 00:18:29.063218 2575 setters.go:618] "Node became not ready" node="ci-4081-3-6-n-e97f419eb8" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T00:18:29Z","lastTransitionTime":"2026-03-14T00:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 14 00:18:29.131420 containerd[1515]: time="2026-03-14T00:18:29.130450824Z" level=info msg="CreateContainer within sandbox \"df47730e1444ef86df12d32d6cc119bd18a124f7f909f03c4ae838390f61f192\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 14 00:18:29.159048 containerd[1515]: time="2026-03-14T00:18:29.158960280Z" level=info msg="CreateContainer within sandbox \"df47730e1444ef86df12d32d6cc119bd18a124f7f909f03c4ae838390f61f192\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2f15922f1dbf5de259bd848e8642446fcd446e6be35d09c0231d42fa1a4bc123\"" Mar 14 00:18:29.166681 containerd[1515]: time="2026-03-14T00:18:29.163360101Z" level=info msg="StartContainer for \"2f15922f1dbf5de259bd848e8642446fcd446e6be35d09c0231d42fa1a4bc123\"" Mar 14 00:18:29.199824 systemd[1]: Started cri-containerd-2f15922f1dbf5de259bd848e8642446fcd446e6be35d09c0231d42fa1a4bc123.scope - libcontainer container 2f15922f1dbf5de259bd848e8642446fcd446e6be35d09c0231d42fa1a4bc123. Mar 14 00:18:29.221854 systemd[1]: cri-containerd-2f15922f1dbf5de259bd848e8642446fcd446e6be35d09c0231d42fa1a4bc123.scope: Deactivated successfully. 
Mar 14 00:18:29.224070 containerd[1515]: time="2026-03-14T00:18:29.223911808Z" level=info msg="StartContainer for \"2f15922f1dbf5de259bd848e8642446fcd446e6be35d09c0231d42fa1a4bc123\" returns successfully" Mar 14 00:18:29.245970 containerd[1515]: time="2026-03-14T00:18:29.245905217Z" level=info msg="shim disconnected" id=2f15922f1dbf5de259bd848e8642446fcd446e6be35d09c0231d42fa1a4bc123 namespace=k8s.io Mar 14 00:18:29.245970 containerd[1515]: time="2026-03-14T00:18:29.245950996Z" level=warning msg="cleaning up after shim disconnected" id=2f15922f1dbf5de259bd848e8642446fcd446e6be35d09c0231d42fa1a4bc123 namespace=k8s.io Mar 14 00:18:29.245970 containerd[1515]: time="2026-03-14T00:18:29.245957956Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:29.255887 containerd[1515]: time="2026-03-14T00:18:29.255847559Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:18:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:18:29.840631 systemd[1]: run-containerd-runc-k8s.io-2f15922f1dbf5de259bd848e8642446fcd446e6be35d09c0231d42fa1a4bc123-runc.VD7IEn.mount: Deactivated successfully. Mar 14 00:18:29.840864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f15922f1dbf5de259bd848e8642446fcd446e6be35d09c0231d42fa1a4bc123-rootfs.mount: Deactivated successfully. Mar 14 00:18:30.135685 containerd[1515]: time="2026-03-14T00:18:30.135552791Z" level=info msg="CreateContainer within sandbox \"df47730e1444ef86df12d32d6cc119bd18a124f7f909f03c4ae838390f61f192\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 14 00:18:30.158655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430011594.mount: Deactivated successfully. 
Mar 14 00:18:30.162074 containerd[1515]: time="2026-03-14T00:18:30.162000263Z" level=info msg="CreateContainer within sandbox \"df47730e1444ef86df12d32d6cc119bd18a124f7f909f03c4ae838390f61f192\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c6950972b648cd69c2fbe13238754ff13137aa823c3526aaf115bda6edf32013\"" Mar 14 00:18:30.162753 containerd[1515]: time="2026-03-14T00:18:30.162674641Z" level=info msg="StartContainer for \"c6950972b648cd69c2fbe13238754ff13137aa823c3526aaf115bda6edf32013\"" Mar 14 00:18:30.199986 systemd[1]: Started cri-containerd-c6950972b648cd69c2fbe13238754ff13137aa823c3526aaf115bda6edf32013.scope - libcontainer container c6950972b648cd69c2fbe13238754ff13137aa823c3526aaf115bda6edf32013. Mar 14 00:18:30.234201 containerd[1515]: time="2026-03-14T00:18:30.234162340Z" level=info msg="StartContainer for \"c6950972b648cd69c2fbe13238754ff13137aa823c3526aaf115bda6edf32013\" returns successfully" Mar 14 00:18:30.576778 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 14 00:18:31.154292 kubelet[2575]: I0314 00:18:31.153885 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-67xjb" podStartSLOduration=6.153864987 podStartE2EDuration="6.153864987s" podCreationTimestamp="2026-03-14 00:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:18:31.152328879 +0000 UTC m=+195.582274543" watchObservedRunningTime="2026-03-14 00:18:31.153864987 +0000 UTC m=+195.583810651" Mar 14 00:18:33.390467 systemd-networkd[1419]: lxc_health: Link UP Mar 14 00:18:33.395882 systemd-networkd[1419]: lxc_health: Gained carrier Mar 14 00:18:34.616262 systemd-networkd[1419]: lxc_health: Gained IPv6LL Mar 14 00:18:34.758401 systemd[1]: run-containerd-runc-k8s.io-c6950972b648cd69c2fbe13238754ff13137aa823c3526aaf115bda6edf32013-runc.3u6etZ.mount: Deactivated successfully. 
Mar 14 00:18:39.200620 sshd[4484]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:39.207576 systemd[1]: sshd@22-204.168.141.220:22-68.220.241.50:44258.service: Deactivated successfully. Mar 14 00:18:39.212989 systemd[1]: session-23.scope: Deactivated successfully. Mar 14 00:18:39.216526 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit. Mar 14 00:18:39.221316 systemd-logind[1485]: Removed session 23. Mar 14 00:18:56.188114 kubelet[2575]: E0314 00:18:56.187119 2575 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46870->10.0.0.2:2379: read: connection timed out" Mar 14 00:18:56.200986 systemd[1]: cri-containerd-28015c9224340603cf9895f0326159a85bf9f18cd67cf6574a5a97738d4013bc.scope: Deactivated successfully. Mar 14 00:18:56.202347 systemd[1]: cri-containerd-28015c9224340603cf9895f0326159a85bf9f18cd67cf6574a5a97738d4013bc.scope: Consumed 2.519s CPU time, 16.1M memory peak, 0B memory swap peak. Mar 14 00:18:56.248366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28015c9224340603cf9895f0326159a85bf9f18cd67cf6574a5a97738d4013bc-rootfs.mount: Deactivated successfully. Mar 14 00:18:56.262473 containerd[1515]: time="2026-03-14T00:18:56.262398183Z" level=info msg="shim disconnected" id=28015c9224340603cf9895f0326159a85bf9f18cd67cf6574a5a97738d4013bc namespace=k8s.io Mar 14 00:18:56.262473 containerd[1515]: time="2026-03-14T00:18:56.262467297Z" level=warning msg="cleaning up after shim disconnected" id=28015c9224340603cf9895f0326159a85bf9f18cd67cf6574a5a97738d4013bc namespace=k8s.io Mar 14 00:18:56.262473 containerd[1515]: time="2026-03-14T00:18:56.262481358Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:56.687814 systemd[1]: cri-containerd-a761efe2c76ecd68e7bbc34b2ec63f862752ca27feb3a26a3814e2d26e304456.scope: Deactivated successfully. 
Mar 14 00:18:56.689366 systemd[1]: cri-containerd-a761efe2c76ecd68e7bbc34b2ec63f862752ca27feb3a26a3814e2d26e304456.scope: Consumed 5.304s CPU time, 17.5M memory peak, 0B memory swap peak. Mar 14 00:18:56.715637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a761efe2c76ecd68e7bbc34b2ec63f862752ca27feb3a26a3814e2d26e304456-rootfs.mount: Deactivated successfully. Mar 14 00:18:56.719364 containerd[1515]: time="2026-03-14T00:18:56.719289930Z" level=info msg="shim disconnected" id=a761efe2c76ecd68e7bbc34b2ec63f862752ca27feb3a26a3814e2d26e304456 namespace=k8s.io Mar 14 00:18:56.719364 containerd[1515]: time="2026-03-14T00:18:56.719358284Z" level=warning msg="cleaning up after shim disconnected" id=a761efe2c76ecd68e7bbc34b2ec63f862752ca27feb3a26a3814e2d26e304456 namespace=k8s.io Mar 14 00:18:56.719364 containerd[1515]: time="2026-03-14T00:18:56.719366056Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:57.199751 kubelet[2575]: I0314 00:18:57.199637 2575 scope.go:117] "RemoveContainer" containerID="a761efe2c76ecd68e7bbc34b2ec63f862752ca27feb3a26a3814e2d26e304456" Mar 14 00:18:57.203430 containerd[1515]: time="2026-03-14T00:18:57.203349004Z" level=info msg="CreateContainer within sandbox \"03607c1e925ca496555ebb250299141731ed4767c31f5ebb86df0b0c428b9948\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 14 00:18:57.205995 kubelet[2575]: I0314 00:18:57.205935 2575 scope.go:117] "RemoveContainer" containerID="28015c9224340603cf9895f0326159a85bf9f18cd67cf6574a5a97738d4013bc" Mar 14 00:18:57.210535 containerd[1515]: time="2026-03-14T00:18:57.210211189Z" level=info msg="CreateContainer within sandbox \"81caff926f2dcb73a54e1d9ece7c9937096438576d74fd43b2e922ea7c295179\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 14 00:18:57.232991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556306240.mount: Deactivated successfully. 
Mar 14 00:18:57.238012 containerd[1515]: time="2026-03-14T00:18:57.237796646Z" level=info msg="CreateContainer within sandbox \"81caff926f2dcb73a54e1d9ece7c9937096438576d74fd43b2e922ea7c295179\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"71fc83b599ce8d6fd074cceccfe4cc5adfb9750fdec19fc3d0ce9b23f4ecde4f\"" Mar 14 00:18:57.239197 containerd[1515]: time="2026-03-14T00:18:57.238502480Z" level=info msg="CreateContainer within sandbox \"03607c1e925ca496555ebb250299141731ed4767c31f5ebb86df0b0c428b9948\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"41ccb601268fbd8dd3906d8c5e7aafb7671cf28273406504eeee8c63ceaab152\"" Mar 14 00:18:57.239197 containerd[1515]: time="2026-03-14T00:18:57.238934210Z" level=info msg="StartContainer for \"71fc83b599ce8d6fd074cceccfe4cc5adfb9750fdec19fc3d0ce9b23f4ecde4f\"" Mar 14 00:18:57.241051 containerd[1515]: time="2026-03-14T00:18:57.238936693Z" level=info msg="StartContainer for \"41ccb601268fbd8dd3906d8c5e7aafb7671cf28273406504eeee8c63ceaab152\"" Mar 14 00:18:57.285859 systemd[1]: Started cri-containerd-41ccb601268fbd8dd3906d8c5e7aafb7671cf28273406504eeee8c63ceaab152.scope - libcontainer container 41ccb601268fbd8dd3906d8c5e7aafb7671cf28273406504eeee8c63ceaab152. Mar 14 00:18:57.287172 systemd[1]: Started cri-containerd-71fc83b599ce8d6fd074cceccfe4cc5adfb9750fdec19fc3d0ce9b23f4ecde4f.scope - libcontainer container 71fc83b599ce8d6fd074cceccfe4cc5adfb9750fdec19fc3d0ce9b23f4ecde4f. 
Mar 14 00:18:57.333878 containerd[1515]: time="2026-03-14T00:18:57.333846020Z" level=info msg="StartContainer for \"71fc83b599ce8d6fd074cceccfe4cc5adfb9750fdec19fc3d0ce9b23f4ecde4f\" returns successfully" Mar 14 00:18:57.334341 containerd[1515]: time="2026-03-14T00:18:57.334262747Z" level=info msg="StartContainer for \"41ccb601268fbd8dd3906d8c5e7aafb7671cf28273406504eeee8c63ceaab152\" returns successfully" Mar 14 00:19:00.858041 kubelet[2575]: E0314 00:19:00.857875 2575 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46684->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-e97f419eb8.189c8d21feab507f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-e97f419eb8,UID:4ebcc93cc265a8c29434196759f63f72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-e97f419eb8,},FirstTimestamp:2026-03-14 00:18:50.423210111 +0000 UTC m=+214.853155775,LastTimestamp:2026-03-14 00:18:50.423210111 +0000 UTC m=+214.853155775,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-e97f419eb8,}" Mar 14 00:19:06.189439 kubelet[2575]: E0314 00:19:06.188366 2575 controller.go:195] "Failed to update lease" err="Put \"https://204.168.141.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e97f419eb8?timeout=10s\": context deadline exceeded" Mar 14 00:19:07.642077 kubelet[2575]: I0314 00:19:07.642010 2575 status_manager.go:895] "Failed to get status for pod" podUID="2fe4d689dc8c4ddd6bef0a0fdfce8ef1" 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e97f419eb8" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46802->10.0.0.2:2379: read: connection timed out" Mar 14 00:19:15.646287 containerd[1515]: time="2026-03-14T00:19:15.646126118Z" level=info msg="StopPodSandbox for \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\"" Mar 14 00:19:15.647075 containerd[1515]: time="2026-03-14T00:19:15.646285117Z" level=info msg="TearDown network for sandbox \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\" successfully" Mar 14 00:19:15.647075 containerd[1515]: time="2026-03-14T00:19:15.646303916Z" level=info msg="StopPodSandbox for \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\" returns successfully" Mar 14 00:19:15.648344 containerd[1515]: time="2026-03-14T00:19:15.648285406Z" level=info msg="RemovePodSandbox for \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\"" Mar 14 00:19:15.648344 containerd[1515]: time="2026-03-14T00:19:15.648325866Z" level=info msg="Forcibly stopping sandbox \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\"" Mar 14 00:19:15.648489 containerd[1515]: time="2026-03-14T00:19:15.648421771Z" level=info msg="TearDown network for sandbox \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\" successfully" Mar 14 00:19:15.656884 containerd[1515]: time="2026-03-14T00:19:15.654646484Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:19:15.656884 containerd[1515]: time="2026-03-14T00:19:15.654744100Z" level=info msg="RemovePodSandbox \"3c713a8c2b9a8ffadec7bd8d6e0eed2ee55ef238f01fc8f83c9cd3fa776990e5\" returns successfully" Mar 14 00:19:15.656884 containerd[1515]: time="2026-03-14T00:19:15.655326977Z" level=info msg="StopPodSandbox for \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\"" Mar 14 00:19:15.656884 containerd[1515]: time="2026-03-14T00:19:15.655435451Z" level=info msg="TearDown network for sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" successfully" Mar 14 00:19:15.656884 containerd[1515]: time="2026-03-14T00:19:15.655454499Z" level=info msg="StopPodSandbox for \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" returns successfully" Mar 14 00:19:15.656884 containerd[1515]: time="2026-03-14T00:19:15.656161502Z" level=info msg="RemovePodSandbox for \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\"" Mar 14 00:19:15.656884 containerd[1515]: time="2026-03-14T00:19:15.656318389Z" level=info msg="Forcibly stopping sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\"" Mar 14 00:19:15.656884 containerd[1515]: time="2026-03-14T00:19:15.656585259Z" level=info msg="TearDown network for sandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" successfully" Mar 14 00:19:15.662562 containerd[1515]: time="2026-03-14T00:19:15.662479415Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:19:15.662562 containerd[1515]: time="2026-03-14T00:19:15.662550873Z" level=info msg="RemovePodSandbox \"718bf96558189ffe4dd1befa26361b972d81497730eb997e58e50c9ac35fb5fe\" returns successfully" Mar 14 00:19:16.189446 kubelet[2575]: E0314 00:19:16.189382 2575 controller.go:195] "Failed to update lease" err="Put \"https://204.168.141.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e97f419eb8?timeout=10s\": context deadline exceeded"