Mar 14 00:13:49.885884 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 14 00:13:49.885906 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 13 22:32:52 -00 2026
Mar 14 00:13:49.885916 kernel: KASLR enabled
Mar 14 00:13:49.885922 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 14 00:13:49.885928 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Mar 14 00:13:49.885933 kernel: random: crng init done
Mar 14 00:13:49.885940 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:13:49.885946 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Mar 14 00:13:49.885952 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Mar 14 00:13:49.885960 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:49.885966 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:49.885972 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:49.885977 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:49.885984 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:49.885991 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:49.885999 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:49.886006 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:49.886012 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:49.886018 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 14 00:13:49.886025 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Mar 14 00:13:49.886031 kernel: NUMA: Failed to initialise from firmware
Mar 14 00:13:49.886037 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Mar 14 00:13:49.886044 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Mar 14 00:13:49.886050 kernel: Zone ranges:
Mar 14 00:13:49.886056 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 14 00:13:49.886064 kernel: DMA32 empty
Mar 14 00:13:49.886070 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Mar 14 00:13:49.886076 kernel: Movable zone start for each node
Mar 14 00:13:49.886082 kernel: Early memory node ranges
Mar 14 00:13:49.886089 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Mar 14 00:13:49.886095 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Mar 14 00:13:49.886102 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Mar 14 00:13:49.886108 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Mar 14 00:13:49.886114 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Mar 14 00:13:49.886120 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Mar 14 00:13:49.886126 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Mar 14 00:13:49.886133 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Mar 14 00:13:49.886140 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 14 00:13:49.886147 kernel: psci: probing for conduit method from ACPI.
Mar 14 00:13:49.886153 kernel: psci: PSCIv1.1 detected in firmware.
Mar 14 00:13:49.886162 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 14 00:13:49.886169 kernel: psci: Trusted OS migration not required
Mar 14 00:13:49.886176 kernel: psci: SMC Calling Convention v1.1
Mar 14 00:13:49.886184 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 14 00:13:49.886190 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 14 00:13:49.886197 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 14 00:13:49.886204 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 14 00:13:49.886211 kernel: Detected PIPT I-cache on CPU0
Mar 14 00:13:49.886217 kernel: CPU features: detected: GIC system register CPU interface
Mar 14 00:13:49.886224 kernel: CPU features: detected: Hardware dirty bit management
Mar 14 00:13:49.886230 kernel: CPU features: detected: Spectre-v4
Mar 14 00:13:49.886237 kernel: CPU features: detected: Spectre-BHB
Mar 14 00:13:49.886257 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 14 00:13:49.886265 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 14 00:13:49.886272 kernel: CPU features: detected: ARM erratum 1418040
Mar 14 00:13:49.886279 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 14 00:13:49.886286 kernel: alternatives: applying boot alternatives
Mar 14 00:13:49.886294 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:13:49.886301 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:13:49.886308 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:13:49.886314 kernel: Fallback order for Node 0: 0
Mar 14 00:13:49.886321 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Mar 14 00:13:49.886328 kernel: Policy zone: Normal
Mar 14 00:13:49.886334 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:13:49.886342 kernel: software IO TLB: area num 2.
Mar 14 00:13:49.886349 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Mar 14 00:13:49.886357 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Mar 14 00:13:49.886363 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:13:49.886370 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:13:49.886378 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:13:49.886385 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:13:49.886392 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:13:49.886398 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:13:49.886405 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:13:49.886412 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:13:49.886419 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 14 00:13:49.886427 kernel: GICv3: 256 SPIs implemented
Mar 14 00:13:49.886434 kernel: GICv3: 0 Extended SPIs implemented
Mar 14 00:13:49.886440 kernel: Root IRQ handler: gic_handle_irq
Mar 14 00:13:49.886447 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 14 00:13:49.886454 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 14 00:13:49.886461 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 14 00:13:49.886468 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 14 00:13:49.886474 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Mar 14 00:13:49.886481 kernel: GICv3: using LPI property table @0x00000001000e0000
Mar 14 00:13:49.886499 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Mar 14 00:13:49.886506 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:13:49.886515 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 14 00:13:49.886521 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 14 00:13:49.886529 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 14 00:13:49.886536 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 14 00:13:49.886543 kernel: Console: colour dummy device 80x25
Mar 14 00:13:49.886550 kernel: ACPI: Core revision 20230628
Mar 14 00:13:49.886557 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 14 00:13:49.886564 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:13:49.886571 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:13:49.886578 kernel: landlock: Up and running.
Mar 14 00:13:49.886586 kernel: SELinux: Initializing.
Mar 14 00:13:49.886593 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:49.886600 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:49.886607 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:13:49.886614 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:13:49.886621 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:13:49.886628 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:13:49.886634 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 14 00:13:49.886641 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 14 00:13:49.886650 kernel: Remapping and enabling EFI services.
Mar 14 00:13:49.886657 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:13:49.886664 kernel: Detected PIPT I-cache on CPU1
Mar 14 00:13:49.886671 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 14 00:13:49.886677 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Mar 14 00:13:49.886684 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 14 00:13:49.886691 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 14 00:13:49.886698 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:13:49.886705 kernel: SMP: Total of 2 processors activated.
Mar 14 00:13:49.886712 kernel: CPU features: detected: 32-bit EL0 Support
Mar 14 00:13:49.886720 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 14 00:13:49.886727 kernel: CPU features: detected: Common not Private translations
Mar 14 00:13:49.886738 kernel: CPU features: detected: CRC32 instructions
Mar 14 00:13:49.886747 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 14 00:13:49.886754 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 14 00:13:49.886762 kernel: CPU features: detected: LSE atomic instructions
Mar 14 00:13:49.886769 kernel: CPU features: detected: Privileged Access Never
Mar 14 00:13:49.886776 kernel: CPU features: detected: RAS Extension Support
Mar 14 00:13:49.886785 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 14 00:13:49.886792 kernel: CPU: All CPU(s) started at EL1
Mar 14 00:13:49.886800 kernel: alternatives: applying system-wide alternatives
Mar 14 00:13:49.886807 kernel: devtmpfs: initialized
Mar 14 00:13:49.886814 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:13:49.886822 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:13:49.886829 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:13:49.886836 kernel: SMBIOS 3.0.0 present.
Mar 14 00:13:49.886845 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Mar 14 00:13:49.886852 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:13:49.886860 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 14 00:13:49.886867 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 14 00:13:49.886874 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 14 00:13:49.886882 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:13:49.886889 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Mar 14 00:13:49.886896 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:13:49.886904 kernel: cpuidle: using governor menu
Mar 14 00:13:49.886912 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 14 00:13:49.886920 kernel: ASID allocator initialised with 32768 entries
Mar 14 00:13:49.886927 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:13:49.886934 kernel: Serial: AMBA PL011 UART driver
Mar 14 00:13:49.886941 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 14 00:13:49.886949 kernel: Modules: 0 pages in range for non-PLT usage
Mar 14 00:13:49.886956 kernel: Modules: 509008 pages in range for PLT usage
Mar 14 00:13:49.886963 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:13:49.886971 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:13:49.886979 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 14 00:13:49.886987 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 14 00:13:49.886994 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:13:49.887001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:13:49.887009 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 14 00:13:49.887016 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 14 00:13:49.887023 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:13:49.887047 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:13:49.887055 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:13:49.887064 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:13:49.887085 kernel: ACPI: Interpreter enabled
Mar 14 00:13:49.887098 kernel: ACPI: Using GIC for interrupt routing
Mar 14 00:13:49.887105 kernel: ACPI: MCFG table detected, 1 entries
Mar 14 00:13:49.887112 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 14 00:13:49.887120 kernel: printk: console [ttyAMA0] enabled
Mar 14 00:13:49.887127 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:13:49.887274 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:13:49.887353 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 14 00:13:49.887419 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 14 00:13:49.887497 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 14 00:13:49.887577 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 14 00:13:49.887587 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 14 00:13:49.887595 kernel: PCI host bridge to bus 0000:00
Mar 14 00:13:49.887666 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 14 00:13:49.887730 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 14 00:13:49.887793 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 14 00:13:49.887852 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:13:49.887934 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 14 00:13:49.888011 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Mar 14 00:13:49.888080 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Mar 14 00:13:49.888147 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 14 00:13:49.888224 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:49.888332 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Mar 14 00:13:49.888415 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:49.888483 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Mar 14 00:13:49.888603 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:49.888670 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Mar 14 00:13:49.888745 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:49.888810 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Mar 14 00:13:49.888886 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:49.888953 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Mar 14 00:13:49.889025 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:49.889090 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Mar 14 00:13:49.889163 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:49.889228 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Mar 14 00:13:49.889312 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:49.889387 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Mar 14 00:13:49.889460 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:49.889556 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Mar 14 00:13:49.889645 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Mar 14 00:13:49.889712 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Mar 14 00:13:49.889787 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:13:49.889855 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Mar 14 00:13:49.889923 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 14 00:13:49.889991 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:13:49.890065 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 14 00:13:49.890136 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Mar 14 00:13:49.890211 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 14 00:13:49.890294 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Mar 14 00:13:49.890363 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Mar 14 00:13:49.890436 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 14 00:13:49.890552 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Mar 14 00:13:49.890635 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 14 00:13:49.890702 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Mar 14 00:13:49.890767 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Mar 14 00:13:49.890839 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 14 00:13:49.890905 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Mar 14 00:13:49.890972 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 14 00:13:49.891048 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:13:49.891115 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Mar 14 00:13:49.891181 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Mar 14 00:13:49.891285 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:13:49.891380 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Mar 14 00:13:49.891449 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Mar 14 00:13:49.891533 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Mar 14 00:13:49.891605 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Mar 14 00:13:49.891671 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Mar 14 00:13:49.891736 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Mar 14 00:13:49.891808 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 14 00:13:49.891876 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Mar 14 00:13:49.891942 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Mar 14 00:13:49.892009 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 14 00:13:49.892075 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Mar 14 00:13:49.892145 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Mar 14 00:13:49.892212 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 14 00:13:49.892288 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Mar 14 00:13:49.892355 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Mar 14 00:13:49.892423 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 14 00:13:49.892526 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Mar 14 00:13:49.892598 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Mar 14 00:13:49.892669 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 14 00:13:49.892732 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Mar 14 00:13:49.892796 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Mar 14 00:13:49.892861 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 14 00:13:49.892925 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Mar 14 00:13:49.892989 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Mar 14 00:13:49.893054 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 14 00:13:49.893118 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Mar 14 00:13:49.893185 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Mar 14 00:13:49.893279 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Mar 14 00:13:49.893350 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 14 00:13:49.893415 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Mar 14 00:13:49.893483 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 14 00:13:49.893572 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Mar 14 00:13:49.893647 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 14 00:13:49.893713 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Mar 14 00:13:49.893779 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 14 00:13:49.893845 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Mar 14 00:13:49.893911 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 14 00:13:49.893977 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Mar 14 00:13:49.894042 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 14 00:13:49.894110 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Mar 14 00:13:49.894176 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 14 00:13:49.894249 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Mar 14 00:13:49.894319 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 14 00:13:49.894386 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Mar 14 00:13:49.894452 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 14 00:13:49.894585 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Mar 14 00:13:49.894661 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Mar 14 00:13:49.894726 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Mar 14 00:13:49.894791 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 14 00:13:49.894856 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Mar 14 00:13:49.894923 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 14 00:13:49.894987 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Mar 14 00:13:49.895053 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 14 00:13:49.895143 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Mar 14 00:13:49.895215 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 14 00:13:49.895300 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Mar 14 00:13:49.895366 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 14 00:13:49.895447 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Mar 14 00:13:49.895592 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 14 00:13:49.895661 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Mar 14 00:13:49.895725 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 14 00:13:49.895790 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Mar 14 00:13:49.895860 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 14 00:13:49.897596 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Mar 14 00:13:49.897698 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Mar 14 00:13:49.897771 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Mar 14 00:13:49.897851 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Mar 14 00:13:49.897920 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 14 00:13:49.897988 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Mar 14 00:13:49.898054 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 14 00:13:49.898127 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 14 00:13:49.898192 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Mar 14 00:13:49.898272 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 14 00:13:49.898347 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Mar 14 00:13:49.898419 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 14 00:13:49.898606 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 14 00:13:49.898690 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Mar 14 00:13:49.898755 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 14 00:13:49.898828 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 14 00:13:49.898896 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Mar 14 00:13:49.898961 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 14 00:13:49.899025 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 14 00:13:49.899094 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Mar 14 00:13:49.899159 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 14 00:13:49.899230 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 14 00:13:49.899350 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 14 00:13:49.899419 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 14 00:13:49.899485 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Mar 14 00:13:49.899567 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 14 00:13:49.899650 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Mar 14 00:13:49.900676 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Mar 14 00:13:49.900780 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 14 00:13:49.900855 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 14 00:13:49.900921 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Mar 14 00:13:49.900985 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 14 00:13:49.901059 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Mar 14 00:13:49.901126 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Mar 14 00:13:49.901191 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 14 00:13:49.901276 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 14 00:13:49.901342 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Mar 14 00:13:49.901405 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 14 00:13:49.901475 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Mar 14 00:13:49.901556 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Mar 14 00:13:49.901624 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Mar 14 00:13:49.901690 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 14 00:13:49.901754 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 14 00:13:49.901824 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Mar 14 00:13:49.901903 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 14 00:13:49.904203 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 14 00:13:49.904307 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 14 00:13:49.904377 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Mar 14 00:13:49.904441 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 14 00:13:49.904565 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 14 00:13:49.904636 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Mar 14 00:13:49.904713 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Mar 14 00:13:49.904778 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 14 00:13:49.904845 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 14 00:13:49.904903 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 14 00:13:49.904959 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 14 00:13:49.905049 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 14 00:13:49.905402 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Mar 14 00:13:49.905485 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 14 00:13:49.905572 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Mar 14 00:13:49.905649 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Mar 14 00:13:49.905722 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 14 00:13:49.905839 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Mar 14 00:13:49.905924 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Mar 14 00:13:49.906007 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 14 00:13:49.906090 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Mar 14 00:13:49.906166 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Mar 14 00:13:49.906271 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 14 00:13:49.906357 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Mar 14 00:13:49.906433 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Mar 14 00:13:49.906536 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 14 00:13:49.906626 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Mar 14 00:13:49.906703 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Mar 14 00:13:49.906791 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 14 00:13:49.906875 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Mar 14 00:13:49.906961 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Mar 14 00:13:49.907036 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 14 00:13:49.907120 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Mar 14 00:13:49.907195 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Mar 14 00:13:49.907316 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 14 00:13:49.907410 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Mar 14 00:13:49.907871 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Mar 14 00:13:49.907946 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 14 00:13:49.907957 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 14 00:13:49.907965 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 14 00:13:49.907973 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 14 00:13:49.907981 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 14 00:13:49.907989 kernel: iommu: Default domain type: Translated
Mar 14 00:13:49.907997 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 14 00:13:49.908004 kernel: efivars: Registered efivars operations
Mar 14 00:13:49.908012 kernel: vgaarb: loaded
Mar 14 00:13:49.908022 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 14 00:13:49.908031 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:13:49.908038 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:13:49.908046 kernel: pnp: PnP ACPI init
Mar 14 00:13:49.908126 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 14 00:13:49.908138 kernel: pnp: PnP ACPI: found 1 devices
Mar 14 00:13:49.908146 kernel: NET: Registered PF_INET protocol family
Mar 14 00:13:49.908154 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:13:49.908165 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:13:49.908173 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:13:49.908181 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:13:49.908189
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 14 00:13:49.908196 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 14 00:13:49.908204 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:13:49.908212 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:13:49.908220 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 14 00:13:49.908306 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Mar 14 00:13:49.908322 kernel: PCI: CLS 0 bytes, default 64 Mar 14 00:13:49.908330 kernel: kvm [1]: HYP mode not available Mar 14 00:13:49.908337 kernel: Initialise system trusted keyrings Mar 14 00:13:49.908346 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 14 00:13:49.908354 kernel: Key type asymmetric registered Mar 14 00:13:49.908362 kernel: Asymmetric key parser 'x509' registered Mar 14 00:13:49.908369 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 14 00:13:49.908377 kernel: io scheduler mq-deadline registered Mar 14 00:13:49.908385 kernel: io scheduler kyber registered Mar 14 00:13:49.908395 kernel: io scheduler bfq registered Mar 14 00:13:49.908403 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Mar 14 00:13:49.908470 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Mar 14 00:13:49.908548 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Mar 14 00:13:49.908613 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:49.908678 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Mar 14 00:13:49.908743 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Mar 14 00:13:49.908811 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:49.908877 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Mar 14 00:13:49.908942 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Mar 14 00:13:49.909007 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:49.909081 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 14 00:13:49.909147 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 14 00:13:49.909215 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:49.909293 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 14 00:13:49.909362 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 14 00:13:49.909427 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:49.909516 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 14 00:13:49.909591 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 14 00:13:49.909663 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:49.909730 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 14 00:13:49.909795 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 14 00:13:49.909860 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:49.909927 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 14 00:13:49.909994 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 14 00:13:49.910060 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:49.910071 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Mar 14 00:13:49.910136 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Mar 14 00:13:49.910202 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Mar 14 00:13:49.910304 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 14 00:13:49.910321 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 14 00:13:49.910331 kernel: ACPI: button: Power Button [PWRB]
Mar 14 00:13:49.910339 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 14 00:13:49.910416 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Mar 14 00:13:49.910522 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Mar 14 00:13:49.910536 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:13:49.910544 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 14 00:13:49.910617 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Mar 14 00:13:49.910628 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Mar 14 00:13:49.910636 kernel: thunder_xcv, ver 1.0
Mar 14 00:13:49.910647 kernel: thunder_bgx, ver 1.0
Mar 14 00:13:49.910655 kernel: nicpf, ver 1.0
Mar 14 00:13:49.910662 kernel: nicvf, ver 1.0
Mar 14 00:13:49.910739 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 14 00:13:49.910802 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-14T00:13:49 UTC (1773447229)
Mar 14 00:13:49.910812 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 14 00:13:49.910821 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 14 00:13:49.910828 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 14 00:13:49.910839 kernel: watchdog: Hard watchdog permanently disabled
Mar 14 00:13:49.910847 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:13:49.910854 kernel: Segment Routing with IPv6
Mar 14 00:13:49.910862 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:13:49.910870 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:13:49.910877 kernel: Key type dns_resolver registered
Mar 14 00:13:49.910885 kernel: registered taskstats version 1
Mar 14 00:13:49.910893 kernel: Loading compiled-in X.509 certificates
Mar 14 00:13:49.910901 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 16e13a4d63c54048487d2b18c824fa4694264505'
Mar 14 00:13:49.910910 kernel: Key type .fscrypt registered
Mar 14 00:13:49.910918 kernel: Key type fscrypt-provisioning registered
Mar 14 00:13:49.910926 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:13:49.910934 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:13:49.910942 kernel: ima: No architecture policies found
Mar 14 00:13:49.910949 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 14 00:13:49.910957 kernel: clk: Disabling unused clocks
Mar 14 00:13:49.910965 kernel: Freeing unused kernel memory: 39424K
Mar 14 00:13:49.910972 kernel: Run /init as init process
Mar 14 00:13:49.910982 kernel: with arguments:
Mar 14 00:13:49.910990 kernel: /init
Mar 14 00:13:49.911009 kernel: with environment:
Mar 14 00:13:49.911017 kernel: HOME=/
Mar 14 00:13:49.911026 kernel: TERM=linux
Mar 14 00:13:49.911035 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:13:49.911045 systemd[1]: Detected virtualization kvm.
Mar 14 00:13:49.911053 systemd[1]: Detected architecture arm64.
Mar 14 00:13:49.911063 systemd[1]: Running in initrd.
Mar 14 00:13:49.911072 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:13:49.911080 systemd[1]: Hostname set to .
Mar 14 00:13:49.911088 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:13:49.911096 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:13:49.911104 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:49.911112 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:49.911121 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:13:49.911131 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:13:49.911140 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:13:49.911148 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:13:49.911162 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:13:49.911171 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:13:49.911179 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:49.911188 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:49.911199 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:13:49.911207 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:13:49.911216 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:13:49.911224 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:13:49.911232 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:13:49.911248 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:13:49.911257 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:13:49.911265 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:13:49.911275 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:13:49.911284 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:13:49.911292 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:13:49.911301 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:13:49.911309 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:13:49.911317 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:13:49.911326 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:13:49.911334 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:13:49.911342 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:13:49.911352 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:13:49.911360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:49.911390 systemd-journald[237]: Collecting audit messages is disabled.
Mar 14 00:13:49.911410 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:13:49.911421 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:49.911429 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:13:49.911438 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:13:49.911447 systemd-journald[237]: Journal started
Mar 14 00:13:49.911468 systemd-journald[237]: Runtime Journal (/run/log/journal/1526eafa528c45d68d0aecd8ca03e2b7) is 8.0M, max 76.6M, 68.6M free.
Mar 14 00:13:49.900778 systemd-modules-load[238]: Inserted module 'overlay'
Mar 14 00:13:49.914606 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:13:49.922507 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:13:49.923659 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:13:49.925514 kernel: Bridge firewalling registered
Mar 14 00:13:49.925090 systemd-modules-load[238]: Inserted module 'br_netfilter'
Mar 14 00:13:49.926700 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:13:49.928705 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:49.931583 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:13:49.932539 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:13:49.938647 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:49.941857 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:13:49.943644 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:13:49.954581 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:13:49.956834 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:13:49.965665 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:13:49.974111 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:49.987927 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:13:50.003508 systemd-resolved[270]: Positive Trust Anchors:
Mar 14 00:13:50.003533 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:13:50.003603 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:13:50.010004 systemd-resolved[270]: Defaulting to hostname 'linux'.
Mar 14 00:13:50.011082 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:13:50.011763 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:50.014728 dracut-cmdline[274]: dracut-dracut-053
Mar 14 00:13:50.017200 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:13:50.096522 kernel: SCSI subsystem initialized
Mar 14 00:13:50.100513 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:13:50.108517 kernel: iscsi: registered transport (tcp)
Mar 14 00:13:50.121787 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:13:50.121834 kernel: QLogic iSCSI HBA Driver
Mar 14 00:13:50.173179 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:13:50.179813 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:13:50.200943 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:13:50.201076 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:13:50.201106 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:13:50.265503 kernel: raid6: neonx8 gen() 15710 MB/s
Mar 14 00:13:50.266561 kernel: raid6: neonx4 gen() 15571 MB/s
Mar 14 00:13:50.283537 kernel: raid6: neonx2 gen() 13190 MB/s
Mar 14 00:13:50.300535 kernel: raid6: neonx1 gen() 10438 MB/s
Mar 14 00:13:50.317535 kernel: raid6: int64x8 gen() 6938 MB/s
Mar 14 00:13:50.334542 kernel: raid6: int64x4 gen() 7312 MB/s
Mar 14 00:13:50.351557 kernel: raid6: int64x2 gen() 6112 MB/s
Mar 14 00:13:50.368535 kernel: raid6: int64x1 gen() 5037 MB/s
Mar 14 00:13:50.368588 kernel: raid6: using algorithm neonx8 gen() 15710 MB/s
Mar 14 00:13:50.385545 kernel: raid6: .... xor() 11947 MB/s, rmw enabled
Mar 14 00:13:50.385587 kernel: raid6: using neon recovery algorithm
Mar 14 00:13:50.390755 kernel: xor: measuring software checksum speed
Mar 14 00:13:50.390823 kernel: 8regs : 19778 MB/sec
Mar 14 00:13:50.390846 kernel: 32regs : 19636 MB/sec
Mar 14 00:13:50.390866 kernel: arm64_neon : 26981 MB/sec
Mar 14 00:13:50.391525 kernel: xor: using function: arm64_neon (26981 MB/sec)
Mar 14 00:13:50.441591 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:13:50.456009 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:13:50.463754 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:13:50.483306 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Mar 14 00:13:50.489148 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:13:50.497807 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:13:50.513033 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Mar 14 00:13:50.545330 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:13:50.559220 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:13:50.610060 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:50.616657 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:13:50.637098 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:13:50.639027 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:13:50.640629 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:50.641223 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:13:50.648692 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:13:50.664458 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:13:50.718526 kernel: scsi host0: Virtio SCSI HBA
Mar 14 00:13:50.721550 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 14 00:13:50.721616 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 14 00:13:50.730518 kernel: ACPI: bus type USB registered
Mar 14 00:13:50.730562 kernel: usbcore: registered new interface driver usbfs
Mar 14 00:13:50.731693 kernel: usbcore: registered new interface driver hub
Mar 14 00:13:50.731718 kernel: usbcore: registered new device driver usb
Mar 14 00:13:50.733040 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:13:50.734836 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:50.738336 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:50.739134 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:13:50.739278 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:50.740697 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:50.748847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:50.764432 kernel: sr 0:0:0:0: Power-on or device reset occurred
Mar 14 00:13:50.766533 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Mar 14 00:13:50.766709 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 14 00:13:50.766807 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:50.769516 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Mar 14 00:13:50.777647 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:50.781043 kernel: sd 0:0:0:1: Power-on or device reset occurred
Mar 14 00:13:50.781200 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Mar 14 00:13:50.781331 kernel: sd 0:0:0:1: [sda] Write Protect is off
Mar 14 00:13:50.781417 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Mar 14 00:13:50.783551 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 14 00:13:50.783691 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 14 00:13:50.786323 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:13:50.786365 kernel: GPT:17805311 != 80003071
Mar 14 00:13:50.786376 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Mar 14 00:13:50.786536 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:13:50.787126 kernel: GPT:17805311 != 80003071
Mar 14 00:13:50.787157 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:13:50.787675 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:13:50.788767 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Mar 14 00:13:50.788925 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Mar 14 00:13:50.790667 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 14 00:13:50.794138 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Mar 14 00:13:50.794315 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Mar 14 00:13:50.797765 kernel: hub 1-0:1.0: USB hub found
Mar 14 00:13:50.797940 kernel: hub 1-0:1.0: 4 ports detected
Mar 14 00:13:50.800727 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:50.803587 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Mar 14 00:13:50.803789 kernel: hub 2-0:1.0: USB hub found
Mar 14 00:13:50.803885 kernel: hub 2-0:1.0: 4 ports detected
Mar 14 00:13:50.832518 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (519)
Mar 14 00:13:50.834520 kernel: BTRFS: device fsid df62721e-ebc0-40bc-8956-1227b067a773 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (498)
Mar 14 00:13:50.842209 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 14 00:13:50.849854 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 14 00:13:50.854729 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 14 00:13:50.864386 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 14 00:13:50.865211 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 14 00:13:50.872872 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:13:50.884925 disk-uuid[572]: Primary Header is updated.
Mar 14 00:13:50.884925 disk-uuid[572]: Secondary Entries is updated.
Mar 14 00:13:50.884925 disk-uuid[572]: Secondary Header is updated.
Mar 14 00:13:50.903518 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:13:51.043550 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Mar 14 00:13:51.179576 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Mar 14 00:13:51.179869 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Mar 14 00:13:51.180449 kernel: usbcore: registered new interface driver usbhid
Mar 14 00:13:51.180471 kernel: usbhid: USB HID core driver
Mar 14 00:13:51.285648 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Mar 14 00:13:51.415559 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Mar 14 00:13:51.470428 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Mar 14 00:13:51.904542 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:13:51.904934 disk-uuid[573]: The operation has completed successfully.
Mar 14 00:13:51.964621 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:13:51.964731 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:13:51.983776 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:13:51.989709 sh[584]: Success
Mar 14 00:13:52.003515 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 14 00:13:52.046985 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:13:52.055696 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:13:52.059550 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:13:52.077574 kernel: BTRFS info (device dm-0): first mount of filesystem df62721e-ebc0-40bc-8956-1227b067a773
Mar 14 00:13:52.077653 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:52.077678 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:13:52.078950 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:13:52.079927 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:13:52.086524 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:13:52.088390 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:13:52.089175 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:13:52.098874 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:13:52.102174 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:13:52.117512 kernel: BTRFS info (device sda6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:52.117562 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:52.117574 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:13:52.122620 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:13:52.122663 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:13:52.133321 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:13:52.134564 kernel: BTRFS info (device sda6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:52.143604 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:13:52.152851 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:13:52.224676 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:13:52.232714 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:13:52.245053 ignition[690]: Ignition 2.19.0
Mar 14 00:13:52.245754 ignition[690]: Stage: fetch-offline
Mar 14 00:13:52.246192 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:52.246694 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:52.246858 ignition[690]: parsed url from cmdline: ""
Mar 14 00:13:52.246861 ignition[690]: no config URL provided
Mar 14 00:13:52.246866 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:13:52.246873 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:13:52.249729 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:13:52.246879 ignition[690]: failed to fetch config: resource requires networking
Mar 14 00:13:52.247331 ignition[690]: Ignition finished successfully
Mar 14 00:13:52.257209 systemd-networkd[770]: lo: Link UP
Mar 14 00:13:52.257221 systemd-networkd[770]: lo: Gained carrier
Mar 14 00:13:52.258838 systemd-networkd[770]: Enumeration completed
Mar 14 00:13:52.258924 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:13:52.260028 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:52.260031 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:13:52.260731 systemd[1]: Reached target network.target - Network.
Mar 14 00:13:52.261818 systemd-networkd[770]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:52.261821 systemd-networkd[770]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:13:52.262340 systemd-networkd[770]: eth0: Link UP
Mar 14 00:13:52.262343 systemd-networkd[770]: eth0: Gained carrier
Mar 14 00:13:52.262350 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:52.265808 systemd-networkd[770]: eth1: Link UP
Mar 14 00:13:52.265811 systemd-networkd[770]: eth1: Gained carrier
Mar 14 00:13:52.265819 systemd-networkd[770]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:52.271393 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:13:52.283323 ignition[773]: Ignition 2.19.0
Mar 14 00:13:52.283332 ignition[773]: Stage: fetch
Mar 14 00:13:52.283527 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:52.283537 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:52.283621 ignition[773]: parsed url from cmdline: ""
Mar 14 00:13:52.283624 ignition[773]: no config URL provided
Mar 14 00:13:52.283629 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:13:52.283635 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:13:52.283653 ignition[773]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 14 00:13:52.284331 ignition[773]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 14 00:13:52.304600 systemd-networkd[770]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 14 00:13:52.333626 systemd-networkd[770]: eth0: DHCPv4 address 168.119.153.241/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 14 00:13:52.484591 ignition[773]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 14 00:13:52.490307 ignition[773]: GET result: OK
Mar 14 00:13:52.490471 ignition[773]: parsing config with SHA512: 384b648fd059643810569ba36cf8e02f97e641553fb5845523fdd879ba826a46aff6bbd524f385d0e2d7fff036218f3a848da32cfcf87351d9d3e3cb0b6c5848
Mar 14 00:13:52.496997 unknown[773]: fetched base config from "system"
Mar 14 00:13:52.497008 unknown[773]: fetched base config from "system"
Mar 14 00:13:52.498042 ignition[773]: fetch: fetch complete
Mar 14 00:13:52.497013 unknown[773]: fetched user config from "hetzner"
Mar 14 00:13:52.498047 ignition[773]: fetch: fetch passed
Mar 14 00:13:52.498119 ignition[773]: Ignition finished successfully
Mar 14 00:13:52.500207 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:13:52.504681 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:13:52.519825 ignition[781]: Ignition 2.19.0
Mar 14 00:13:52.519836 ignition[781]: Stage: kargs
Mar 14 00:13:52.520024 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:52.520033 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:52.521124 ignition[781]: kargs: kargs passed
Mar 14 00:13:52.521176 ignition[781]: Ignition finished successfully
Mar 14 00:13:52.524984 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:13:52.533781 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:13:52.546953 ignition[787]: Ignition 2.19.0
Mar 14 00:13:52.546977 ignition[787]: Stage: disks
Mar 14 00:13:52.547363 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:52.547387 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:52.549763 ignition[787]: disks: disks passed
Mar 14 00:13:52.549829 ignition[787]: Ignition finished successfully
Mar 14 00:13:52.551247 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:13:52.553091 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:13:52.554087 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:13:52.556200 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:13:52.557616 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:13:52.558575 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:13:52.564834 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:13:52.583310 systemd-fsck[796]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 14 00:13:52.587588 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:13:52.594718 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:13:52.644528 kernel: EXT4-fs (sda9): mounted filesystem af566013-4e57-4e7f-9689-a2e15898536d r/w with ordered data mode. Quota mode: none.
Mar 14 00:13:52.645054 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:13:52.646292 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:13:52.657661 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:13:52.661212 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:13:52.663962 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 14 00:13:52.667954 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:13:52.668010 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:13:52.677877 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (804)
Mar 14 00:13:52.677899 kernel: BTRFS info (device sda6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:52.677910 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:52.677920 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:13:52.677939 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:13:52.681009 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:13:52.683680 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:13:52.683704 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:13:52.688709 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:13:52.734393 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:13:52.739892 coreos-metadata[806]: Mar 14 00:13:52.739 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 14 00:13:52.743559 coreos-metadata[806]: Mar 14 00:13:52.743 INFO Fetch successful
Mar 14 00:13:52.743559 coreos-metadata[806]: Mar 14 00:13:52.743 INFO wrote hostname ci-4081-3-6-n-c13e9e2860 to /sysroot/etc/hostname
Mar 14 00:13:52.746474 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:13:52.748193 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 14 00:13:52.753550 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:13:52.758249 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:13:52.855906 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:13:52.865652 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:13:52.868763 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:13:52.876552 kernel: BTRFS info (device sda6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:52.901461 ignition[923]: INFO : Ignition 2.19.0
Mar 14 00:13:52.903349 ignition[923]: INFO : Stage: mount
Mar 14 00:13:52.903349 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:52.903349 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:52.903349 ignition[923]: INFO : mount: mount passed
Mar 14 00:13:52.903349 ignition[923]: INFO : Ignition finished successfully
Mar 14 00:13:52.902446 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:13:52.905751 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:13:52.911640 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:13:53.078887 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:13:53.086797 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:13:53.096177 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (934)
Mar 14 00:13:53.096251 kernel: BTRFS info (device sda6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:53.096271 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:53.096698 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:13:53.099512 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:13:53.099549 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:13:53.102294 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:13:53.126829 ignition[951]: INFO : Ignition 2.19.0
Mar 14 00:13:53.127872 ignition[951]: INFO : Stage: files
Mar 14 00:13:53.128556 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:53.129561 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:53.130975 ignition[951]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:13:53.132935 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:13:53.132935 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:13:53.138124 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:13:53.139213 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:13:53.140575 unknown[951]: wrote ssh authorized keys file for user: core
Mar 14 00:13:53.141471 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:13:53.146058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:13:53.146058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 14 00:13:53.190784 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:13:53.283437 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:13:53.283437 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:13:53.283437 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 14 00:13:53.352134 systemd-networkd[770]: eth1: Gained IPv6LL
Mar 14 00:13:53.518269 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 14 00:13:53.603274 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:13:53.603274 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:13:53.607058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:13:53.607058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:13:53.607058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:13:53.607058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:13:53.607058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:13:53.607058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:13:53.607058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:13:53.607058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:13:53.607058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:13:53.607058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:13:53.607058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:13:53.607058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:13:53.607058 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 14 00:13:53.850270 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 14 00:13:54.083280 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:13:54.083280 ignition[951]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 14 00:13:54.089517 ignition[951]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:13:54.089517 ignition[951]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:13:54.089517 ignition[951]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 14 00:13:54.089517 ignition[951]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 14 00:13:54.089517 ignition[951]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 14 00:13:54.089517 ignition[951]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 14 00:13:54.089517 ignition[951]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 14 00:13:54.089517 ignition[951]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:13:54.089517 ignition[951]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:13:54.089517 ignition[951]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:13:54.089517 ignition[951]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:13:54.089517 ignition[951]: INFO : files: files passed
Mar 14 00:13:54.089517 ignition[951]: INFO : Ignition finished successfully
Mar 14 00:13:54.092827 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:13:54.104036 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:13:54.105881 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:13:54.108971 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:13:54.109698 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:13:54.128502 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:54.128502 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:54.131806 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:54.134119 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:13:54.135053 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:13:54.153830 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:13:54.183677 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:13:54.185374 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:13:54.189034 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:13:54.189880 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:13:54.191247 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:13:54.196740 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:13:54.212295 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:13:54.217791 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:13:54.233205 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:54.234811 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:54.235624 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:13:54.236680 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:13:54.236855 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:13:54.238110 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:13:54.239290 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:13:54.240161 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:13:54.241177 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:13:54.242462 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:13:54.243593 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:13:54.244674 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:13:54.245751 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:13:54.246880 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:13:54.247953 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:13:54.248802 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:13:54.248974 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:13:54.250163 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:54.251319 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:54.252351 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:13:54.252847 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:54.253681 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:13:54.253837 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:13:54.255412 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:13:54.255590 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:13:54.256940 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:13:54.257081 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:13:54.257956 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 14 00:13:54.258102 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 14 00:13:54.263775 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:13:54.264399 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:13:54.265680 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:54.270760 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:13:54.271272 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:13:54.271435 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:54.274699 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:13:54.274858 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:13:54.283001 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:13:54.283106 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:13:54.287875 ignition[1004]: INFO : Ignition 2.19.0
Mar 14 00:13:54.287875 ignition[1004]: INFO : Stage: umount
Mar 14 00:13:54.287875 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:54.287875 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:54.292162 ignition[1004]: INFO : umount: umount passed
Mar 14 00:13:54.292162 ignition[1004]: INFO : Ignition finished successfully
Mar 14 00:13:54.295862 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:13:54.299287 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:13:54.300957 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:13:54.301428 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:13:54.301467 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:13:54.303728 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:13:54.303773 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:13:54.309350 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:13:54.309403 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:13:54.310947 systemd[1]: Stopped target network.target - Network.
Mar 14 00:13:54.313475 systemd-networkd[770]: eth0: Gained IPv6LL
Mar 14 00:13:54.314330 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:13:54.314419 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:13:54.316260 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:13:54.317082 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:13:54.320894 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:54.326577 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:13:54.327344 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:13:54.328976 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:13:54.329044 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:13:54.330670 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:13:54.330711 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:13:54.332266 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:13:54.332317 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:13:54.333873 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:13:54.333918 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:13:54.336582 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:13:54.337522 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:13:54.340206 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:13:54.340366 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:13:54.341564 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:13:54.341605 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:13:54.342788 systemd-networkd[770]: eth1: DHCPv6 lease lost
Mar 14 00:13:54.347567 systemd-networkd[770]: eth0: DHCPv6 lease lost
Mar 14 00:13:54.349422 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:13:54.349736 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:13:54.351863 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:13:54.352430 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:13:54.354273 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:13:54.354331 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:13:54.359714 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:13:54.360205 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:13:54.360271 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:13:54.361355 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:13:54.361396 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:13:54.362970 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:13:54.363016 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:13:54.366131 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:13:54.366180 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:13:54.368442 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:13:54.381698 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:13:54.382474 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:13:54.389990 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:13:54.391278 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:13:54.394133 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:13:54.394205 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:13:54.395640 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:13:54.395675 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:13:54.397367 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:13:54.397413 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:13:54.399778 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:13:54.399823 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:13:54.401286 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:13:54.401335 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:54.410656 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:13:54.411349 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:13:54.411410 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:13:54.415264 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:13:54.415319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:54.419059 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:13:54.419175 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:13:54.420381 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:13:54.425721 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:13:54.433901 systemd[1]: Switching root.
Mar 14 00:13:54.464202 systemd-journald[237]: Journal stopped
Mar 14 00:13:55.347382 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:13:55.347441 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:13:55.347454 kernel: SELinux: policy capability open_perms=1
Mar 14 00:13:55.347468 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:13:55.347477 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:13:55.347486 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:13:55.351542 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:13:55.351556 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:13:55.351566 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:13:55.351576 kernel: audit: type=1403 audit(1773447234.578:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:13:55.351587 systemd[1]: Successfully loaded SELinux policy in 34.970ms.
Mar 14 00:13:55.351613 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.576ms.
Mar 14 00:13:55.351624 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:13:55.351640 systemd[1]: Detected virtualization kvm.
Mar 14 00:13:55.351650 systemd[1]: Detected architecture arm64.
Mar 14 00:13:55.351662 systemd[1]: Detected first boot.
Mar 14 00:13:55.351673 systemd[1]: Hostname set to .
Mar 14 00:13:55.351684 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:13:55.351694 zram_generator::config[1049]: No configuration found.
Mar 14 00:13:55.351705 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:13:55.351715 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:13:55.351724 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:13:55.351735 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:13:55.351747 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:13:55.351758 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:13:55.351768 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:13:55.351782 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:13:55.351793 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:13:55.351804 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:13:55.351814 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:13:55.351824 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:13:55.351835 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:55.351846 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:55.351856 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:13:55.351866 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:13:55.351877 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:13:55.351892 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:13:55.351902 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 14 00:13:55.351912 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:55.351923 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:13:55.351939 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:13:55.351949 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:13:55.351959 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:13:55.351974 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:55.351985 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:13:55.351999 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:13:55.352010 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:13:55.352022 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:13:55.352032 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:13:55.352042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:13:55.352053 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:13:55.352063 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:13:55.352074 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:13:55.352084 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:13:55.352094 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:13:55.352104 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:13:55.352118 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:13:55.352129 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:13:55.352139 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:13:55.352149 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:13:55.352160 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:13:55.352170 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:13:55.352181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:13:55.352199 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:13:55.352239 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:13:55.352250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:13:55.352261 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:13:55.352271 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:13:55.352281 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:13:55.352292 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:13:55.352304 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:13:55.352314 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:13:55.352324 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:13:55.352335 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:13:55.352345 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:13:55.352355 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:13:55.352365 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:13:55.352376 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:13:55.352388 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:13:55.352398 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:13:55.352408 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:13:55.352418 systemd[1]: Stopped verity-setup.service.
Mar 14 00:13:55.352428 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:13:55.352439 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:13:55.352450 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:13:55.352461 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:13:55.352471 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:13:55.352483 kernel: fuse: init (API version 7.39)
Mar 14 00:13:55.355560 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:13:55.355584 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:55.355596 kernel: loop: module loaded
Mar 14 00:13:55.355606 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:13:55.355624 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:13:55.355635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:13:55.355646 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:13:55.355657 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:13:55.355671 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:13:55.355681 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:13:55.355692 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:13:55.355704 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:13:55.355715 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:13:55.355725 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:13:55.355736 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:13:55.355746 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:13:55.355757 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:13:55.355767 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:13:55.355780 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:13:55.355790 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:13:55.355801 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:13:55.355836 systemd-journald[1126]: Collecting audit messages is disabled.
Mar 14 00:13:55.355861 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:13:55.355871 kernel: ACPI: bus type drm_connector registered
Mar 14 00:13:55.355880 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:13:55.355891 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:13:55.355903 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:13:55.355916 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:13:55.355927 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:13:55.355937 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:13:55.355951 systemd-journald[1126]: Journal started
Mar 14 00:13:55.355973 systemd-journald[1126]: Runtime Journal (/run/log/journal/1526eafa528c45d68d0aecd8ca03e2b7) is 8.0M, max 76.6M, 68.6M free.
Mar 14 00:13:55.357594 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:13:55.027785 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:13:55.043475 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 14 00:13:55.044051 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:13:55.364429 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:13:55.370817 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:13:55.374657 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:13:55.386511 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:13:55.390932 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:13:55.395755 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:13:55.401512 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:13:55.415069 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:13:55.420453 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:13:55.422320 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:55.425924 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:13:55.428857 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:13:55.435719 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:13:55.457540 kernel: loop0: detected capacity change from 0 to 209336
Mar 14 00:13:55.461077 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:13:55.472740 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:13:55.477686 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:13:55.481830 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:13:55.483908 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:13:55.487730 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:13:55.494682 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:13:55.503766 systemd-journald[1126]: Time spent on flushing to /var/log/journal/1526eafa528c45d68d0aecd8ca03e2b7 is 35.241ms for 1136 entries.
Mar 14 00:13:55.503766 systemd-journald[1126]: System Journal (/var/log/journal/1526eafa528c45d68d0aecd8ca03e2b7) is 8.0M, max 584.8M, 576.8M free.
Mar 14 00:13:55.553581 systemd-journald[1126]: Received client request to flush runtime journal.
Mar 14 00:13:55.553634 kernel: loop1: detected capacity change from 0 to 114328
Mar 14 00:13:55.522728 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 14 00:13:55.554817 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Mar 14 00:13:55.555088 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Mar 14 00:13:55.562170 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:13:55.566556 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:13:55.568519 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:13:55.573653 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:13:55.595530 kernel: loop2: detected capacity change from 0 to 114432
Mar 14 00:13:55.635523 kernel: loop3: detected capacity change from 0 to 8
Mar 14 00:13:55.652521 kernel: loop4: detected capacity change from 0 to 209336
Mar 14 00:13:55.675517 kernel: loop5: detected capacity change from 0 to 114328
Mar 14 00:13:55.693551 kernel: loop6: detected capacity change from 0 to 114432
Mar 14 00:13:55.714515 kernel: loop7: detected capacity change from 0 to 8
Mar 14 00:13:55.716189 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 14 00:13:55.716959 (sd-merge)[1190]: Merged extensions into '/usr'.
Mar 14 00:13:55.723602 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:13:55.723622 systemd[1]: Reloading...
Mar 14 00:13:55.804532 zram_generator::config[1216]: No configuration found.
Mar 14 00:13:55.886067 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:13:55.928687 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:13:55.974551 systemd[1]: Reloading finished in 250 ms.
Mar 14 00:13:55.997567 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:13:56.000822 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:13:56.008902 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:13:56.013712 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:13:56.026266 systemd[1]: Reloading requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:13:56.026280 systemd[1]: Reloading...
Mar 14 00:13:56.053275 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:13:56.053569 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:13:56.054195 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:13:56.054424 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Mar 14 00:13:56.054470 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Mar 14 00:13:56.056979 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:13:56.056997 systemd-tmpfiles[1254]: Skipping /boot
Mar 14 00:13:56.063900 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:13:56.063916 systemd-tmpfiles[1254]: Skipping /boot
Mar 14 00:13:56.098513 zram_generator::config[1281]: No configuration found.
Mar 14 00:13:56.188546 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:13:56.234347 systemd[1]: Reloading finished in 207 ms.
Mar 14 00:13:56.257540 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:13:56.263891 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:13:56.279970 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:13:56.285299 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 00:13:56.288788 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:13:56.297826 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:13:56.300766 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:13:56.309826 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:13:56.314420 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:13:56.320792 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:13:56.324827 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:13:56.328117 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:13:56.330009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:13:56.331843 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:13:56.331988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:13:56.336747 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 00:13:56.340336 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:13:56.344774 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:13:56.345749 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:13:56.351429 systemd[1]: Finished ensure-sysext.service.
Mar 14 00:13:56.366279 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 14 00:13:56.369159 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
Mar 14 00:13:56.369443 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 14 00:13:56.370690 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:13:56.370852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:13:56.372107 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:13:56.372249 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:13:56.384864 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:13:56.385552 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:13:56.391152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:13:56.395682 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 14 00:13:56.396701 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 00:13:56.397749 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:13:56.397890 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:13:56.400431 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:13:56.416544 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:13:56.416772 augenrules[1355]: No rules
Mar 14 00:13:56.424809 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:13:56.429994 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:13:56.432974 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 14 00:13:56.460742 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 00:13:56.471625 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 14 00:13:56.472457 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 00:13:56.572253 systemd-networkd[1364]: lo: Link UP
Mar 14 00:13:56.575533 systemd-networkd[1364]: lo: Gained carrier
Mar 14 00:13:56.578166 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 14 00:13:56.579673 systemd[1]: Reached target time-set.target - System Time Set.
Mar 14 00:13:56.598606 systemd-resolved[1330]: Positive Trust Anchors:
Mar 14 00:13:56.599418 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:13:56.599721 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:13:56.606707 systemd-resolved[1330]: Using system hostname 'ci-4081-3-6-n-c13e9e2860'.
Mar 14 00:13:56.608207 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:13:56.609066 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:56.611503 kernel: mousedev: PS/2 mouse device common for all mice
Mar 14 00:13:56.612994 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 14 00:13:56.619897 systemd-networkd[1364]: Enumeration completed
Mar 14 00:13:56.620327 systemd-networkd[1364]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:56.620393 systemd-networkd[1364]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:13:56.620558 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:13:56.621367 systemd[1]: Reached target network.target - Network.
Mar 14 00:13:56.622838 systemd-networkd[1364]: eth1: Link UP
Mar 14 00:13:56.622906 systemd-networkd[1364]: eth1: Gained carrier
Mar 14 00:13:56.622971 systemd-networkd[1364]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:56.627739 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 14 00:13:56.649376 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:56.649570 systemd-networkd[1364]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:13:56.650655 systemd-networkd[1364]: eth0: Link UP
Mar 14 00:13:56.650738 systemd-networkd[1364]: eth0: Gained carrier
Mar 14 00:13:56.650813 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:56.666782 systemd-networkd[1364]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 14 00:13:56.667734 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection.
Mar 14 00:13:56.675992 systemd-networkd[1364]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:56.701736 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 14 00:13:56.701859 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:13:56.713785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:13:56.715514 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:13:56.719511 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:13:56.720756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:13:56.720793 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 00:13:56.724997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:13:56.725168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:13:56.729725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:13:56.730564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:13:56.731656 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:13:56.739659 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1363)
Mar 14 00:13:56.739763 systemd-networkd[1364]: eth0: DHCPv4 address 168.119.153.241/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 14 00:13:56.740568 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection.
Mar 14 00:13:56.742617 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:13:56.742777 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:13:56.746330 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:13:56.758076 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Mar 14 00:13:56.758133 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 14 00:13:56.758150 kernel: [drm] features: -context_init
Mar 14 00:13:56.774679 kernel: [drm] number of scanouts: 1
Mar 14 00:13:56.780648 kernel: [drm] number of cap sets: 0
Mar 14 00:13:56.791660 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 14 00:13:56.803307 kernel: Console: switching to colour frame buffer device 160x50
Mar 14 00:13:56.810787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:56.812009 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 14 00:13:56.815435 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 14 00:13:56.823682 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:13:56.834030 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:13:56.834711 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:56.843658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:56.844627 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 14 00:13:56.896709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:56.984139 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:13:56.991850 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 00:13:57.005360 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:13:57.033137 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 14 00:13:57.035813 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:57.037087 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:13:57.038148 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 14 00:13:57.039042 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 14 00:13:57.039933 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 14 00:13:57.040651 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 14 00:13:57.041336 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 14 00:13:57.042086 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 00:13:57.042121 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:13:57.042653 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:13:57.044207 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 14 00:13:57.046339 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 14 00:13:57.052013 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 14 00:13:57.055419 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 14 00:13:57.056968 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 14 00:13:57.057945 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:13:57.058645 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:13:57.059234 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:13:57.059262 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:13:57.063700 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 14 00:13:57.069661 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 14 00:13:57.071288 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:13:57.071940 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 14 00:13:57.080597 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 14 00:13:57.084647 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 14 00:13:57.085384 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 14 00:13:57.089756 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 14 00:13:57.092483 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 14 00:13:57.096660 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 14 00:13:57.099380 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 14 00:13:57.104664 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 14 00:13:57.111182 jq[1445]: false
Mar 14 00:13:57.114886 dbus-daemon[1442]: [system] SELinux support is enabled
Mar 14 00:13:57.120958 coreos-metadata[1441]: Mar 14 00:13:57.112 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 14 00:13:57.120958 coreos-metadata[1441]: Mar 14 00:13:57.115 INFO Fetch successful
Mar 14 00:13:57.120958 coreos-metadata[1441]: Mar 14 00:13:57.115 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 14 00:13:57.120958 coreos-metadata[1441]: Mar 14 00:13:57.115 INFO Fetch successful
Mar 14 00:13:57.123653 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 14 00:13:57.125385 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 14 00:13:57.126865 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 14 00:13:57.127666 systemd[1]: Starting update-engine.service - Update Engine...
Mar 14 00:13:57.130713 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 14 00:13:57.132114 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 14 00:13:57.141389 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 14 00:13:57.141789 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 14 00:13:57.149804 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 14 00:13:57.161958 extend-filesystems[1446]: Found loop4
Mar 14 00:13:57.161958 extend-filesystems[1446]: Found loop5
Mar 14 00:13:57.161958 extend-filesystems[1446]: Found loop6
Mar 14 00:13:57.161958 extend-filesystems[1446]: Found loop7
Mar 14 00:13:57.161958 extend-filesystems[1446]: Found sda
Mar 14 00:13:57.161958 extend-filesystems[1446]: Found sda1
Mar 14 00:13:57.161958 extend-filesystems[1446]: Found sda2
Mar 14 00:13:57.161958 extend-filesystems[1446]: Found sda3
Mar 14 00:13:57.161958 extend-filesystems[1446]: Found usr
Mar 14 00:13:57.161958 extend-filesystems[1446]: Found sda4
Mar 14 00:13:57.161958 extend-filesystems[1446]: Found sda6
Mar 14 00:13:57.161958 extend-filesystems[1446]: Found sda7
Mar 14 00:13:57.161958 extend-filesystems[1446]: Found sda9
Mar 14 00:13:57.161958 extend-filesystems[1446]: Checking size of /dev/sda9
Mar 14 00:13:57.220686 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Mar 14 00:13:57.220736 jq[1454]: true
Mar 14 00:13:57.149957 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 00:13:57.222055 extend-filesystems[1446]: Resized partition /dev/sda9
Mar 14 00:13:57.163640 systemd[1]: motdgen.service: Deactivated successfully.
Mar 14 00:13:57.225173 tar[1461]: linux-arm64/LICENSE
Mar 14 00:13:57.225173 tar[1461]: linux-arm64/helm
Mar 14 00:13:57.228468 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024)
Mar 14 00:13:57.163996 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 14 00:13:57.181090 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 14 00:13:57.194268 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 14 00:13:57.194306 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:13:57.199145 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:13:57.235982 jq[1466]: true Mar 14 00:13:57.199169 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 14 00:13:57.244366 (ntainerd)[1484]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:13:57.276546 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1377) Mar 14 00:13:57.279993 update_engine[1453]: I20260314 00:13:57.279340 1453 main.cc:92] Flatcar Update Engine starting Mar 14 00:13:57.288586 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:13:57.292623 update_engine[1453]: I20260314 00:13:57.288988 1453 update_check_scheduler.cc:74] Next update check in 5m2s Mar 14 00:13:57.298703 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:13:57.327143 systemd-logind[1452]: New seat seat0. Mar 14 00:13:57.331788 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (Power Button) Mar 14 00:13:57.336647 systemd-logind[1452]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Mar 14 00:13:57.337301 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:13:57.340829 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 14 00:13:57.345180 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 14 00:13:57.387566 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Mar 14 00:13:57.407780 extend-filesystems[1481]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 14 00:13:57.407780 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 5 Mar 14 00:13:57.407780 extend-filesystems[1481]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Mar 14 00:13:57.420608 extend-filesystems[1446]: Resized filesystem in /dev/sda9 Mar 14 00:13:57.420608 extend-filesystems[1446]: Found sr0 Mar 14 00:13:57.427738 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:13:57.408808 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:13:57.408986 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:13:57.419817 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:13:57.431784 systemd[1]: Starting sshkeys.service... Mar 14 00:13:57.450440 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 14 00:13:57.453780 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 14 00:13:57.500591 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:13:57.514075 coreos-metadata[1523]: Mar 14 00:13:57.513 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 14 00:13:57.515599 coreos-metadata[1523]: Mar 14 00:13:57.515 INFO Fetch successful Mar 14 00:13:57.520810 unknown[1523]: wrote ssh authorized keys file for user: core Mar 14 00:13:57.548110 update-ssh-keys[1529]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:13:57.549147 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 14 00:13:57.554557 systemd[1]: Finished sshkeys.service. 
Mar 14 00:13:57.628673 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:13:57.643668 containerd[1484]: time="2026-03-14T00:13:57.643575960Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:13:57.659061 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:13:57.671834 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:13:57.686272 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:13:57.686532 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:13:57.697072 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:13:57.708391 containerd[1484]: time="2026-03-14T00:13:57.707788840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:57.709270 containerd[1484]: time="2026-03-14T00:13:57.709082640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:57.709270 containerd[1484]: time="2026-03-14T00:13:57.709119760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:13:57.709270 containerd[1484]: time="2026-03-14T00:13:57.709150840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:13:57.709372 containerd[1484]: time="2026-03-14T00:13:57.709329200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:13:57.709372 containerd[1484]: time="2026-03-14T00:13:57.709347360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:13:57.709413 containerd[1484]: time="2026-03-14T00:13:57.709402640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:57.709434 containerd[1484]: time="2026-03-14T00:13:57.709416640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:57.710527 containerd[1484]: time="2026-03-14T00:13:57.709587760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:57.710527 containerd[1484]: time="2026-03-14T00:13:57.709609560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:57.710527 containerd[1484]: time="2026-03-14T00:13:57.709622200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:57.710527 containerd[1484]: time="2026-03-14T00:13:57.709632360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:57.710527 containerd[1484]: time="2026-03-14T00:13:57.709709240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:57.710527 containerd[1484]: time="2026-03-14T00:13:57.709889960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:57.710527 containerd[1484]: time="2026-03-14T00:13:57.709981680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:57.710527 containerd[1484]: time="2026-03-14T00:13:57.709995120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:13:57.710527 containerd[1484]: time="2026-03-14T00:13:57.710077080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:13:57.710527 containerd[1484]: time="2026-03-14T00:13:57.710121200Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:13:57.715484 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:13:57.717274 containerd[1484]: time="2026-03-14T00:13:57.717177680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:13:57.717274 containerd[1484]: time="2026-03-14T00:13:57.717268080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:13:57.717759 containerd[1484]: time="2026-03-14T00:13:57.717284520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:13:57.717759 containerd[1484]: time="2026-03-14T00:13:57.717301320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:13:57.717759 containerd[1484]: time="2026-03-14T00:13:57.717316320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 14 00:13:57.718840 containerd[1484]: time="2026-03-14T00:13:57.717479960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:13:57.719116 containerd[1484]: time="2026-03-14T00:13:57.719086800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 14 00:13:57.719299 containerd[1484]: time="2026-03-14T00:13:57.719238360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:13:57.719299 containerd[1484]: time="2026-03-14T00:13:57.719262480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:13:57.719299 containerd[1484]: time="2026-03-14T00:13:57.719276520Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:13:57.719299 containerd[1484]: time="2026-03-14T00:13:57.719295080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:13:57.719483 containerd[1484]: time="2026-03-14T00:13:57.719309280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:13:57.719483 containerd[1484]: time="2026-03-14T00:13:57.719323120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:13:57.719483 containerd[1484]: time="2026-03-14T00:13:57.719342840Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:13:57.719483 containerd[1484]: time="2026-03-14T00:13:57.719357560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 14 00:13:57.719483 containerd[1484]: time="2026-03-14T00:13:57.719371400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:13:57.719483 containerd[1484]: time="2026-03-14T00:13:57.719384480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Mar 14 00:13:57.719483 containerd[1484]: time="2026-03-14T00:13:57.719398400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:13:57.719483 containerd[1484]: time="2026-03-14T00:13:57.719441080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.719483 containerd[1484]: time="2026-03-14T00:13:57.719455880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.719483 containerd[1484]: time="2026-03-14T00:13:57.719474240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.720116 containerd[1484]: time="2026-03-14T00:13:57.720079440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.720165 containerd[1484]: time="2026-03-14T00:13:57.720119880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.720165 containerd[1484]: time="2026-03-14T00:13:57.720139280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.720165 containerd[1484]: time="2026-03-14T00:13:57.720152680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.720241 containerd[1484]: time="2026-03-14T00:13:57.720168040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.720241 containerd[1484]: time="2026-03-14T00:13:57.720195840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.720241 containerd[1484]: time="2026-03-14T00:13:57.720217600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Mar 14 00:13:57.720241 containerd[1484]: time="2026-03-14T00:13:57.720231360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.720314 containerd[1484]: time="2026-03-14T00:13:57.720245040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.720314 containerd[1484]: time="2026-03-14T00:13:57.720259160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.720314 containerd[1484]: time="2026-03-14T00:13:57.720283760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:13:57.720367 containerd[1484]: time="2026-03-14T00:13:57.720312680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.720367 containerd[1484]: time="2026-03-14T00:13:57.720326680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.720367 containerd[1484]: time="2026-03-14T00:13:57.720350120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:13:57.724474 containerd[1484]: time="2026-03-14T00:13:57.720471560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:13:57.724474 containerd[1484]: time="2026-03-14T00:13:57.720516160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:13:57.724474 containerd[1484]: time="2026-03-14T00:13:57.720529960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Mar 14 00:13:57.724474 containerd[1484]: time="2026-03-14T00:13:57.720543600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:13:57.724474 containerd[1484]: time="2026-03-14T00:13:57.720553720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.724474 containerd[1484]: time="2026-03-14T00:13:57.720570440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:13:57.724474 containerd[1484]: time="2026-03-14T00:13:57.720585320Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:13:57.724474 containerd[1484]: time="2026-03-14T00:13:57.720595760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.720894640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.720953000Z" level=info msg="Connect containerd service" Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.721057400Z" level=info msg="using legacy CRI server" Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.721064520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:13:57.724842 containerd[1484]: 
time="2026-03-14T00:13:57.721158720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.721848720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.722400560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.722441160Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.722652360Z" level=info msg="Start subscribing containerd event" Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.723151160Z" level=info msg="Start recovering state" Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.723249880Z" level=info msg="Start event monitor" Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.723263440Z" level=info msg="Start snapshots syncer" Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.723278080Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.723285920Z" level=info msg="Start streaming server" Mar 14 00:13:57.724842 containerd[1484]: time="2026-03-14T00:13:57.723410320Z" level=info msg="containerd successfully booted in 0.085073s" Mar 14 00:13:57.725877 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:13:57.728632 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 14 00:13:57.730701 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:13:57.731574 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 14 00:13:57.767654 systemd-networkd[1364]: eth1: Gained IPv6LL Mar 14 00:13:57.768466 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Mar 14 00:13:57.771692 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:13:57.773942 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:13:57.781772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:57.786286 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:13:57.820544 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:13:57.962345 tar[1461]: linux-arm64/README.md Mar 14 00:13:57.975212 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 14 00:13:58.279735 systemd-networkd[1364]: eth0: Gained IPv6LL Mar 14 00:13:58.281568 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Mar 14 00:13:58.504087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:13:58.506584 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:13:58.508307 systemd[1]: Startup finished in 757ms (kernel) + 4.882s (initrd) + 3.965s (userspace) = 9.604s. 
Mar 14 00:13:58.514405 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:13:59.004741 kubelet[1572]: E0314 00:13:59.004673 1572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:13:59.008906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:13:59.009136 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:09.036314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 14 00:14:09.041893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:09.165731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:09.167371 (kubelet)[1591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:09.202874 kubelet[1591]: E0314 00:14:09.202764 1591 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:09.206236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:09.206460 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:15.628333 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 14 00:14:15.633977 systemd[1]: Started sshd@0-168.119.153.241:22-178.208.94.76:45896.service - OpenSSH per-connection server daemon (178.208.94.76:45896). Mar 14 00:14:15.880793 sshd[1600]: Invalid user admin from 178.208.94.76 port 45896 Mar 14 00:14:15.939514 sshd[1600]: Connection closed by invalid user admin 178.208.94.76 port 45896 [preauth] Mar 14 00:14:15.943260 systemd[1]: sshd@0-168.119.153.241:22-178.208.94.76:45896.service: Deactivated successfully. Mar 14 00:14:19.290475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 14 00:14:19.299816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:19.416144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:19.424220 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:19.475751 kubelet[1612]: E0314 00:14:19.475708 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:19.478708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:19.478894 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:28.323373 systemd-timesyncd[1344]: Contacted time server 195.201.125.53:123 (2.flatcar.pool.ntp.org). Mar 14 00:14:28.323472 systemd-timesyncd[1344]: Initial clock synchronization to Sat 2026-03-14 00:14:28.180522 UTC. Mar 14 00:14:29.536396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 14 00:14:29.543793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 14 00:14:29.676913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:29.678690 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:29.713575 kubelet[1627]: E0314 00:14:29.713480 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:29.716714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:29.716981 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:36.535022 systemd[1]: Started sshd@1-168.119.153.241:22-80.94.95.115:46296.service - OpenSSH per-connection server daemon (80.94.95.115:46296). Mar 14 00:14:38.895423 sshd[1634]: Connection closed by authenticating user root 80.94.95.115 port 46296 [preauth] Mar 14 00:14:38.897961 systemd[1]: sshd@1-168.119.153.241:22-80.94.95.115:46296.service: Deactivated successfully. Mar 14 00:14:39.055195 systemd[1]: Started sshd@2-168.119.153.241:22-68.220.241.50:49126.service - OpenSSH per-connection server daemon (68.220.241.50:49126). Mar 14 00:14:39.653569 sshd[1639]: Accepted publickey for core from 68.220.241.50 port 49126 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:14:39.656040 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:39.669585 systemd-logind[1452]: New session 1 of user core. Mar 14 00:14:39.669755 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:14:39.682730 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 14 00:14:39.695816 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:14:39.707108 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:14:39.711103 (systemd)[1643]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:14:39.786999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 14 00:14:39.794803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:39.823662 systemd[1643]: Queued start job for default target default.target. Mar 14 00:14:39.830258 systemd[1643]: Created slice app.slice - User Application Slice. Mar 14 00:14:39.830665 systemd[1643]: Reached target paths.target - Paths. Mar 14 00:14:39.830784 systemd[1643]: Reached target timers.target - Timers. Mar 14 00:14:39.836690 systemd[1643]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:14:39.859930 systemd[1643]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:14:39.860332 systemd[1643]: Reached target sockets.target - Sockets. Mar 14 00:14:39.860446 systemd[1643]: Reached target basic.target - Basic System. Mar 14 00:14:39.860679 systemd[1643]: Reached target default.target - Main User Target. Mar 14 00:14:39.860850 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:14:39.860984 systemd[1643]: Startup finished in 142ms. Mar 14 00:14:39.868738 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:14:39.930387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:14:39.934990 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:39.976896 kubelet[1660]: E0314 00:14:39.976839 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:39.980010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:39.980290 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:40.307032 systemd[1]: Started sshd@3-168.119.153.241:22-68.220.241.50:49142.service - OpenSSH per-connection server daemon (68.220.241.50:49142). Mar 14 00:14:40.890085 sshd[1669]: Accepted publickey for core from 68.220.241.50 port 49142 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:14:40.892110 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:40.898717 systemd-logind[1452]: New session 2 of user core. Mar 14 00:14:40.904701 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:14:41.309076 sshd[1669]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:41.316411 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:14:41.316739 systemd[1]: sshd@3-168.119.153.241:22-68.220.241.50:49142.service: Deactivated successfully. Mar 14 00:14:41.319312 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:14:41.323923 systemd-logind[1452]: Removed session 2. Mar 14 00:14:41.424993 systemd[1]: Started sshd@4-168.119.153.241:22-68.220.241.50:49146.service - OpenSSH per-connection server daemon (68.220.241.50:49146). 
Mar 14 00:14:42.009508 sshd[1676]: Accepted publickey for core from 68.220.241.50 port 49146 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:14:42.011688 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:42.016704 systemd-logind[1452]: New session 3 of user core. Mar 14 00:14:42.022808 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:14:42.424856 sshd[1676]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:42.429628 systemd[1]: sshd@4-168.119.153.241:22-68.220.241.50:49146.service: Deactivated successfully. Mar 14 00:14:42.432045 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:14:42.434195 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:14:42.435857 systemd-logind[1452]: Removed session 3. Mar 14 00:14:42.546063 systemd[1]: Started sshd@5-168.119.153.241:22-68.220.241.50:47524.service - OpenSSH per-connection server daemon (68.220.241.50:47524). Mar 14 00:14:42.904305 update_engine[1453]: I20260314 00:14:42.904179 1453 update_attempter.cc:509] Updating boot flags... Mar 14 00:14:42.954556 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1694) Mar 14 00:14:43.009550 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1697) Mar 14 00:14:43.084537 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1697) Mar 14 00:14:43.134187 sshd[1683]: Accepted publickey for core from 68.220.241.50 port 47524 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:14:43.136696 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:43.143091 systemd-logind[1452]: New session 4 of user core. Mar 14 00:14:43.151781 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 14 00:14:43.554183 sshd[1683]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:43.560116 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit.
Mar 14 00:14:43.560247 systemd[1]: sshd@5-168.119.153.241:22-68.220.241.50:47524.service: Deactivated successfully.
Mar 14 00:14:43.562247 systemd[1]: session-4.scope: Deactivated successfully.
Mar 14 00:14:43.563143 systemd-logind[1452]: Removed session 4.
Mar 14 00:14:43.663925 systemd[1]: Started sshd@6-168.119.153.241:22-68.220.241.50:47530.service - OpenSSH per-connection server daemon (68.220.241.50:47530).
Mar 14 00:14:44.247461 sshd[1711]: Accepted publickey for core from 68.220.241.50 port 47530 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:14:44.249422 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:44.256362 systemd-logind[1452]: New session 5 of user core.
Mar 14 00:14:44.262830 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 14 00:14:44.582201 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 14 00:14:44.582520 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:44.602624 sudo[1714]: pam_unix(sudo:session): session closed for user root
Mar 14 00:14:44.697380 sshd[1711]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:44.702788 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit.
Mar 14 00:14:44.703551 systemd[1]: sshd@6-168.119.153.241:22-68.220.241.50:47530.service: Deactivated successfully.
Mar 14 00:14:44.706097 systemd[1]: session-5.scope: Deactivated successfully.
Mar 14 00:14:44.709176 systemd-logind[1452]: Removed session 5.
Mar 14 00:14:44.812269 systemd[1]: Started sshd@7-168.119.153.241:22-68.220.241.50:47546.service - OpenSSH per-connection server daemon (68.220.241.50:47546).
Mar 14 00:14:45.401524 sshd[1719]: Accepted publickey for core from 68.220.241.50 port 47546 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:14:45.402708 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:45.408470 systemd-logind[1452]: New session 6 of user core.
Mar 14 00:14:45.413782 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 14 00:14:45.725651 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 14 00:14:45.725929 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:45.729539 sudo[1723]: pam_unix(sudo:session): session closed for user root
Mar 14 00:14:45.734994 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 14 00:14:45.735620 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:45.748952 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 14 00:14:45.760548 auditctl[1726]: No rules
Mar 14 00:14:45.762318 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 14 00:14:45.762674 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 14 00:14:45.770146 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:14:45.795269 augenrules[1744]: No rules
Mar 14 00:14:45.797573 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:14:45.799933 sudo[1722]: pam_unix(sudo:session): session closed for user root
Mar 14 00:14:45.890866 systemd[1]: Started sshd@8-168.119.153.241:22-178.208.94.76:49930.service - OpenSSH per-connection server daemon (178.208.94.76:49930).
Mar 14 00:14:45.895884 sshd[1719]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:45.899712 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit.
Mar 14 00:14:45.900385 systemd[1]: sshd@7-168.119.153.241:22-68.220.241.50:47546.service: Deactivated successfully.
Mar 14 00:14:45.902909 systemd[1]: session-6.scope: Deactivated successfully.
Mar 14 00:14:45.904983 systemd-logind[1452]: Removed session 6.
Mar 14 00:14:46.013022 systemd[1]: Started sshd@9-168.119.153.241:22-68.220.241.50:47562.service - OpenSSH per-connection server daemon (68.220.241.50:47562).
Mar 14 00:14:46.141669 sshd[1750]: Invalid user orangepi from 178.208.94.76 port 49930
Mar 14 00:14:46.200568 sshd[1750]: Connection closed by invalid user orangepi 178.208.94.76 port 49930 [preauth]
Mar 14 00:14:46.202985 systemd[1]: sshd@8-168.119.153.241:22-178.208.94.76:49930.service: Deactivated successfully.
Mar 14 00:14:46.599892 sshd[1755]: Accepted publickey for core from 68.220.241.50 port 47562 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:14:46.602143 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:46.607859 systemd-logind[1452]: New session 7 of user core.
Mar 14 00:14:46.616911 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 14 00:14:46.924444 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 14 00:14:46.924759 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:47.224831 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 14 00:14:47.233005 (dockerd)[1776]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 14 00:14:47.476559 dockerd[1776]: time="2026-03-14T00:14:47.475257595Z" level=info msg="Starting up"
Mar 14 00:14:47.554651 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2582598366-merged.mount: Deactivated successfully.
Mar 14 00:14:47.569377 systemd[1]: var-lib-docker-metacopy\x2dcheck1929016540-merged.mount: Deactivated successfully.
Mar 14 00:14:47.577370 dockerd[1776]: time="2026-03-14T00:14:47.577075618Z" level=info msg="Loading containers: start."
Mar 14 00:14:47.675541 kernel: Initializing XFRM netlink socket
Mar 14 00:14:47.761287 systemd-networkd[1364]: docker0: Link UP
Mar 14 00:14:47.781124 dockerd[1776]: time="2026-03-14T00:14:47.780138429Z" level=info msg="Loading containers: done."
Mar 14 00:14:47.798884 dockerd[1776]: time="2026-03-14T00:14:47.798813719Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:14:47.799090 dockerd[1776]: time="2026-03-14T00:14:47.798953653Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:14:47.799137 dockerd[1776]: time="2026-03-14T00:14:47.799109961Z" level=info msg="Daemon has completed initialization"
Mar 14 00:14:47.835905 dockerd[1776]: time="2026-03-14T00:14:47.835556760Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:14:47.835916 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:14:48.311077 containerd[1484]: time="2026-03-14T00:14:48.310755448Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 14 00:14:48.852111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100488851.mount: Deactivated successfully.
Mar 14 00:14:49.782947 containerd[1484]: time="2026-03-14T00:14:49.782897470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:49.784985 containerd[1484]: time="2026-03-14T00:14:49.784927203Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390272"
Mar 14 00:14:49.786109 containerd[1484]: time="2026-03-14T00:14:49.786013380Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:49.790762 containerd[1484]: time="2026-03-14T00:14:49.790734068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:49.793348 containerd[1484]: time="2026-03-14T00:14:49.792696483Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 1.481895497s"
Mar 14 00:14:49.793348 containerd[1484]: time="2026-03-14T00:14:49.792752054Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 14 00:14:49.793639 containerd[1484]: time="2026-03-14T00:14:49.793589659Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 14 00:14:50.036841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 14 00:14:50.050795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:50.168036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:50.173452 (kubelet)[1979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:14:50.216382 kubelet[1979]: E0314 00:14:50.216132 1979 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:14:50.219893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:14:50.220043 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:14:50.904008 containerd[1484]: time="2026-03-14T00:14:50.903949980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:50.906195 containerd[1484]: time="2026-03-14T00:14:50.906160631Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552126"
Mar 14 00:14:50.908508 containerd[1484]: time="2026-03-14T00:14:50.907431417Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:50.911082 containerd[1484]: time="2026-03-14T00:14:50.911054141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:50.913298 containerd[1484]: time="2026-03-14T00:14:50.913267508Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.119631824s"
Mar 14 00:14:50.913415 containerd[1484]: time="2026-03-14T00:14:50.913399366Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 14 00:14:50.913918 containerd[1484]: time="2026-03-14T00:14:50.913894590Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 14 00:14:51.873160 containerd[1484]: time="2026-03-14T00:14:51.873108408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:51.874399 containerd[1484]: time="2026-03-14T00:14:51.874332410Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301325"
Mar 14 00:14:51.876516 containerd[1484]: time="2026-03-14T00:14:51.875216175Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:51.878501 containerd[1484]: time="2026-03-14T00:14:51.878455032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:51.879950 containerd[1484]: time="2026-03-14T00:14:51.879910455Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 965.903505ms"
Mar 14 00:14:51.880068 containerd[1484]: time="2026-03-14T00:14:51.880051402Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 14 00:14:51.880601 containerd[1484]: time="2026-03-14T00:14:51.880553047Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 14 00:14:52.783394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount609363883.mount: Deactivated successfully.
Mar 14 00:14:53.132076 containerd[1484]: time="2026-03-14T00:14:53.131993626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:53.133470 containerd[1484]: time="2026-03-14T00:14:53.133247078Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148896"
Mar 14 00:14:53.134515 containerd[1484]: time="2026-03-14T00:14:53.134434059Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:53.137235 containerd[1484]: time="2026-03-14T00:14:53.137182989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:53.139421 containerd[1484]: time="2026-03-14T00:14:53.138989841Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.258286372s"
Mar 14 00:14:53.139421 containerd[1484]: time="2026-03-14T00:14:53.139056873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 14 00:14:53.139873 containerd[1484]: time="2026-03-14T00:14:53.139765919Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 14 00:14:53.668883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159758095.mount: Deactivated successfully.
Mar 14 00:14:54.379732 containerd[1484]: time="2026-03-14T00:14:54.379660571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:54.381604 containerd[1484]: time="2026-03-14T00:14:54.380953072Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Mar 14 00:14:54.383394 containerd[1484]: time="2026-03-14T00:14:54.382700166Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:54.389745 containerd[1484]: time="2026-03-14T00:14:54.388633808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:54.390582 containerd[1484]: time="2026-03-14T00:14:54.390545477Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.250678228s"
Mar 14 00:14:54.390582 containerd[1484]: time="2026-03-14T00:14:54.390580934Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 14 00:14:54.391015 containerd[1484]: time="2026-03-14T00:14:54.390977043Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 14 00:14:54.854820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4231367538.mount: Deactivated successfully.
Mar 14 00:14:54.860036 containerd[1484]: time="2026-03-14T00:14:54.859987793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:54.860873 containerd[1484]: time="2026-03-14T00:14:54.860837095Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Mar 14 00:14:54.861989 containerd[1484]: time="2026-03-14T00:14:54.861723853Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:54.864089 containerd[1484]: time="2026-03-14T00:14:54.863990897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:54.865680 containerd[1484]: time="2026-03-14T00:14:54.864972236Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 473.963933ms"
Mar 14 00:14:54.865680 containerd[1484]: time="2026-03-14T00:14:54.865005175Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 14 00:14:54.865680 containerd[1484]: time="2026-03-14T00:14:54.865469321Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 14 00:14:55.372360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103136940.mount: Deactivated successfully.
Mar 14 00:14:56.066176 containerd[1484]: time="2026-03-14T00:14:56.066109648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:56.067413 containerd[1484]: time="2026-03-14T00:14:56.067370197Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885878"
Mar 14 00:14:56.069192 containerd[1484]: time="2026-03-14T00:14:56.068441757Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:56.076800 containerd[1484]: time="2026-03-14T00:14:56.076752848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:56.078274 containerd[1484]: time="2026-03-14T00:14:56.078235769Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.212740062s"
Mar 14 00:14:56.078399 containerd[1484]: time="2026-03-14T00:14:56.078379859Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 14 00:15:00.285765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 14 00:15:00.295736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:15:00.427165 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:15:00.428666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:15:00.468792 kubelet[2150]: E0314 00:15:00.468742 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:15:00.472269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:15:00.472400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:15:01.402388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:15:01.419079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:15:01.460783 systemd[1]: Reloading requested from client PID 2165 ('systemctl') (unit session-7.scope)...
Mar 14 00:15:01.460800 systemd[1]: Reloading...
Mar 14 00:15:01.576528 zram_generator::config[2214]: No configuration found.
Mar 14 00:15:01.663890 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:15:01.733670 systemd[1]: Reloading finished in 272 ms.
Mar 14 00:15:01.783603 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 14 00:15:01.784002 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 14 00:15:01.784908 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:15:01.794987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:15:01.924300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:15:01.937066 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:15:01.985636 kubelet[2252]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:15:01.985636 kubelet[2252]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:15:01.985636 kubelet[2252]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:15:01.986018 kubelet[2252]: I0314 00:15:01.985683 2252 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:15:03.205031 kubelet[2252]: I0314 00:15:03.204951 2252 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 14 00:15:03.205031 kubelet[2252]: I0314 00:15:03.204991 2252 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:15:03.205719 kubelet[2252]: I0314 00:15:03.205245 2252 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:15:03.235557 kubelet[2252]: E0314 00:15:03.235447 2252 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://168.119.153.241:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 168.119.153.241:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:15:03.239644 kubelet[2252]: I0314 00:15:03.239457 2252 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:15:03.250630 kubelet[2252]: E0314 00:15:03.250549 2252 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:15:03.250837 kubelet[2252]: I0314 00:15:03.250820 2252 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:15:03.254395 kubelet[2252]: I0314 00:15:03.254363 2252 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 14 00:15:03.257170 kubelet[2252]: I0314 00:15:03.256592 2252 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:15:03.257170 kubelet[2252]: I0314 00:15:03.256639 2252 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-c13e9e2860","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:15:03.257170 kubelet[2252]: I0314 00:15:03.256798 2252 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:15:03.257170 kubelet[2252]: I0314 00:15:03.256815 2252 container_manager_linux.go:303] "Creating device plugin manager"
Mar 14 00:15:03.257170 kubelet[2252]: I0314 00:15:03.257041 2252 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:15:03.261555 kubelet[2252]: I0314 00:15:03.261520 2252 kubelet.go:480] "Attempting to sync node with API server"
Mar 14 00:15:03.261730 kubelet[2252]: I0314 00:15:03.261716 2252 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:15:03.261818 kubelet[2252]: I0314 00:15:03.261807 2252 kubelet.go:386] "Adding apiserver pod source"
Mar 14 00:15:03.261891 kubelet[2252]: I0314 00:15:03.261881 2252 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:15:03.267552 kubelet[2252]: E0314 00:15:03.267513 2252 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://168.119.153.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-c13e9e2860&limit=500&resourceVersion=0\": dial tcp 168.119.153.241:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:15:03.268228 kubelet[2252]: E0314 00:15:03.268015 2252 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://168.119.153.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.153.241:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:15:03.270358 kubelet[2252]: I0314 00:15:03.268461 2252 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:15:03.270358 kubelet[2252]: I0314 00:15:03.269520 2252 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:15:03.270358 kubelet[2252]: W0314 00:15:03.269676 2252 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 00:15:03.274243 kubelet[2252]: I0314 00:15:03.274223 2252 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 14 00:15:03.274401 kubelet[2252]: I0314 00:15:03.274390 2252 server.go:1289] "Started kubelet"
Mar 14 00:15:03.275656 kubelet[2252]: I0314 00:15:03.275634 2252 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:15:03.280641 kubelet[2252]: E0314 00:15:03.279269 2252 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.153.241:6443/api/v1/namespaces/default/events\": dial tcp 168.119.153.241:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-c13e9e2860.189c8ced1b8aba8a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-c13e9e2860,UID:ci-4081-3-6-n-c13e9e2860,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-c13e9e2860,},FirstTimestamp:2026-03-14 00:15:03.274347146 +0000 UTC m=+1.330370371,LastTimestamp:2026-03-14 00:15:03.274347146 +0000 UTC m=+1.330370371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-c13e9e2860,}"
Mar 14 00:15:03.282914 kubelet[2252]: E0314 00:15:03.282874 2252 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:15:03.283355 kubelet[2252]: I0314 00:15:03.283318 2252 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:15:03.284273 kubelet[2252]: I0314 00:15:03.284242 2252 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:15:03.284690 kubelet[2252]: I0314 00:15:03.284672 2252 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 14 00:15:03.285100 kubelet[2252]: E0314 00:15:03.285075 2252 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-c13e9e2860\" not found"
Mar 14 00:15:03.287737 kubelet[2252]: I0314 00:15:03.287676 2252 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:15:03.287935 kubelet[2252]: I0314 00:15:03.287912 2252 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:15:03.288342 kubelet[2252]: I0314 00:15:03.288327 2252 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 14 00:15:03.288437 kubelet[2252]: I0314 00:15:03.288407 2252 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:15:03.288526 kubelet[2252]: I0314 00:15:03.288516 2252 reconciler.go:26] "Reconciler: start to sync state"
Mar 14 00:15:03.290977 kubelet[2252]: I0314 00:15:03.290945 2252 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:15:03.291377 kubelet[2252]: I0314 00:15:03.291343 2252 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:15:03.291959 kubelet[2252]: E0314 00:15:03.291747 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.153.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-c13e9e2860?timeout=10s\": dial tcp 168.119.153.241:6443: connect: connection refused" interval="200ms"
Mar 14 00:15:03.296322 kubelet[2252]: E0314 00:15:03.293219 2252 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://168.119.153.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.153.241:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:15:03.296322 kubelet[2252]: I0314 00:15:03.293839 2252 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:15:03.309283 kubelet[2252]: I0314 00:15:03.309155 2252 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:15:03.314416 kubelet[2252]: I0314 00:15:03.314336 2252 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:15:03.314416 kubelet[2252]: I0314 00:15:03.314414 2252 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 14 00:15:03.314592 kubelet[2252]: I0314 00:15:03.314436 2252 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:15:03.314592 kubelet[2252]: I0314 00:15:03.314444 2252 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:15:03.314592 kubelet[2252]: E0314 00:15:03.314482 2252 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:15:03.318353 kubelet[2252]: E0314 00:15:03.318321 2252 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://168.119.153.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.153.241:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:15:03.318682 kubelet[2252]: I0314 00:15:03.318664 2252 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:15:03.318759 kubelet[2252]: I0314 00:15:03.318749 2252 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:15:03.318815 kubelet[2252]: I0314 00:15:03.318807 2252 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:15:03.320961 kubelet[2252]: I0314 00:15:03.320945 2252 policy_none.go:49] "None policy: Start" Mar 14 00:15:03.321076 kubelet[2252]: I0314 00:15:03.321066 2252 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:15:03.321146 kubelet[2252]: I0314 00:15:03.321138 2252 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:15:03.326757 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 14 00:15:03.338457 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 14 00:15:03.342480 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 14 00:15:03.352893 kubelet[2252]: E0314 00:15:03.352826 2252 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:15:03.353170 kubelet[2252]: I0314 00:15:03.353117 2252 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:15:03.353170 kubelet[2252]: I0314 00:15:03.353142 2252 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:15:03.353990 kubelet[2252]: I0314 00:15:03.353764 2252 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:15:03.354917 kubelet[2252]: E0314 00:15:03.354899 2252 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:15:03.355195 kubelet[2252]: E0314 00:15:03.355112 2252 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-c13e9e2860\" not found" Mar 14 00:15:03.431868 systemd[1]: Created slice kubepods-burstable-pod866b5f3c6841f8fcd8a93be140891ea9.slice - libcontainer container kubepods-burstable-pod866b5f3c6841f8fcd8a93be140891ea9.slice. 
Mar 14 00:15:03.453479 kubelet[2252]: E0314 00:15:03.453214 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.457819 kubelet[2252]: I0314 00:15:03.457373 2252 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.457915 kubelet[2252]: E0314 00:15:03.457884 2252 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.153.241:6443/api/v1/nodes\": dial tcp 168.119.153.241:6443: connect: connection refused" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.460611 systemd[1]: Created slice kubepods-burstable-pod5f5acf374feab9cbf566c936ab3daefc.slice - libcontainer container kubepods-burstable-pod5f5acf374feab9cbf566c936ab3daefc.slice. Mar 14 00:15:03.463580 kubelet[2252]: E0314 00:15:03.463524 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.468463 systemd[1]: Created slice kubepods-burstable-pod42741dc7b319f1e0a0e9cfc197092d3e.slice - libcontainer container kubepods-burstable-pod42741dc7b319f1e0a0e9cfc197092d3e.slice. 
Mar 14 00:15:03.471422 kubelet[2252]: E0314 00:15:03.471358 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.494918 kubelet[2252]: E0314 00:15:03.494059 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.153.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-c13e9e2860?timeout=10s\": dial tcp 168.119.153.241:6443: connect: connection refused" interval="400ms" Mar 14 00:15:03.590362 kubelet[2252]: I0314 00:15:03.589972 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f5acf374feab9cbf566c936ab3daefc-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-c13e9e2860\" (UID: \"5f5acf374feab9cbf566c936ab3daefc\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.590362 kubelet[2252]: I0314 00:15:03.590060 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f5acf374feab9cbf566c936ab3daefc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-c13e9e2860\" (UID: \"5f5acf374feab9cbf566c936ab3daefc\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.590362 kubelet[2252]: I0314 00:15:03.590103 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42741dc7b319f1e0a0e9cfc197092d3e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-c13e9e2860\" (UID: \"42741dc7b319f1e0a0e9cfc197092d3e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.590362 kubelet[2252]: I0314 00:15:03.590130 2252 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/866b5f3c6841f8fcd8a93be140891ea9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-c13e9e2860\" (UID: \"866b5f3c6841f8fcd8a93be140891ea9\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.590362 kubelet[2252]: I0314 00:15:03.590155 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f5acf374feab9cbf566c936ab3daefc-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-c13e9e2860\" (UID: \"5f5acf374feab9cbf566c936ab3daefc\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.590710 kubelet[2252]: I0314 00:15:03.590179 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42741dc7b319f1e0a0e9cfc197092d3e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-c13e9e2860\" (UID: \"42741dc7b319f1e0a0e9cfc197092d3e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.590710 kubelet[2252]: I0314 00:15:03.590204 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42741dc7b319f1e0a0e9cfc197092d3e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-c13e9e2860\" (UID: \"42741dc7b319f1e0a0e9cfc197092d3e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.590710 kubelet[2252]: I0314 00:15:03.590227 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42741dc7b319f1e0a0e9cfc197092d3e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-c13e9e2860\" (UID: \"42741dc7b319f1e0a0e9cfc197092d3e\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.590710 kubelet[2252]: I0314 00:15:03.590256 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42741dc7b319f1e0a0e9cfc197092d3e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-c13e9e2860\" (UID: \"42741dc7b319f1e0a0e9cfc197092d3e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.662140 kubelet[2252]: I0314 00:15:03.661610 2252 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.662562 kubelet[2252]: E0314 00:15:03.662525 2252 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.153.241:6443/api/v1/nodes\": dial tcp 168.119.153.241:6443: connect: connection refused" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:03.757030 containerd[1484]: time="2026-03-14T00:15:03.756153067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-c13e9e2860,Uid:866b5f3c6841f8fcd8a93be140891ea9,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:03.765805 containerd[1484]: time="2026-03-14T00:15:03.765721185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-c13e9e2860,Uid:5f5acf374feab9cbf566c936ab3daefc,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:03.773269 containerd[1484]: time="2026-03-14T00:15:03.773194224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-c13e9e2860,Uid:42741dc7b319f1e0a0e9cfc197092d3e,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:03.895535 kubelet[2252]: E0314 00:15:03.895426 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.153.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-c13e9e2860?timeout=10s\": dial 
tcp 168.119.153.241:6443: connect: connection refused" interval="800ms" Mar 14 00:15:04.065191 kubelet[2252]: I0314 00:15:04.064876 2252 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:04.065510 kubelet[2252]: E0314 00:15:04.065380 2252 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.153.241:6443/api/v1/nodes\": dial tcp 168.119.153.241:6443: connect: connection refused" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:04.211213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2415438857.mount: Deactivated successfully. Mar 14 00:15:04.219575 containerd[1484]: time="2026-03-14T00:15:04.219514673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:15:04.220870 containerd[1484]: time="2026-03-14T00:15:04.220798193Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:15:04.222127 containerd[1484]: time="2026-03-14T00:15:04.222094072Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:15:04.223296 containerd[1484]: time="2026-03-14T00:15:04.222355512Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:15:04.223296 containerd[1484]: time="2026-03-14T00:15:04.222448752Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:15:04.223838 containerd[1484]: time="2026-03-14T00:15:04.223801632Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:15:04.224961 containerd[1484]: time="2026-03-14T00:15:04.224869272Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Mar 14 00:15:04.226721 containerd[1484]: time="2026-03-14T00:15:04.226633512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:15:04.229050 containerd[1484]: time="2026-03-14T00:15:04.228804071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 455.487047ms" Mar 14 00:15:04.230920 containerd[1484]: time="2026-03-14T00:15:04.230873871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 465.035166ms" Mar 14 00:15:04.234852 containerd[1484]: time="2026-03-14T00:15:04.234625470Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 478.363003ms" Mar 14 00:15:04.257025 kubelet[2252]: E0314 00:15:04.256976 2252 reflector.go:200] "Failed to watch" err="failed to list 
*v1.Node: Get \"https://168.119.153.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-c13e9e2860&limit=500&resourceVersion=0\": dial tcp 168.119.153.241:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:15:04.373721 containerd[1484]: time="2026-03-14T00:15:04.372991609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:04.373721 containerd[1484]: time="2026-03-14T00:15:04.373168369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:04.373721 containerd[1484]: time="2026-03-14T00:15:04.373237529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:04.373721 containerd[1484]: time="2026-03-14T00:15:04.372971049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:04.373721 containerd[1484]: time="2026-03-14T00:15:04.373058609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:04.373721 containerd[1484]: time="2026-03-14T00:15:04.373074089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:04.373721 containerd[1484]: time="2026-03-14T00:15:04.373161609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:04.378714 containerd[1484]: time="2026-03-14T00:15:04.373461369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:04.378828 containerd[1484]: time="2026-03-14T00:15:04.378653688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:04.378828 containerd[1484]: time="2026-03-14T00:15:04.378709408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:04.378828 containerd[1484]: time="2026-03-14T00:15:04.378729288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:04.378921 containerd[1484]: time="2026-03-14T00:15:04.378815128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:04.401697 systemd[1]: Started cri-containerd-a7e9cc1fed729c7ea85962eb4944806465a6aaa6d02b66d257dabeea5fae5d86.scope - libcontainer container a7e9cc1fed729c7ea85962eb4944806465a6aaa6d02b66d257dabeea5fae5d86. Mar 14 00:15:04.418798 systemd[1]: Started cri-containerd-4d606b482ac3741010711b738745f14f479e9c49b2cb152cc286b5b85f34ed25.scope - libcontainer container 4d606b482ac3741010711b738745f14f479e9c49b2cb152cc286b5b85f34ed25. Mar 14 00:15:04.422681 systemd[1]: Started cri-containerd-eecee0dd8629d029b1668643599e119532054de097ec188285c451d05b843bba.scope - libcontainer container eecee0dd8629d029b1668643599e119532054de097ec188285c451d05b843bba. 
Mar 14 00:15:04.475234 containerd[1484]: time="2026-03-14T00:15:04.475099833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-c13e9e2860,Uid:5f5acf374feab9cbf566c936ab3daefc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7e9cc1fed729c7ea85962eb4944806465a6aaa6d02b66d257dabeea5fae5d86\"" Mar 14 00:15:04.482927 containerd[1484]: time="2026-03-14T00:15:04.482818592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-c13e9e2860,Uid:42741dc7b319f1e0a0e9cfc197092d3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d606b482ac3741010711b738745f14f479e9c49b2cb152cc286b5b85f34ed25\"" Mar 14 00:15:04.487073 containerd[1484]: time="2026-03-14T00:15:04.486914031Z" level=info msg="CreateContainer within sandbox \"a7e9cc1fed729c7ea85962eb4944806465a6aaa6d02b66d257dabeea5fae5d86\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:15:04.488086 containerd[1484]: time="2026-03-14T00:15:04.487991071Z" level=info msg="CreateContainer within sandbox \"4d606b482ac3741010711b738745f14f479e9c49b2cb152cc286b5b85f34ed25\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:15:04.498457 containerd[1484]: time="2026-03-14T00:15:04.497985949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-c13e9e2860,Uid:866b5f3c6841f8fcd8a93be140891ea9,Namespace:kube-system,Attempt:0,} returns sandbox id \"eecee0dd8629d029b1668643599e119532054de097ec188285c451d05b843bba\"" Mar 14 00:15:04.502835 kubelet[2252]: E0314 00:15:04.502795 2252 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://168.119.153.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.153.241:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:15:04.503515 containerd[1484]: 
time="2026-03-14T00:15:04.503396189Z" level=info msg="CreateContainer within sandbox \"eecee0dd8629d029b1668643599e119532054de097ec188285c451d05b843bba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:15:04.506587 containerd[1484]: time="2026-03-14T00:15:04.506537108Z" level=info msg="CreateContainer within sandbox \"4d606b482ac3741010711b738745f14f479e9c49b2cb152cc286b5b85f34ed25\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"336c03d2fd1c85bd0f52272e3cb67fe5709c70033dbf61249153f917f7c1b282\"" Mar 14 00:15:04.507675 containerd[1484]: time="2026-03-14T00:15:04.507644708Z" level=info msg="StartContainer for \"336c03d2fd1c85bd0f52272e3cb67fe5709c70033dbf61249153f917f7c1b282\"" Mar 14 00:15:04.508809 containerd[1484]: time="2026-03-14T00:15:04.508641068Z" level=info msg="CreateContainer within sandbox \"a7e9cc1fed729c7ea85962eb4944806465a6aaa6d02b66d257dabeea5fae5d86\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fb94c35d97201d05dcb3da7d2531ff44bf4fb2a0364057e869e5a108fbfac25f\"" Mar 14 00:15:04.509402 containerd[1484]: time="2026-03-14T00:15:04.509369428Z" level=info msg="StartContainer for \"fb94c35d97201d05dcb3da7d2531ff44bf4fb2a0364057e869e5a108fbfac25f\"" Mar 14 00:15:04.524783 containerd[1484]: time="2026-03-14T00:15:04.524642865Z" level=info msg="CreateContainer within sandbox \"eecee0dd8629d029b1668643599e119532054de097ec188285c451d05b843bba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"089460e0c6234b8c490291d4a8ff2d721b2f51266aac5d74e26d9a4f7cf9d1e0\"" Mar 14 00:15:04.525723 containerd[1484]: time="2026-03-14T00:15:04.525448545Z" level=info msg="StartContainer for \"089460e0c6234b8c490291d4a8ff2d721b2f51266aac5d74e26d9a4f7cf9d1e0\"" Mar 14 00:15:04.529046 kubelet[2252]: E0314 00:15:04.528971 2252 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://168.119.153.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.153.241:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:15:04.542020 systemd[1]: Started cri-containerd-336c03d2fd1c85bd0f52272e3cb67fe5709c70033dbf61249153f917f7c1b282.scope - libcontainer container 336c03d2fd1c85bd0f52272e3cb67fe5709c70033dbf61249153f917f7c1b282. Mar 14 00:15:04.551216 systemd[1]: Started cri-containerd-fb94c35d97201d05dcb3da7d2531ff44bf4fb2a0364057e869e5a108fbfac25f.scope - libcontainer container fb94c35d97201d05dcb3da7d2531ff44bf4fb2a0364057e869e5a108fbfac25f. Mar 14 00:15:04.565929 systemd[1]: Started cri-containerd-089460e0c6234b8c490291d4a8ff2d721b2f51266aac5d74e26d9a4f7cf9d1e0.scope - libcontainer container 089460e0c6234b8c490291d4a8ff2d721b2f51266aac5d74e26d9a4f7cf9d1e0. Mar 14 00:15:04.617941 containerd[1484]: time="2026-03-14T00:15:04.617784091Z" level=info msg="StartContainer for \"089460e0c6234b8c490291d4a8ff2d721b2f51266aac5d74e26d9a4f7cf9d1e0\" returns successfully" Mar 14 00:15:04.626640 containerd[1484]: time="2026-03-14T00:15:04.625642810Z" level=info msg="StartContainer for \"336c03d2fd1c85bd0f52272e3cb67fe5709c70033dbf61249153f917f7c1b282\" returns successfully" Mar 14 00:15:04.626640 containerd[1484]: time="2026-03-14T00:15:04.625726050Z" level=info msg="StartContainer for \"fb94c35d97201d05dcb3da7d2531ff44bf4fb2a0364057e869e5a108fbfac25f\" returns successfully" Mar 14 00:15:04.696923 kubelet[2252]: E0314 00:15:04.696675 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.153.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-c13e9e2860?timeout=10s\": dial tcp 168.119.153.241:6443: connect: connection refused" interval="1.6s" Mar 14 00:15:04.867443 kubelet[2252]: I0314 00:15:04.867411 2252 kubelet_node_status.go:75] "Attempting 
to register node" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:05.332521 kubelet[2252]: E0314 00:15:05.331808 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:05.342507 kubelet[2252]: E0314 00:15:05.341470 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:05.348619 kubelet[2252]: E0314 00:15:05.348594 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:06.351177 kubelet[2252]: E0314 00:15:06.351134 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:06.352053 kubelet[2252]: E0314 00:15:06.351799 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:06.613754 kubelet[2252]: E0314 00:15:06.613387 2252 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-c13e9e2860\" not found" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:06.682224 kubelet[2252]: I0314 00:15:06.682014 2252 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:06.682224 kubelet[2252]: E0314 00:15:06.682056 2252 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-c13e9e2860\": node \"ci-4081-3-6-n-c13e9e2860\" not found" Mar 14 00:15:06.699611 kubelet[2252]: E0314 00:15:06.699579 2252 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" Mar 14 00:15:06.800079 kubelet[2252]: E0314 00:15:06.800038 2252 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" Mar 14 00:15:06.901350 kubelet[2252]: E0314 00:15:06.900746 2252 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" Mar 14 00:15:07.001124 kubelet[2252]: E0314 00:15:07.001077 2252 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" Mar 14 00:15:07.101588 kubelet[2252]: E0314 00:15:07.101542 2252 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" Mar 14 00:15:07.202536 kubelet[2252]: E0314 00:15:07.202156 2252 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-c13e9e2860\" not found" Mar 14 00:15:07.271300 kubelet[2252]: I0314 00:15:07.271054 2252 apiserver.go:52] "Watching apiserver" Mar 14 00:15:07.286630 kubelet[2252]: I0314 00:15:07.286589 2252 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:07.288955 kubelet[2252]: I0314 00:15:07.288917 2252 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:15:07.293683 kubelet[2252]: E0314 00:15:07.293653 2252 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-c13e9e2860\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:07.293683 kubelet[2252]: I0314 00:15:07.293682 2252 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:07.295441 kubelet[2252]: E0314 
00:15:07.295409 2252 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-c13e9e2860\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:07.295441 kubelet[2252]: I0314 00:15:07.295432 2252 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:07.297828 kubelet[2252]: E0314 00:15:07.297800 2252 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-c13e9e2860\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:07.349525 kubelet[2252]: I0314 00:15:07.348792 2252 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-c13e9e2860" Mar 14 00:15:09.593696 systemd[1]: Reloading requested from client PID 2539 ('systemctl') (unit session-7.scope)... Mar 14 00:15:09.593718 systemd[1]: Reloading... Mar 14 00:15:09.702527 zram_generator::config[2582]: No configuration found. Mar 14 00:15:09.804266 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:15:09.885852 systemd[1]: Reloading finished in 291 ms. Mar 14 00:15:09.925545 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:15:09.943479 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:15:09.943925 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:15:09.944025 systemd[1]: kubelet.service: Consumed 1.733s CPU time, 129.7M memory peak, 0B memory swap peak. Mar 14 00:15:09.954197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 14 00:15:10.094816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:15:10.094904 (kubelet)[2624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:15:10.142815 kubelet[2624]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:15:10.142815 kubelet[2624]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:15:10.142815 kubelet[2624]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:15:10.142815 kubelet[2624]: I0314 00:15:10.141352 2624 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:15:10.147194 kubelet[2624]: I0314 00:15:10.147161 2624 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 14 00:15:10.147194 kubelet[2624]: I0314 00:15:10.147186 2624 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:15:10.147418 kubelet[2624]: I0314 00:15:10.147377 2624 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:15:10.148735 kubelet[2624]: I0314 00:15:10.148704 2624 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:15:10.153255 kubelet[2624]: I0314 00:15:10.153044 2624 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:15:10.157649 kubelet[2624]: E0314 00:15:10.157615 2624 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:15:10.157649 kubelet[2624]: I0314 00:15:10.157642 2624 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:15:10.162281 kubelet[2624]: I0314 00:15:10.161027 2624 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 14 00:15:10.162281 kubelet[2624]: I0314 00:15:10.161234 2624 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:15:10.162281 kubelet[2624]: I0314 00:15:10.161261 2624 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-c13e9e2860","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:15:10.162281 kubelet[2624]: I0314 00:15:10.161409 2624 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:15:10.162721 kubelet[2624]: I0314 00:15:10.161418 2624 container_manager_linux.go:303] "Creating device plugin manager"
Mar 14 00:15:10.162721 kubelet[2624]: I0314 00:15:10.161470 2624 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:15:10.162721 kubelet[2624]: I0314 00:15:10.161649 2624 kubelet.go:480] "Attempting to sync node with API server"
Mar 14 00:15:10.162721 kubelet[2624]: I0314 00:15:10.161664 2624 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:15:10.162721 kubelet[2624]: I0314 00:15:10.161688 2624 kubelet.go:386] "Adding apiserver pod source"
Mar 14 00:15:10.162721 kubelet[2624]: I0314 00:15:10.161701 2624 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:15:10.167247 kubelet[2624]: I0314 00:15:10.167191 2624 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:15:10.168841 kubelet[2624]: I0314 00:15:10.168082 2624 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:15:10.173461 kubelet[2624]: I0314 00:15:10.173441 2624 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 14 00:15:10.173650 kubelet[2624]: I0314 00:15:10.173640 2624 server.go:1289] "Started kubelet"
Mar 14 00:15:10.179645 kubelet[2624]: I0314 00:15:10.179506 2624 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:15:10.186259 kubelet[2624]: I0314 00:15:10.185724 2624 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:15:10.189831 kubelet[2624]: I0314 00:15:10.189777 2624 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:15:10.196615 kubelet[2624]: I0314 00:15:10.196548 2624 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:15:10.196927 kubelet[2624]: I0314 00:15:10.196772 2624 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:15:10.197023 kubelet[2624]: I0314 00:15:10.197007 2624 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:15:10.199546 kubelet[2624]: I0314 00:15:10.197965 2624 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 14 00:15:10.199546 kubelet[2624]: E0314 00:15:10.198192 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-c13e9e2860\" not found"
Mar 14 00:15:10.199546 kubelet[2624]: I0314 00:15:10.198711 2624 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 14 00:15:10.199546 kubelet[2624]: I0314 00:15:10.198816 2624 reconciler.go:26] "Reconciler: start to sync state"
Mar 14 00:15:10.211443 kubelet[2624]: I0314 00:15:10.211396 2624 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:15:10.214157 kubelet[2624]: I0314 00:15:10.214134 2624 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:15:10.214280 kubelet[2624]: I0314 00:15:10.214270 2624 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:15:10.222729 kubelet[2624]: E0314 00:15:10.222705 2624 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:15:10.231418 kubelet[2624]: I0314 00:15:10.231112 2624 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:15:10.235367 kubelet[2624]: I0314 00:15:10.234596 2624 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:15:10.235367 kubelet[2624]: I0314 00:15:10.234622 2624 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 14 00:15:10.235367 kubelet[2624]: I0314 00:15:10.234647 2624 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:15:10.235367 kubelet[2624]: I0314 00:15:10.234654 2624 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 14 00:15:10.235367 kubelet[2624]: E0314 00:15:10.234692 2624 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:15:10.271509 kubelet[2624]: I0314 00:15:10.271457 2624 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 14 00:15:10.271509 kubelet[2624]: I0314 00:15:10.271474 2624 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 14 00:15:10.271509 kubelet[2624]: I0314 00:15:10.271506 2624 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:15:10.271750 kubelet[2624]: I0314 00:15:10.271635 2624 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 14 00:15:10.271750 kubelet[2624]: I0314 00:15:10.271648 2624 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 14 00:15:10.271750 kubelet[2624]: I0314 00:15:10.271663 2624 policy_none.go:49] "None policy: Start"
Mar 14 00:15:10.271750 kubelet[2624]: I0314 00:15:10.271671 2624 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 14 00:15:10.271750 kubelet[2624]: I0314 00:15:10.271679 2624 state_mem.go:35] "Initializing new in-memory state store"
Mar 14 00:15:10.272026 kubelet[2624]: I0314 00:15:10.271758 2624 state_mem.go:75] "Updated machine memory state"
Mar 14 00:15:10.276235 kubelet[2624]: E0314 00:15:10.276208 2624 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:15:10.276718 kubelet[2624]: I0314 00:15:10.276366 2624 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 14 00:15:10.276718 kubelet[2624]: I0314 00:15:10.276382 2624 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:15:10.276718 kubelet[2624]: I0314 00:15:10.276625 2624 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 14 00:15:10.279630 kubelet[2624]: E0314 00:15:10.279586 2624 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:15:10.336098 kubelet[2624]: I0314 00:15:10.336019 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.336822 kubelet[2624]: I0314 00:15:10.336214 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.336822 kubelet[2624]: I0314 00:15:10.336682 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.347368 kubelet[2624]: E0314 00:15:10.347278 2624 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-c13e9e2860\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.379802 kubelet[2624]: I0314 00:15:10.379771 2624 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.398941 kubelet[2624]: I0314 00:15:10.398739 2624 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.398941 kubelet[2624]: I0314 00:15:10.398883 2624 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.401154 kubelet[2624]: I0314 00:15:10.399885 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f5acf374feab9cbf566c936ab3daefc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-c13e9e2860\" (UID: \"5f5acf374feab9cbf566c936ab3daefc\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.401154 kubelet[2624]: I0314 00:15:10.399921 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42741dc7b319f1e0a0e9cfc197092d3e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-c13e9e2860\" (UID: \"42741dc7b319f1e0a0e9cfc197092d3e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.401154 kubelet[2624]: I0314 00:15:10.400005 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42741dc7b319f1e0a0e9cfc197092d3e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-c13e9e2860\" (UID: \"42741dc7b319f1e0a0e9cfc197092d3e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.401154 kubelet[2624]: I0314 00:15:10.400044 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f5acf374feab9cbf566c936ab3daefc-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-c13e9e2860\" (UID: \"5f5acf374feab9cbf566c936ab3daefc\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.401154 kubelet[2624]: I0314 00:15:10.400095 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f5acf374feab9cbf566c936ab3daefc-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-c13e9e2860\" (UID: \"5f5acf374feab9cbf566c936ab3daefc\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.401342 kubelet[2624]: I0314 00:15:10.400113 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42741dc7b319f1e0a0e9cfc197092d3e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-c13e9e2860\" (UID: \"42741dc7b319f1e0a0e9cfc197092d3e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.401342 kubelet[2624]: I0314 00:15:10.400130 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42741dc7b319f1e0a0e9cfc197092d3e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-c13e9e2860\" (UID: \"42741dc7b319f1e0a0e9cfc197092d3e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.401342 kubelet[2624]: I0314 00:15:10.400147 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42741dc7b319f1e0a0e9cfc197092d3e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-c13e9e2860\" (UID: \"42741dc7b319f1e0a0e9cfc197092d3e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.401342 kubelet[2624]: I0314 00:15:10.400165 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/866b5f3c6841f8fcd8a93be140891ea9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-c13e9e2860\" (UID: \"866b5f3c6841f8fcd8a93be140891ea9\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:10.592979 sudo[2661]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 14 00:15:10.593288 sudo[2661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 14 00:15:11.088781 sudo[2661]: pam_unix(sudo:session): session closed for user root
Mar 14 00:15:11.164879 kubelet[2624]: I0314 00:15:11.164841 2624 apiserver.go:52] "Watching apiserver"
Mar 14 00:15:11.199832 kubelet[2624]: I0314 00:15:11.199772 2624 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 14 00:15:11.251769 kubelet[2624]: I0314 00:15:11.251700 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:11.252150 kubelet[2624]: I0314 00:15:11.252123 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:11.261723 kubelet[2624]: E0314 00:15:11.261685 2624 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-c13e9e2860\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:11.262694 kubelet[2624]: E0314 00:15:11.262625 2624 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-c13e9e2860\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-c13e9e2860"
Mar 14 00:15:11.290899 kubelet[2624]: I0314 00:15:11.290688 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-c13e9e2860" podStartSLOduration=1.290671707 podStartE2EDuration="1.290671707s" podCreationTimestamp="2026-03-14 00:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:11.277655229 +0000 UTC m=+1.178723111" watchObservedRunningTime="2026-03-14 00:15:11.290671707 +0000 UTC m=+1.191739589"
Mar 14 00:15:11.290899 kubelet[2624]: I0314 00:15:11.290816 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-c13e9e2860" podStartSLOduration=4.290811107 podStartE2EDuration="4.290811107s" podCreationTimestamp="2026-03-14 00:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:11.290410627 +0000 UTC m=+1.191478509" watchObservedRunningTime="2026-03-14 00:15:11.290811107 +0000 UTC m=+1.191878989"
Mar 14 00:15:11.320895 kubelet[2624]: I0314 00:15:11.320294 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-c13e9e2860" podStartSLOduration=1.320066704 podStartE2EDuration="1.320066704s" podCreationTimestamp="2026-03-14 00:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:11.302224266 +0000 UTC m=+1.203292148" watchObservedRunningTime="2026-03-14 00:15:11.320066704 +0000 UTC m=+1.221134746"
Mar 14 00:15:13.246259 sudo[1760]: pam_unix(sudo:session): session closed for user root
Mar 14 00:15:13.340051 sshd[1755]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:13.347098 systemd[1]: sshd@9-168.119.153.241:22-68.220.241.50:47562.service: Deactivated successfully.
Mar 14 00:15:13.351377 systemd[1]: session-7.scope: Deactivated successfully.
Mar 14 00:15:13.353655 systemd[1]: session-7.scope: Consumed 8.174s CPU time, 154.1M memory peak, 0B memory swap peak.
Mar 14 00:15:13.354861 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
Mar 14 00:15:13.356770 systemd-logind[1452]: Removed session 7.
Mar 14 00:15:13.955716 kubelet[2624]: I0314 00:15:13.955669 2624 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 14 00:15:13.957355 containerd[1484]: time="2026-03-14T00:15:13.957292759Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 14 00:15:13.958686 kubelet[2624]: I0314 00:15:13.957638 2624 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 14 00:15:14.966576 kubelet[2624]: I0314 00:15:14.966310 2624 status_manager.go:895] "Failed to get status for pod" podUID="30a9224a-9df1-4d1c-9e70-f4d70fa9dd51" pod="kube-system/kube-proxy-p8xsx" err="pods \"kube-proxy-p8xsx\" is forbidden: User \"system:node:ci-4081-3-6-n-c13e9e2860\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-c13e9e2860' and this object"
Mar 14 00:15:14.966576 kubelet[2624]: E0314 00:15:14.966523 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4081-3-6-n-c13e9e2860\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-c13e9e2860' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
Mar 14 00:15:14.966911 kubelet[2624]: E0314 00:15:14.966683 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-6-n-c13e9e2860\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-c13e9e2860' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Mar 14 00:15:14.968122 systemd[1]: Created slice kubepods-besteffort-pod30a9224a_9df1_4d1c_9e70_f4d70fa9dd51.slice - libcontainer container kubepods-besteffort-pod30a9224a_9df1_4d1c_9e70_f4d70fa9dd51.slice.
Mar 14 00:15:14.985393 systemd[1]: Created slice kubepods-burstable-pod45336a9c_2c47_4ea2_91a1_ceecce85a2d1.slice - libcontainer container kubepods-burstable-pod45336a9c_2c47_4ea2_91a1_ceecce85a2d1.slice.
Mar 14 00:15:15.033025 kubelet[2624]: I0314 00:15:15.032981 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30a9224a-9df1-4d1c-9e70-f4d70fa9dd51-lib-modules\") pod \"kube-proxy-p8xsx\" (UID: \"30a9224a-9df1-4d1c-9e70-f4d70fa9dd51\") " pod="kube-system/kube-proxy-p8xsx"
Mar 14 00:15:15.033025 kubelet[2624]: I0314 00:15:15.033020 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx4m4\" (UniqueName: \"kubernetes.io/projected/30a9224a-9df1-4d1c-9e70-f4d70fa9dd51-kube-api-access-gx4m4\") pod \"kube-proxy-p8xsx\" (UID: \"30a9224a-9df1-4d1c-9e70-f4d70fa9dd51\") " pod="kube-system/kube-proxy-p8xsx"
Mar 14 00:15:15.033191 kubelet[2624]: I0314 00:15:15.033041 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cilium-run\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.033191 kubelet[2624]: I0314 00:15:15.033065 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cilium-cgroup\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.033191 kubelet[2624]: I0314 00:15:15.033079 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-etc-cni-netd\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.033191 kubelet[2624]: I0314 00:15:15.033093 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-xtables-lock\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.033191 kubelet[2624]: I0314 00:15:15.033110 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30a9224a-9df1-4d1c-9e70-f4d70fa9dd51-xtables-lock\") pod \"kube-proxy-p8xsx\" (UID: \"30a9224a-9df1-4d1c-9e70-f4d70fa9dd51\") " pod="kube-system/kube-proxy-p8xsx"
Mar 14 00:15:15.033191 kubelet[2624]: I0314 00:15:15.033125 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-hostproc\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.033331 kubelet[2624]: I0314 00:15:15.033138 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-clustermesh-secrets\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.033331 kubelet[2624]: I0314 00:15:15.033151 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cilium-config-path\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.033331 kubelet[2624]: I0314 00:15:15.033164 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-bpf-maps\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.033331 kubelet[2624]: I0314 00:15:15.033177 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cni-path\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.033331 kubelet[2624]: I0314 00:15:15.033190 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-lib-modules\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.033331 kubelet[2624]: I0314 00:15:15.033205 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-host-proc-sys-net\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.033452 kubelet[2624]: I0314 00:15:15.033226 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-host-proc-sys-kernel\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.033452 kubelet[2624]: I0314 00:15:15.033241 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/30a9224a-9df1-4d1c-9e70-f4d70fa9dd51-kube-proxy\") pod \"kube-proxy-p8xsx\" (UID: \"30a9224a-9df1-4d1c-9e70-f4d70fa9dd51\") " pod="kube-system/kube-proxy-p8xsx"
Mar 14 00:15:15.033452 kubelet[2624]: I0314 00:15:15.033256 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-hubble-tls\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.033452 kubelet[2624]: I0314 00:15:15.033274 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx486\" (UniqueName: \"kubernetes.io/projected/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-kube-api-access-tx486\") pod \"cilium-xpxdx\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") " pod="kube-system/cilium-xpxdx"
Mar 14 00:15:15.173557 systemd[1]: Created slice kubepods-besteffort-pode9815b12_f48c_4978_9810_3e75217b21ac.slice - libcontainer container kubepods-besteffort-pode9815b12_f48c_4978_9810_3e75217b21ac.slice.
Mar 14 00:15:15.235181 kubelet[2624]: I0314 00:15:15.235007 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvsjk\" (UniqueName: \"kubernetes.io/projected/e9815b12-f48c-4978-9810-3e75217b21ac-kube-api-access-fvsjk\") pod \"cilium-operator-6c4d7847fc-7l4vz\" (UID: \"e9815b12-f48c-4978-9810-3e75217b21ac\") " pod="kube-system/cilium-operator-6c4d7847fc-7l4vz"
Mar 14 00:15:15.235181 kubelet[2624]: I0314 00:15:15.235117 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9815b12-f48c-4978-9810-3e75217b21ac-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7l4vz\" (UID: \"e9815b12-f48c-4978-9810-3e75217b21ac\") " pod="kube-system/cilium-operator-6c4d7847fc-7l4vz"
Mar 14 00:15:16.079289 containerd[1484]: time="2026-03-14T00:15:16.078686330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7l4vz,Uid:e9815b12-f48c-4978-9810-3e75217b21ac,Namespace:kube-system,Attempt:0,}"
Mar 14 00:15:16.105371 containerd[1484]: time="2026-03-14T00:15:16.104912928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:15:16.105371 containerd[1484]: time="2026-03-14T00:15:16.105099488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:15:16.105700 containerd[1484]: time="2026-03-14T00:15:16.105213688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:16.105969 containerd[1484]: time="2026-03-14T00:15:16.105809688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:16.129808 systemd[1]: Started cri-containerd-c8d759f4cc6a6c26ec9fb3fd58788ae9e89ad8e6c252e250448d0f465930d3d1.scope - libcontainer container c8d759f4cc6a6c26ec9fb3fd58788ae9e89ad8e6c252e250448d0f465930d3d1.
Mar 14 00:15:16.136035 kubelet[2624]: E0314 00:15:16.135976 2624 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 14 00:15:16.136362 kubelet[2624]: E0314 00:15:16.136121 2624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/30a9224a-9df1-4d1c-9e70-f4d70fa9dd51-kube-proxy podName:30a9224a-9df1-4d1c-9e70-f4d70fa9dd51 nodeName:}" failed. No retries permitted until 2026-03-14 00:15:16.636086285 +0000 UTC m=+6.537154207 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/30a9224a-9df1-4d1c-9e70-f4d70fa9dd51-kube-proxy") pod "kube-proxy-p8xsx" (UID: "30a9224a-9df1-4d1c-9e70-f4d70fa9dd51") : failed to sync configmap cache: timed out waiting for the condition
Mar 14 00:15:16.171163 containerd[1484]: time="2026-03-14T00:15:16.170430282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7l4vz,Uid:e9815b12-f48c-4978-9810-3e75217b21ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8d759f4cc6a6c26ec9fb3fd58788ae9e89ad8e6c252e250448d0f465930d3d1\""
Mar 14 00:15:16.173558 containerd[1484]: time="2026-03-14T00:15:16.173430762Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 14 00:15:16.189334 containerd[1484]: time="2026-03-14T00:15:16.189225601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpxdx,Uid:45336a9c-2c47-4ea2-91a1-ceecce85a2d1,Namespace:kube-system,Attempt:0,}"
Mar 14 00:15:16.212714 containerd[1484]: time="2026-03-14T00:15:16.212561799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:15:16.213600 containerd[1484]: time="2026-03-14T00:15:16.213283199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:15:16.213600 containerd[1484]: time="2026-03-14T00:15:16.213376559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:16.213854 containerd[1484]: time="2026-03-14T00:15:16.213802919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:16.234706 systemd[1]: Started cri-containerd-fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12.scope - libcontainer container fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12.
Mar 14 00:15:16.260165 systemd[1]: Started sshd@10-168.119.153.241:22-178.208.94.76:39150.service - OpenSSH per-connection server daemon (178.208.94.76:39150).
Mar 14 00:15:16.278900 containerd[1484]: time="2026-03-14T00:15:16.278830073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpxdx,Uid:45336a9c-2c47-4ea2-91a1-ceecce85a2d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\""
Mar 14 00:15:16.580180 sshd[2780]: Connection closed by authenticating user root 178.208.94.76 port 39150 [preauth]
Mar 14 00:15:16.583403 systemd[1]: sshd@10-168.119.153.241:22-178.208.94.76:39150.service: Deactivated successfully.
Mar 14 00:15:16.779479 containerd[1484]: time="2026-03-14T00:15:16.779401752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p8xsx,Uid:30a9224a-9df1-4d1c-9e70-f4d70fa9dd51,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:16.802296 containerd[1484]: time="2026-03-14T00:15:16.801692310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:16.802296 containerd[1484]: time="2026-03-14T00:15:16.801775950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:16.802296 containerd[1484]: time="2026-03-14T00:15:16.801790710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:16.802296 containerd[1484]: time="2026-03-14T00:15:16.801862510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:16.821836 systemd[1]: Started cri-containerd-6cea8e4c4cb3ced2f6c37033a28c98fc6d9372f8c1b937120397e7d9100b9fdf.scope - libcontainer container 6cea8e4c4cb3ced2f6c37033a28c98fc6d9372f8c1b937120397e7d9100b9fdf. 
Mar 14 00:15:16.845875 containerd[1484]: time="2026-03-14T00:15:16.845205586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p8xsx,Uid:30a9224a-9df1-4d1c-9e70-f4d70fa9dd51,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cea8e4c4cb3ced2f6c37033a28c98fc6d9372f8c1b937120397e7d9100b9fdf\"" Mar 14 00:15:16.851101 containerd[1484]: time="2026-03-14T00:15:16.850878346Z" level=info msg="CreateContainer within sandbox \"6cea8e4c4cb3ced2f6c37033a28c98fc6d9372f8c1b937120397e7d9100b9fdf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:15:16.863277 containerd[1484]: time="2026-03-14T00:15:16.863208625Z" level=info msg="CreateContainer within sandbox \"6cea8e4c4cb3ced2f6c37033a28c98fc6d9372f8c1b937120397e7d9100b9fdf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8741e2cb03f95bdae1f53d68f25bfec51a1bc08e6a4c2cc648502287e2f0f760\"" Mar 14 00:15:16.865739 containerd[1484]: time="2026-03-14T00:15:16.865599025Z" level=info msg="StartContainer for \"8741e2cb03f95bdae1f53d68f25bfec51a1bc08e6a4c2cc648502287e2f0f760\"" Mar 14 00:15:16.899837 systemd[1]: Started cri-containerd-8741e2cb03f95bdae1f53d68f25bfec51a1bc08e6a4c2cc648502287e2f0f760.scope - libcontainer container 8741e2cb03f95bdae1f53d68f25bfec51a1bc08e6a4c2cc648502287e2f0f760. 
Mar 14 00:15:16.930827 containerd[1484]: time="2026-03-14T00:15:16.930683859Z" level=info msg="StartContainer for \"8741e2cb03f95bdae1f53d68f25bfec51a1bc08e6a4c2cc648502287e2f0f760\" returns successfully" Mar 14 00:15:17.651361 kubelet[2624]: I0314 00:15:17.649889 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p8xsx" podStartSLOduration=3.649873322 podStartE2EDuration="3.649873322s" podCreationTimestamp="2026-03-14 00:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:17.288332151 +0000 UTC m=+7.189400033" watchObservedRunningTime="2026-03-14 00:15:17.649873322 +0000 UTC m=+7.550941204" Mar 14 00:15:17.762229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount309388437.mount: Deactivated successfully. Mar 14 00:15:18.108198 containerd[1484]: time="2026-03-14T00:15:18.108119367Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:18.110307 containerd[1484]: time="2026-03-14T00:15:18.109947247Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 14 00:15:18.111692 containerd[1484]: time="2026-03-14T00:15:18.111157486Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:18.113885 containerd[1484]: time="2026-03-14T00:15:18.113624046Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", 
repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.940130244s" Mar 14 00:15:18.113885 containerd[1484]: time="2026-03-14T00:15:18.113689566Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 14 00:15:18.116431 containerd[1484]: time="2026-03-14T00:15:18.115663046Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 14 00:15:18.119968 containerd[1484]: time="2026-03-14T00:15:18.119889566Z" level=info msg="CreateContainer within sandbox \"c8d759f4cc6a6c26ec9fb3fd58788ae9e89ad8e6c252e250448d0f465930d3d1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 14 00:15:18.138197 containerd[1484]: time="2026-03-14T00:15:18.138086764Z" level=info msg="CreateContainer within sandbox \"c8d759f4cc6a6c26ec9fb3fd58788ae9e89ad8e6c252e250448d0f465930d3d1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\"" Mar 14 00:15:18.139648 containerd[1484]: time="2026-03-14T00:15:18.138969764Z" level=info msg="StartContainer for \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\"" Mar 14 00:15:18.171751 systemd[1]: Started cri-containerd-83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361.scope - libcontainer container 83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361. 
Mar 14 00:15:18.202863 containerd[1484]: time="2026-03-14T00:15:18.202785920Z" level=info msg="StartContainer for \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\" returns successfully" Mar 14 00:15:18.308506 kubelet[2624]: I0314 00:15:18.308298 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7l4vz" podStartSLOduration=1.3662309879999999 podStartE2EDuration="3.308197632s" podCreationTimestamp="2026-03-14 00:15:15 +0000 UTC" firstStartedPulling="2026-03-14 00:15:16.172967722 +0000 UTC m=+6.074035564" lastFinishedPulling="2026-03-14 00:15:18.114934326 +0000 UTC m=+8.016002208" observedRunningTime="2026-03-14 00:15:18.307698672 +0000 UTC m=+8.208766554" watchObservedRunningTime="2026-03-14 00:15:18.308197632 +0000 UTC m=+8.209265514" Mar 14 00:15:21.775783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2330374776.mount: Deactivated successfully. Mar 14 00:15:23.234794 containerd[1484]: time="2026-03-14T00:15:23.234729457Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:23.236417 containerd[1484]: time="2026-03-14T00:15:23.236344937Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 14 00:15:23.237527 containerd[1484]: time="2026-03-14T00:15:23.236822897Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:23.239060 containerd[1484]: time="2026-03-14T00:15:23.239026817Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.123324211s" Mar 14 00:15:23.239149 containerd[1484]: time="2026-03-14T00:15:23.239134377Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 14 00:15:23.244861 containerd[1484]: time="2026-03-14T00:15:23.244829776Z" level=info msg="CreateContainer within sandbox \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:15:23.256437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3971964355.mount: Deactivated successfully. Mar 14 00:15:23.258575 containerd[1484]: time="2026-03-14T00:15:23.258461095Z" level=info msg="CreateContainer within sandbox \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d\"" Mar 14 00:15:23.260090 containerd[1484]: time="2026-03-14T00:15:23.260058495Z" level=info msg="StartContainer for \"7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d\"" Mar 14 00:15:23.294680 systemd[1]: Started cri-containerd-7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d.scope - libcontainer container 7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d. 
Mar 14 00:15:23.329541 containerd[1484]: time="2026-03-14T00:15:23.329316851Z" level=info msg="StartContainer for \"7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d\" returns successfully" Mar 14 00:15:23.341003 systemd[1]: cri-containerd-7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d.scope: Deactivated successfully. Mar 14 00:15:23.453853 containerd[1484]: time="2026-03-14T00:15:23.453754244Z" level=info msg="shim disconnected" id=7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d namespace=k8s.io Mar 14 00:15:23.453853 containerd[1484]: time="2026-03-14T00:15:23.453820884Z" level=warning msg="cleaning up after shim disconnected" id=7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d namespace=k8s.io Mar 14 00:15:23.453853 containerd[1484]: time="2026-03-14T00:15:23.453834604Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:24.256724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d-rootfs.mount: Deactivated successfully. 
Mar 14 00:15:24.304252 containerd[1484]: time="2026-03-14T00:15:24.304199433Z" level=info msg="CreateContainer within sandbox \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:15:24.316583 containerd[1484]: time="2026-03-14T00:15:24.316437593Z" level=info msg="CreateContainer within sandbox \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372\"" Mar 14 00:15:24.317998 containerd[1484]: time="2026-03-14T00:15:24.317668473Z" level=info msg="StartContainer for \"2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372\"" Mar 14 00:15:24.349987 systemd[1]: run-containerd-runc-k8s.io-2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372-runc.P8caBE.mount: Deactivated successfully. Mar 14 00:15:24.359850 systemd[1]: Started cri-containerd-2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372.scope - libcontainer container 2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372. Mar 14 00:15:24.391836 containerd[1484]: time="2026-03-14T00:15:24.390837668Z" level=info msg="StartContainer for \"2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372\" returns successfully" Mar 14 00:15:24.405290 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:15:24.405569 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:15:24.405643 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:15:24.414578 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:15:24.414776 systemd[1]: cri-containerd-2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372.scope: Deactivated successfully. 
Mar 14 00:15:24.435528 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:15:24.450513 containerd[1484]: time="2026-03-14T00:15:24.450293985Z" level=info msg="shim disconnected" id=2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372 namespace=k8s.io Mar 14 00:15:24.450513 containerd[1484]: time="2026-03-14T00:15:24.450357145Z" level=warning msg="cleaning up after shim disconnected" id=2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372 namespace=k8s.io Mar 14 00:15:24.450513 containerd[1484]: time="2026-03-14T00:15:24.450369345Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:25.255901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372-rootfs.mount: Deactivated successfully. Mar 14 00:15:25.305413 containerd[1484]: time="2026-03-14T00:15:25.305142856Z" level=info msg="CreateContainer within sandbox \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 14 00:15:25.334110 containerd[1484]: time="2026-03-14T00:15:25.333935975Z" level=info msg="CreateContainer within sandbox \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858\"" Mar 14 00:15:25.339955 containerd[1484]: time="2026-03-14T00:15:25.339879854Z" level=info msg="StartContainer for \"047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858\"" Mar 14 00:15:25.378060 systemd[1]: Started cri-containerd-047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858.scope - libcontainer container 047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858. 
Mar 14 00:15:25.413282 containerd[1484]: time="2026-03-14T00:15:25.412789410Z" level=info msg="StartContainer for \"047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858\" returns successfully" Mar 14 00:15:25.413827 systemd[1]: cri-containerd-047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858.scope: Deactivated successfully. Mar 14 00:15:25.441574 containerd[1484]: time="2026-03-14T00:15:25.441471649Z" level=info msg="shim disconnected" id=047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858 namespace=k8s.io Mar 14 00:15:25.441574 containerd[1484]: time="2026-03-14T00:15:25.441573049Z" level=warning msg="cleaning up after shim disconnected" id=047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858 namespace=k8s.io Mar 14 00:15:25.441783 containerd[1484]: time="2026-03-14T00:15:25.441587369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:26.256739 systemd[1]: run-containerd-runc-k8s.io-047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858-runc.ZcErW7.mount: Deactivated successfully. Mar 14 00:15:26.256974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858-rootfs.mount: Deactivated successfully. 
Mar 14 00:15:26.311297 containerd[1484]: time="2026-03-14T00:15:26.310268761Z" level=info msg="CreateContainer within sandbox \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 14 00:15:26.336144 containerd[1484]: time="2026-03-14T00:15:26.336013160Z" level=info msg="CreateContainer within sandbox \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4\"" Mar 14 00:15:26.336676 containerd[1484]: time="2026-03-14T00:15:26.336644440Z" level=info msg="StartContainer for \"e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4\"" Mar 14 00:15:26.369795 systemd[1]: Started cri-containerd-e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4.scope - libcontainer container e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4. Mar 14 00:15:26.393073 systemd[1]: cri-containerd-e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4.scope: Deactivated successfully. 
Mar 14 00:15:26.395899 containerd[1484]: time="2026-03-14T00:15:26.394380757Z" level=info msg="StartContainer for \"e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4\" returns successfully" Mar 14 00:15:26.416626 containerd[1484]: time="2026-03-14T00:15:26.416482356Z" level=info msg="shim disconnected" id=e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4 namespace=k8s.io Mar 14 00:15:26.416626 containerd[1484]: time="2026-03-14T00:15:26.416623156Z" level=warning msg="cleaning up after shim disconnected" id=e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4 namespace=k8s.io Mar 14 00:15:26.416818 containerd[1484]: time="2026-03-14T00:15:26.416642396Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:26.428202 containerd[1484]: time="2026-03-14T00:15:26.428151795Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:15:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:15:27.256057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4-rootfs.mount: Deactivated successfully. 
Mar 14 00:15:27.316001 containerd[1484]: time="2026-03-14T00:15:27.315940109Z" level=info msg="CreateContainer within sandbox \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 14 00:15:27.340624 containerd[1484]: time="2026-03-14T00:15:27.338187148Z" level=info msg="CreateContainer within sandbox \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\"" Mar 14 00:15:27.340624 containerd[1484]: time="2026-03-14T00:15:27.339682987Z" level=info msg="StartContainer for \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\"" Mar 14 00:15:27.372707 systemd[1]: Started cri-containerd-dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186.scope - libcontainer container dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186. Mar 14 00:15:27.402548 containerd[1484]: time="2026-03-14T00:15:27.402324584Z" level=info msg="StartContainer for \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\" returns successfully" Mar 14 00:15:27.532061 kubelet[2624]: I0314 00:15:27.531457 2624 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 14 00:15:27.591081 systemd[1]: Created slice kubepods-burstable-pod01e3f7d5_069c_47c4_aaf2_c43ef24a4b72.slice - libcontainer container kubepods-burstable-pod01e3f7d5_069c_47c4_aaf2_c43ef24a4b72.slice. Mar 14 00:15:27.600324 systemd[1]: Created slice kubepods-burstable-pod68305360_2d61_41c9_ada2_77d3a0fa3d4e.slice - libcontainer container kubepods-burstable-pod68305360_2d61_41c9_ada2_77d3a0fa3d4e.slice. 
Mar 14 00:15:27.625052 kubelet[2624]: I0314 00:15:27.624710 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdtr7\" (UniqueName: \"kubernetes.io/projected/01e3f7d5-069c-47c4-aaf2-c43ef24a4b72-kube-api-access-jdtr7\") pod \"coredns-674b8bbfcf-zwx4j\" (UID: \"01e3f7d5-069c-47c4-aaf2-c43ef24a4b72\") " pod="kube-system/coredns-674b8bbfcf-zwx4j" Mar 14 00:15:27.625052 kubelet[2624]: I0314 00:15:27.624814 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01e3f7d5-069c-47c4-aaf2-c43ef24a4b72-config-volume\") pod \"coredns-674b8bbfcf-zwx4j\" (UID: \"01e3f7d5-069c-47c4-aaf2-c43ef24a4b72\") " pod="kube-system/coredns-674b8bbfcf-zwx4j" Mar 14 00:15:27.625052 kubelet[2624]: I0314 00:15:27.624901 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtwv\" (UniqueName: \"kubernetes.io/projected/68305360-2d61-41c9-ada2-77d3a0fa3d4e-kube-api-access-fhtwv\") pod \"coredns-674b8bbfcf-xzn2n\" (UID: \"68305360-2d61-41c9-ada2-77d3a0fa3d4e\") " pod="kube-system/coredns-674b8bbfcf-xzn2n" Mar 14 00:15:27.625052 kubelet[2624]: I0314 00:15:27.624954 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68305360-2d61-41c9-ada2-77d3a0fa3d4e-config-volume\") pod \"coredns-674b8bbfcf-xzn2n\" (UID: \"68305360-2d61-41c9-ada2-77d3a0fa3d4e\") " pod="kube-system/coredns-674b8bbfcf-xzn2n" Mar 14 00:15:27.895155 containerd[1484]: time="2026-03-14T00:15:27.895110439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zwx4j,Uid:01e3f7d5-069c-47c4-aaf2-c43ef24a4b72,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:27.904881 containerd[1484]: time="2026-03-14T00:15:27.904302359Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-xzn2n,Uid:68305360-2d61-41c9-ada2-77d3a0fa3d4e,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:28.340993 kubelet[2624]: I0314 00:15:28.340893 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xpxdx" podStartSLOduration=7.3810900329999996 podStartE2EDuration="14.340822017s" podCreationTimestamp="2026-03-14 00:15:14 +0000 UTC" firstStartedPulling="2026-03-14 00:15:16.280524153 +0000 UTC m=+6.181592035" lastFinishedPulling="2026-03-14 00:15:23.240256137 +0000 UTC m=+13.141324019" observedRunningTime="2026-03-14 00:15:28.340188817 +0000 UTC m=+18.241256739" watchObservedRunningTime="2026-03-14 00:15:28.340822017 +0000 UTC m=+18.241889939" Mar 14 00:15:29.534859 systemd-networkd[1364]: cilium_host: Link UP Mar 14 00:15:29.536115 systemd-networkd[1364]: cilium_net: Link UP Mar 14 00:15:29.540537 systemd-networkd[1364]: cilium_net: Gained carrier Mar 14 00:15:29.540747 systemd-networkd[1364]: cilium_host: Gained carrier Mar 14 00:15:29.652975 systemd-networkd[1364]: cilium_vxlan: Link UP Mar 14 00:15:29.652981 systemd-networkd[1364]: cilium_vxlan: Gained carrier Mar 14 00:15:29.930619 kernel: NET: Registered PF_ALG protocol family Mar 14 00:15:29.967682 systemd-networkd[1364]: cilium_net: Gained IPv6LL Mar 14 00:15:30.375687 systemd-networkd[1364]: cilium_host: Gained IPv6LL Mar 14 00:15:30.667122 systemd-networkd[1364]: lxc_health: Link UP Mar 14 00:15:30.673886 systemd-networkd[1364]: lxc_health: Gained carrier Mar 14 00:15:30.948071 systemd-networkd[1364]: lxc9526259b2540: Link UP Mar 14 00:15:30.955522 kernel: eth0: renamed from tmpaf69d Mar 14 00:15:30.964865 systemd-networkd[1364]: lxc9526259b2540: Gained carrier Mar 14 00:15:30.976880 systemd-networkd[1364]: lxc741478b00084: Link UP Mar 14 00:15:30.985528 kernel: eth0: renamed from tmp6ba57 Mar 14 00:15:30.990049 systemd-networkd[1364]: lxc741478b00084: Gained carrier Mar 14 00:15:31.400599 systemd-networkd[1364]: 
cilium_vxlan: Gained IPv6LL Mar 14 00:15:32.551744 systemd-networkd[1364]: lxc9526259b2540: Gained IPv6LL Mar 14 00:15:32.616310 systemd-networkd[1364]: lxc_health: Gained IPv6LL Mar 14 00:15:32.872209 systemd-networkd[1364]: lxc741478b00084: Gained IPv6LL Mar 14 00:15:34.834614 containerd[1484]: time="2026-03-14T00:15:34.830026210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:34.834614 containerd[1484]: time="2026-03-14T00:15:34.830483610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:34.834614 containerd[1484]: time="2026-03-14T00:15:34.830531250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:34.834614 containerd[1484]: time="2026-03-14T00:15:34.830621130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:34.855809 containerd[1484]: time="2026-03-14T00:15:34.855368369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:34.855809 containerd[1484]: time="2026-03-14T00:15:34.855430449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:34.855809 containerd[1484]: time="2026-03-14T00:15:34.855582849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:34.857053 containerd[1484]: time="2026-03-14T00:15:34.856953169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:34.875254 systemd[1]: Started cri-containerd-6ba5759593727fa7c23802ba969b53ec0fb799c2439afd8f1ae02ac55c4ec840.scope - libcontainer container 6ba5759593727fa7c23802ba969b53ec0fb799c2439afd8f1ae02ac55c4ec840. Mar 14 00:15:34.895043 systemd[1]: Started cri-containerd-af69db3ce2ea98d0af3a5e5d90f01a1942ea12e755941aff4d0dd07e36cc166f.scope - libcontainer container af69db3ce2ea98d0af3a5e5d90f01a1942ea12e755941aff4d0dd07e36cc166f. Mar 14 00:15:34.956896 containerd[1484]: time="2026-03-14T00:15:34.956858485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xzn2n,Uid:68305360-2d61-41c9-ada2-77d3a0fa3d4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ba5759593727fa7c23802ba969b53ec0fb799c2439afd8f1ae02ac55c4ec840\"" Mar 14 00:15:34.966544 containerd[1484]: time="2026-03-14T00:15:34.966445005Z" level=info msg="CreateContainer within sandbox \"6ba5759593727fa7c23802ba969b53ec0fb799c2439afd8f1ae02ac55c4ec840\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:15:34.968395 containerd[1484]: time="2026-03-14T00:15:34.968361205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zwx4j,Uid:01e3f7d5-069c-47c4-aaf2-c43ef24a4b72,Namespace:kube-system,Attempt:0,} returns sandbox id \"af69db3ce2ea98d0af3a5e5d90f01a1942ea12e755941aff4d0dd07e36cc166f\"" Mar 14 00:15:34.973616 containerd[1484]: time="2026-03-14T00:15:34.973572925Z" level=info msg="CreateContainer within sandbox \"af69db3ce2ea98d0af3a5e5d90f01a1942ea12e755941aff4d0dd07e36cc166f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:15:34.988988 containerd[1484]: time="2026-03-14T00:15:34.988936964Z" level=info msg="CreateContainer within sandbox \"6ba5759593727fa7c23802ba969b53ec0fb799c2439afd8f1ae02ac55c4ec840\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ea1d9fc22f0e83cc5d27a78171247514c9887cd80ea1729dfb9b4b27fd44f8e\"" Mar 
14 00:15:34.991020 containerd[1484]: time="2026-03-14T00:15:34.990974124Z" level=info msg="StartContainer for \"4ea1d9fc22f0e83cc5d27a78171247514c9887cd80ea1729dfb9b4b27fd44f8e\"" Mar 14 00:15:34.992861 containerd[1484]: time="2026-03-14T00:15:34.992784764Z" level=info msg="CreateContainer within sandbox \"af69db3ce2ea98d0af3a5e5d90f01a1942ea12e755941aff4d0dd07e36cc166f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9634ffb8f35dbade4b34b8c38b7b8102576aa72168150f05f67d4b637f067fb8\"" Mar 14 00:15:34.995858 containerd[1484]: time="2026-03-14T00:15:34.995711404Z" level=info msg="StartContainer for \"9634ffb8f35dbade4b34b8c38b7b8102576aa72168150f05f67d4b637f067fb8\"" Mar 14 00:15:35.036709 systemd[1]: Started cri-containerd-4ea1d9fc22f0e83cc5d27a78171247514c9887cd80ea1729dfb9b4b27fd44f8e.scope - libcontainer container 4ea1d9fc22f0e83cc5d27a78171247514c9887cd80ea1729dfb9b4b27fd44f8e. Mar 14 00:15:35.044534 systemd[1]: Started cri-containerd-9634ffb8f35dbade4b34b8c38b7b8102576aa72168150f05f67d4b637f067fb8.scope - libcontainer container 9634ffb8f35dbade4b34b8c38b7b8102576aa72168150f05f67d4b637f067fb8. 
Mar 14 00:15:35.098613 containerd[1484]: time="2026-03-14T00:15:35.097697200Z" level=info msg="StartContainer for \"4ea1d9fc22f0e83cc5d27a78171247514c9887cd80ea1729dfb9b4b27fd44f8e\" returns successfully" Mar 14 00:15:35.102942 containerd[1484]: time="2026-03-14T00:15:35.102671240Z" level=info msg="StartContainer for \"9634ffb8f35dbade4b34b8c38b7b8102576aa72168150f05f67d4b637f067fb8\" returns successfully" Mar 14 00:15:35.355346 kubelet[2624]: I0314 00:15:35.353913 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xzn2n" podStartSLOduration=20.35384075 podStartE2EDuration="20.35384075s" podCreationTimestamp="2026-03-14 00:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:35.35311595 +0000 UTC m=+25.254184072" watchObservedRunningTime="2026-03-14 00:15:35.35384075 +0000 UTC m=+25.254908712" Mar 14 00:15:35.368902 kubelet[2624]: I0314 00:15:35.368820 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zwx4j" podStartSLOduration=20.368802709 podStartE2EDuration="20.368802709s" podCreationTimestamp="2026-03-14 00:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:35.366758669 +0000 UTC m=+25.267826551" watchObservedRunningTime="2026-03-14 00:15:35.368802709 +0000 UTC m=+25.269870591" Mar 14 00:15:35.851704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626218468.mount: Deactivated successfully. Mar 14 00:15:46.656902 systemd[1]: Started sshd@11-168.119.153.241:22-178.208.94.76:56398.service - OpenSSH per-connection server daemon (178.208.94.76:56398). 
Mar 14 00:15:46.968911 sshd[4018]: Connection closed by authenticating user root 178.208.94.76 port 56398 [preauth] Mar 14 00:15:46.971869 systemd[1]: sshd@11-168.119.153.241:22-178.208.94.76:56398.service: Deactivated successfully. Mar 14 00:16:17.050807 systemd[1]: Started sshd@12-168.119.153.241:22-178.208.94.76:50002.service - OpenSSH per-connection server daemon (178.208.94.76:50002). Mar 14 00:16:17.358346 sshd[4028]: Connection closed by authenticating user root 178.208.94.76 port 50002 [preauth] Mar 14 00:16:17.362193 systemd[1]: sshd@12-168.119.153.241:22-178.208.94.76:50002.service: Deactivated successfully. Mar 14 00:16:47.450054 systemd[1]: Started sshd@13-168.119.153.241:22-178.208.94.76:55418.service - OpenSSH per-connection server daemon (178.208.94.76:55418). Mar 14 00:16:47.753366 sshd[4037]: Connection closed by authenticating user root 178.208.94.76 port 55418 [preauth] Mar 14 00:16:47.757788 systemd[1]: sshd@13-168.119.153.241:22-178.208.94.76:55418.service: Deactivated successfully. Mar 14 00:17:29.005768 systemd[1]: Started sshd@14-168.119.153.241:22-68.220.241.50:45832.service - OpenSSH per-connection server daemon (68.220.241.50:45832). Mar 14 00:17:29.591623 sshd[4047]: Accepted publickey for core from 68.220.241.50 port 45832 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:17:29.593692 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:17:29.598725 systemd-logind[1452]: New session 8 of user core. Mar 14 00:17:29.604660 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 14 00:17:30.088261 sshd[4047]: pam_unix(sshd:session): session closed for user core Mar 14 00:17:30.092988 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:17:30.094422 systemd[1]: sshd@14-168.119.153.241:22-68.220.241.50:45832.service: Deactivated successfully. Mar 14 00:17:30.096423 systemd[1]: session-8.scope: Deactivated successfully. 
Mar 14 00:17:30.099708 systemd-logind[1452]: Removed session 8.
Mar 14 00:17:35.199544 systemd[1]: Started sshd@15-168.119.153.241:22-68.220.241.50:35846.service - OpenSSH per-connection server daemon (68.220.241.50:35846).
Mar 14 00:17:35.801073 sshd[4062]: Accepted publickey for core from 68.220.241.50 port 35846 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:35.803140 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:35.807452 systemd-logind[1452]: New session 9 of user core.
Mar 14 00:17:35.817773 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:17:36.288247 sshd[4062]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:36.294416 systemd[1]: sshd@15-168.119.153.241:22-68.220.241.50:35846.service: Deactivated successfully.
Mar 14 00:17:36.297340 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:17:36.298392 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:17:36.300554 systemd-logind[1452]: Removed session 9.
Mar 14 00:17:41.397777 systemd[1]: Started sshd@16-168.119.153.241:22-68.220.241.50:35860.service - OpenSSH per-connection server daemon (68.220.241.50:35860).
Mar 14 00:17:41.984597 sshd[4076]: Accepted publickey for core from 68.220.241.50 port 35860 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:41.988076 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:41.993192 systemd-logind[1452]: New session 10 of user core.
Mar 14 00:17:42.000795 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:17:42.475763 sshd[4076]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:42.482569 systemd[1]: sshd@16-168.119.153.241:22-68.220.241.50:35860.service: Deactivated successfully.
Mar 14 00:17:42.482596 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:17:42.484549 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:17:42.487308 systemd-logind[1452]: Removed session 10.
Mar 14 00:17:42.585845 systemd[1]: Started sshd@17-168.119.153.241:22-68.220.241.50:35268.service - OpenSSH per-connection server daemon (68.220.241.50:35268).
Mar 14 00:17:43.168927 sshd[4090]: Accepted publickey for core from 68.220.241.50 port 35268 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:43.171934 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:43.179857 systemd-logind[1452]: New session 11 of user core.
Mar 14 00:17:43.183780 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:17:43.690792 sshd[4090]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:43.695138 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:17:43.695556 systemd[1]: sshd@17-168.119.153.241:22-68.220.241.50:35268.service: Deactivated successfully.
Mar 14 00:17:43.699284 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:17:43.700524 systemd-logind[1452]: Removed session 11.
Mar 14 00:17:43.799237 systemd[1]: Started sshd@18-168.119.153.241:22-68.220.241.50:35272.service - OpenSSH per-connection server daemon (68.220.241.50:35272).
Mar 14 00:17:44.393791 sshd[4101]: Accepted publickey for core from 68.220.241.50 port 35272 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:44.396371 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:44.402742 systemd-logind[1452]: New session 12 of user core.
Mar 14 00:17:44.406734 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:17:44.882164 sshd[4101]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:44.887061 systemd[1]: sshd@18-168.119.153.241:22-68.220.241.50:35272.service: Deactivated successfully.
Mar 14 00:17:44.889829 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:17:44.891701 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:17:44.893144 systemd-logind[1452]: Removed session 12.
Mar 14 00:17:49.991318 systemd[1]: Started sshd@19-168.119.153.241:22-68.220.241.50:35278.service - OpenSSH per-connection server daemon (68.220.241.50:35278).
Mar 14 00:17:50.586552 sshd[4116]: Accepted publickey for core from 68.220.241.50 port 35278 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:50.588299 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:50.596892 systemd-logind[1452]: New session 13 of user core.
Mar 14 00:17:50.601648 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:17:51.077041 sshd[4116]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:51.083457 systemd[1]: sshd@19-168.119.153.241:22-68.220.241.50:35278.service: Deactivated successfully.
Mar 14 00:17:51.085799 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:17:51.086794 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:17:51.088015 systemd-logind[1452]: Removed session 13.
Mar 14 00:17:56.194951 systemd[1]: Started sshd@20-168.119.153.241:22-68.220.241.50:39362.service - OpenSSH per-connection server daemon (68.220.241.50:39362).
Mar 14 00:17:56.797544 sshd[4129]: Accepted publickey for core from 68.220.241.50 port 39362 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:56.799716 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:56.805771 systemd-logind[1452]: New session 14 of user core.
Mar 14 00:17:56.812726 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:17:57.299161 sshd[4129]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:57.304905 systemd[1]: sshd@20-168.119.153.241:22-68.220.241.50:39362.service: Deactivated successfully.
Mar 14 00:17:57.306720 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:17:57.307427 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:17:57.308987 systemd-logind[1452]: Removed session 14.
Mar 14 00:17:57.409952 systemd[1]: Started sshd@21-168.119.153.241:22-68.220.241.50:39372.service - OpenSSH per-connection server daemon (68.220.241.50:39372).
Mar 14 00:17:57.995313 sshd[4142]: Accepted publickey for core from 68.220.241.50 port 39372 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:57.998527 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:58.005993 systemd-logind[1452]: New session 15 of user core.
Mar 14 00:17:58.010837 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:17:58.529001 sshd[4142]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:58.534567 systemd[1]: sshd@21-168.119.153.241:22-68.220.241.50:39372.service: Deactivated successfully.
Mar 14 00:17:58.537362 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:17:58.539197 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:17:58.540239 systemd-logind[1452]: Removed session 15.
Mar 14 00:17:58.641967 systemd[1]: Started sshd@22-168.119.153.241:22-68.220.241.50:39388.service - OpenSSH per-connection server daemon (68.220.241.50:39388).
Mar 14 00:17:59.229533 sshd[4153]: Accepted publickey for core from 68.220.241.50 port 39388 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:59.230756 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:59.236244 systemd-logind[1452]: New session 16 of user core.
Mar 14 00:17:59.245813 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:18:00.340781 sshd[4153]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:00.346227 systemd[1]: sshd@22-168.119.153.241:22-68.220.241.50:39388.service: Deactivated successfully.
Mar 14 00:18:00.349993 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:18:00.351251 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:18:00.352534 systemd-logind[1452]: Removed session 16.
Mar 14 00:18:00.448851 systemd[1]: Started sshd@23-168.119.153.241:22-68.220.241.50:39390.service - OpenSSH per-connection server daemon (68.220.241.50:39390).
Mar 14 00:18:01.037533 sshd[4171]: Accepted publickey for core from 68.220.241.50 port 39390 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:18:01.039345 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:01.046380 systemd-logind[1452]: New session 17 of user core.
Mar 14 00:18:01.054895 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:18:01.638626 sshd[4171]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:01.646331 systemd[1]: sshd@23-168.119.153.241:22-68.220.241.50:39390.service: Deactivated successfully.
Mar 14 00:18:01.648591 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:18:01.651851 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:18:01.653485 systemd-logind[1452]: Removed session 17.
Mar 14 00:18:01.749930 systemd[1]: Started sshd@24-168.119.153.241:22-68.220.241.50:39396.service - OpenSSH per-connection server daemon (68.220.241.50:39396).
Mar 14 00:18:02.339253 sshd[4182]: Accepted publickey for core from 68.220.241.50 port 39396 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:18:02.340664 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:02.346298 systemd-logind[1452]: New session 18 of user core.
Mar 14 00:18:02.353734 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:18:02.824002 sshd[4182]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:02.829165 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:18:02.830090 systemd[1]: sshd@24-168.119.153.241:22-68.220.241.50:39396.service: Deactivated successfully.
Mar 14 00:18:02.832751 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:18:02.834647 systemd-logind[1452]: Removed session 18.
Mar 14 00:18:07.947306 systemd[1]: Started sshd@25-168.119.153.241:22-68.220.241.50:47216.service - OpenSSH per-connection server daemon (68.220.241.50:47216).
Mar 14 00:18:08.529804 sshd[4197]: Accepted publickey for core from 68.220.241.50 port 47216 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:18:08.532115 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:08.538392 systemd-logind[1452]: New session 19 of user core.
Mar 14 00:18:08.547738 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:18:09.023391 sshd[4197]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:09.029296 systemd[1]: sshd@25-168.119.153.241:22-68.220.241.50:47216.service: Deactivated successfully.
Mar 14 00:18:09.031023 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:18:09.033747 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:18:09.036484 systemd-logind[1452]: Removed session 19.
Mar 14 00:18:14.129801 systemd[1]: Started sshd@26-168.119.153.241:22-68.220.241.50:39150.service - OpenSSH per-connection server daemon (68.220.241.50:39150).
Mar 14 00:18:14.715723 sshd[4212]: Accepted publickey for core from 68.220.241.50 port 39150 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:18:14.717722 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:14.722535 systemd-logind[1452]: New session 20 of user core.
Mar 14 00:18:14.729005 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:18:15.199147 sshd[4212]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:15.204588 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:18:15.204825 systemd[1]: sshd@26-168.119.153.241:22-68.220.241.50:39150.service: Deactivated successfully.
Mar 14 00:18:15.208107 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:18:15.209260 systemd-logind[1452]: Removed session 20.
Mar 14 00:18:15.309890 systemd[1]: Started sshd@27-168.119.153.241:22-68.220.241.50:39152.service - OpenSSH per-connection server daemon (68.220.241.50:39152).
Mar 14 00:18:15.893073 sshd[4224]: Accepted publickey for core from 68.220.241.50 port 39152 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:18:15.895671 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:15.899848 systemd-logind[1452]: New session 21 of user core.
Mar 14 00:18:15.906829 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:18:17.900880 containerd[1484]: time="2026-03-14T00:18:17.900839267Z" level=info msg="StopContainer for \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\" with timeout 30 (s)"
Mar 14 00:18:17.904389 containerd[1484]: time="2026-03-14T00:18:17.903802196Z" level=info msg="Stop container \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\" with signal terminated"
Mar 14 00:18:17.923118 containerd[1484]: time="2026-03-14T00:18:17.923073577Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:18:17.926859 systemd[1]: cri-containerd-83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361.scope: Deactivated successfully.
Mar 14 00:18:17.934434 containerd[1484]: time="2026-03-14T00:18:17.934030212Z" level=info msg="StopContainer for \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\" with timeout 2 (s)"
Mar 14 00:18:17.934434 containerd[1484]: time="2026-03-14T00:18:17.934361373Z" level=info msg="Stop container \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\" with signal terminated"
Mar 14 00:18:17.943627 systemd-networkd[1364]: lxc_health: Link DOWN
Mar 14 00:18:17.943635 systemd-networkd[1364]: lxc_health: Lost carrier
Mar 14 00:18:17.961633 systemd[1]: cri-containerd-dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186.scope: Deactivated successfully.
Mar 14 00:18:17.962356 systemd[1]: cri-containerd-dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186.scope: Consumed 7.189s CPU time.
Mar 14 00:18:17.968187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361-rootfs.mount: Deactivated successfully.
Mar 14 00:18:17.977427 containerd[1484]: time="2026-03-14T00:18:17.976970348Z" level=info msg="shim disconnected" id=83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361 namespace=k8s.io
Mar 14 00:18:17.977427 containerd[1484]: time="2026-03-14T00:18:17.977032389Z" level=warning msg="cleaning up after shim disconnected" id=83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361 namespace=k8s.io
Mar 14 00:18:17.977427 containerd[1484]: time="2026-03-14T00:18:17.977043989Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:18:17.990925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186-rootfs.mount: Deactivated successfully.
Mar 14 00:18:17.995536 containerd[1484]: time="2026-03-14T00:18:17.995462047Z" level=info msg="shim disconnected" id=dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186 namespace=k8s.io
Mar 14 00:18:17.995536 containerd[1484]: time="2026-03-14T00:18:17.995531007Z" level=warning msg="cleaning up after shim disconnected" id=dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186 namespace=k8s.io
Mar 14 00:18:17.995536 containerd[1484]: time="2026-03-14T00:18:17.995540567Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:18:18.005164 containerd[1484]: time="2026-03-14T00:18:18.005108958Z" level=info msg="StopContainer for \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\" returns successfully"
Mar 14 00:18:18.007926 containerd[1484]: time="2026-03-14T00:18:18.005778240Z" level=info msg="StopPodSandbox for \"c8d759f4cc6a6c26ec9fb3fd58788ae9e89ad8e6c252e250448d0f465930d3d1\""
Mar 14 00:18:18.007926 containerd[1484]: time="2026-03-14T00:18:18.005824120Z" level=info msg="Container to stop \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:18:18.011641 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8d759f4cc6a6c26ec9fb3fd58788ae9e89ad8e6c252e250448d0f465930d3d1-shm.mount: Deactivated successfully.
Mar 14 00:18:18.014685 systemd[1]: cri-containerd-c8d759f4cc6a6c26ec9fb3fd58788ae9e89ad8e6c252e250448d0f465930d3d1.scope: Deactivated successfully.
Mar 14 00:18:18.019239 containerd[1484]: time="2026-03-14T00:18:18.019196162Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:18:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:18:18.025118 containerd[1484]: time="2026-03-14T00:18:18.024679619Z" level=info msg="StopContainer for \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\" returns successfully"
Mar 14 00:18:18.025421 containerd[1484]: time="2026-03-14T00:18:18.025393542Z" level=info msg="StopPodSandbox for \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\""
Mar 14 00:18:18.025456 containerd[1484]: time="2026-03-14T00:18:18.025437342Z" level=info msg="Container to stop \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:18:18.025456 containerd[1484]: time="2026-03-14T00:18:18.025449582Z" level=info msg="Container to stop \"7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:18:18.025524 containerd[1484]: time="2026-03-14T00:18:18.025458982Z" level=info msg="Container to stop \"2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:18:18.025524 containerd[1484]: time="2026-03-14T00:18:18.025467982Z" level=info msg="Container to stop \"047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:18:18.025524 containerd[1484]: time="2026-03-14T00:18:18.025476462Z" level=info msg="Container to stop \"e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:18:18.027613 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12-shm.mount: Deactivated successfully.
Mar 14 00:18:18.046637 systemd[1]: cri-containerd-fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12.scope: Deactivated successfully.
Mar 14 00:18:18.066619 containerd[1484]: time="2026-03-14T00:18:18.066547512Z" level=info msg="shim disconnected" id=c8d759f4cc6a6c26ec9fb3fd58788ae9e89ad8e6c252e250448d0f465930d3d1 namespace=k8s.io
Mar 14 00:18:18.066619 containerd[1484]: time="2026-03-14T00:18:18.066608312Z" level=warning msg="cleaning up after shim disconnected" id=c8d759f4cc6a6c26ec9fb3fd58788ae9e89ad8e6c252e250448d0f465930d3d1 namespace=k8s.io
Mar 14 00:18:18.066619 containerd[1484]: time="2026-03-14T00:18:18.066617232Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:18:18.076814 containerd[1484]: time="2026-03-14T00:18:18.076625104Z" level=info msg="shim disconnected" id=fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12 namespace=k8s.io
Mar 14 00:18:18.076814 containerd[1484]: time="2026-03-14T00:18:18.076686744Z" level=warning msg="cleaning up after shim disconnected" id=fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12 namespace=k8s.io
Mar 14 00:18:18.076814 containerd[1484]: time="2026-03-14T00:18:18.076694384Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:18:18.090279 containerd[1484]: time="2026-03-14T00:18:18.089806065Z" level=info msg="TearDown network for sandbox \"c8d759f4cc6a6c26ec9fb3fd58788ae9e89ad8e6c252e250448d0f465930d3d1\" successfully"
Mar 14 00:18:18.090279 containerd[1484]: time="2026-03-14T00:18:18.089842105Z" level=info msg="StopPodSandbox for \"c8d759f4cc6a6c26ec9fb3fd58788ae9e89ad8e6c252e250448d0f465930d3d1\" returns successfully"
Mar 14 00:18:18.091822 containerd[1484]: time="2026-03-14T00:18:18.091766792Z" level=info msg="TearDown network for sandbox \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\" successfully"
Mar 14 00:18:18.091822 containerd[1484]: time="2026-03-14T00:18:18.091798232Z" level=info msg="StopPodSandbox for \"fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12\" returns successfully"
Mar 14 00:18:18.216107 kubelet[2624]: I0314 00:18:18.215335 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvsjk\" (UniqueName: \"kubernetes.io/projected/e9815b12-f48c-4978-9810-3e75217b21ac-kube-api-access-fvsjk\") pod \"e9815b12-f48c-4978-9810-3e75217b21ac\" (UID: \"e9815b12-f48c-4978-9810-3e75217b21ac\") "
Mar 14 00:18:18.216107 kubelet[2624]: I0314 00:18:18.215417 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cni-path\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.216107 kubelet[2624]: I0314 00:18:18.215462 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cilium-run\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.216107 kubelet[2624]: I0314 00:18:18.215525 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-bpf-maps\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.216107 kubelet[2624]: I0314 00:18:18.215563 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-host-proc-sys-kernel\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.216107 kubelet[2624]: I0314 00:18:18.215620 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx486\" (UniqueName: \"kubernetes.io/projected/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-kube-api-access-tx486\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.220056 kubelet[2624]: I0314 00:18:18.215655 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-etc-cni-netd\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.220056 kubelet[2624]: I0314 00:18:18.215756 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-hostproc\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.220056 kubelet[2624]: I0314 00:18:18.215803 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cilium-config-path\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.220056 kubelet[2624]: I0314 00:18:18.215836 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-host-proc-sys-net\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.220056 kubelet[2624]: I0314 00:18:18.215876 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-hubble-tls\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.220056 kubelet[2624]: I0314 00:18:18.215908 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cilium-cgroup\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.220431 kubelet[2624]: I0314 00:18:18.215944 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9815b12-f48c-4978-9810-3e75217b21ac-cilium-config-path\") pod \"e9815b12-f48c-4978-9810-3e75217b21ac\" (UID: \"e9815b12-f48c-4978-9810-3e75217b21ac\") "
Mar 14 00:18:18.220431 kubelet[2624]: I0314 00:18:18.215984 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-clustermesh-secrets\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.220431 kubelet[2624]: I0314 00:18:18.216018 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-lib-modules\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.220431 kubelet[2624]: I0314 00:18:18.216055 2624 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-xtables-lock\") pod \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\" (UID: \"45336a9c-2c47-4ea2-91a1-ceecce85a2d1\") "
Mar 14 00:18:18.220431 kubelet[2624]: I0314 00:18:18.216188 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:18.220431 kubelet[2624]: I0314 00:18:18.216250 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cni-path" (OuterVolumeSpecName: "cni-path") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:18.220836 kubelet[2624]: I0314 00:18:18.216281 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:18.220836 kubelet[2624]: I0314 00:18:18.216311 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:18.220836 kubelet[2624]: I0314 00:18:18.216342 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:18.220836 kubelet[2624]: I0314 00:18:18.218563 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:18.220836 kubelet[2624]: I0314 00:18:18.218619 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-hostproc" (OuterVolumeSpecName: "hostproc") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:18.221748 kubelet[2624]: I0314 00:18:18.221399 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:18.221748 kubelet[2624]: I0314 00:18:18.221486 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:18.223604 kubelet[2624]: I0314 00:18:18.223384 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:18:18.229670 kubelet[2624]: I0314 00:18:18.229589 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-kube-api-access-tx486" (OuterVolumeSpecName: "kube-api-access-tx486") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "kube-api-access-tx486". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:18:18.231506 kubelet[2624]: I0314 00:18:18.231329 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9815b12-f48c-4978-9810-3e75217b21ac-kube-api-access-fvsjk" (OuterVolumeSpecName: "kube-api-access-fvsjk") pod "e9815b12-f48c-4978-9810-3e75217b21ac" (UID: "e9815b12-f48c-4978-9810-3e75217b21ac"). InnerVolumeSpecName "kube-api-access-fvsjk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:18:18.231836 kubelet[2624]: I0314 00:18:18.231740 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:18:18.232220 kubelet[2624]: I0314 00:18:18.232196 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:18:18.232593 kubelet[2624]: I0314 00:18:18.232570 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9815b12-f48c-4978-9810-3e75217b21ac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e9815b12-f48c-4978-9810-3e75217b21ac" (UID: "e9815b12-f48c-4978-9810-3e75217b21ac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:18:18.233244 kubelet[2624]: I0314 00:18:18.233200 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "45336a9c-2c47-4ea2-91a1-ceecce85a2d1" (UID: "45336a9c-2c47-4ea2-91a1-ceecce85a2d1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 14 00:18:18.246229 systemd[1]: Removed slice kubepods-burstable-pod45336a9c_2c47_4ea2_91a1_ceecce85a2d1.slice - libcontainer container kubepods-burstable-pod45336a9c_2c47_4ea2_91a1_ceecce85a2d1.slice.
Mar 14 00:18:18.246340 systemd[1]: kubepods-burstable-pod45336a9c_2c47_4ea2_91a1_ceecce85a2d1.slice: Consumed 7.274s CPU time.
Mar 14 00:18:18.250968 systemd[1]: Removed slice kubepods-besteffort-pode9815b12_f48c_4978_9810_3e75217b21ac.slice - libcontainer container kubepods-besteffort-pode9815b12_f48c_4978_9810_3e75217b21ac.slice.
Mar 14 00:18:18.316689 kubelet[2624]: I0314 00:18:18.316584 2624 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tx486\" (UniqueName: \"kubernetes.io/projected/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-kube-api-access-tx486\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\""
Mar 14 00:18:18.316689 kubelet[2624]: I0314 00:18:18.316640 2624 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-etc-cni-netd\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\""
Mar 14 00:18:18.316689 kubelet[2624]: I0314 00:18:18.316651 2624 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-hostproc\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\""
Mar 14 00:18:18.316689 kubelet[2624]: I0314 00:18:18.316660 2624 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cilium-config-path\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\""
Mar 14 00:18:18.316689 kubelet[2624]: I0314 00:18:18.316669 2624 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-host-proc-sys-net\") on node
\"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\"" Mar 14 00:18:18.316689 kubelet[2624]: I0314 00:18:18.316678 2624 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-hubble-tls\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\"" Mar 14 00:18:18.316689 kubelet[2624]: I0314 00:18:18.316686 2624 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cilium-cgroup\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\"" Mar 14 00:18:18.316689 kubelet[2624]: I0314 00:18:18.316695 2624 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9815b12-f48c-4978-9810-3e75217b21ac-cilium-config-path\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\"" Mar 14 00:18:18.317185 kubelet[2624]: I0314 00:18:18.316703 2624 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-clustermesh-secrets\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\"" Mar 14 00:18:18.317185 kubelet[2624]: I0314 00:18:18.316713 2624 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-lib-modules\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\"" Mar 14 00:18:18.317185 kubelet[2624]: I0314 00:18:18.316734 2624 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-xtables-lock\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\"" Mar 14 00:18:18.317185 kubelet[2624]: I0314 00:18:18.316743 2624 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fvsjk\" (UniqueName: 
\"kubernetes.io/projected/e9815b12-f48c-4978-9810-3e75217b21ac-kube-api-access-fvsjk\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\"" Mar 14 00:18:18.317185 kubelet[2624]: I0314 00:18:18.316753 2624 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cni-path\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\"" Mar 14 00:18:18.317185 kubelet[2624]: I0314 00:18:18.316762 2624 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-cilium-run\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\"" Mar 14 00:18:18.317185 kubelet[2624]: I0314 00:18:18.316770 2624 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-bpf-maps\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\"" Mar 14 00:18:18.317185 kubelet[2624]: I0314 00:18:18.316778 2624 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45336a9c-2c47-4ea2-91a1-ceecce85a2d1-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-c13e9e2860\" DevicePath \"\"" Mar 14 00:18:18.749757 kubelet[2624]: I0314 00:18:18.749692 2624 scope.go:117] "RemoveContainer" containerID="83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361" Mar 14 00:18:18.755395 containerd[1484]: time="2026-03-14T00:18:18.754954888Z" level=info msg="RemoveContainer for \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\"" Mar 14 00:18:18.764443 containerd[1484]: time="2026-03-14T00:18:18.764401518Z" level=info msg="RemoveContainer for \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\" returns successfully" Mar 14 00:18:18.765676 kubelet[2624]: I0314 00:18:18.765351 2624 scope.go:117] "RemoveContainer" 
containerID="83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361" Mar 14 00:18:18.765961 kubelet[2624]: E0314 00:18:18.765927 2624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\": not found" containerID="83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361" Mar 14 00:18:18.766161 containerd[1484]: time="2026-03-14T00:18:18.765790682Z" level=error msg="ContainerStatus for \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\": not found" Mar 14 00:18:18.766286 kubelet[2624]: I0314 00:18:18.765956 2624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361"} err="failed to get container status \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\": rpc error: code = NotFound desc = an error occurred when try to find container \"83e98da44086fea5d8b10a14e401b63476edd51c211194e8423a8d95a34d6361\": not found" Mar 14 00:18:18.766286 kubelet[2624]: I0314 00:18:18.765985 2624 scope.go:117] "RemoveContainer" containerID="dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186" Mar 14 00:18:18.767120 containerd[1484]: time="2026-03-14T00:18:18.767068766Z" level=info msg="RemoveContainer for \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\"" Mar 14 00:18:18.772275 containerd[1484]: time="2026-03-14T00:18:18.772237623Z" level=info msg="RemoveContainer for \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\" returns successfully" Mar 14 00:18:18.773302 kubelet[2624]: I0314 00:18:18.773275 2624 scope.go:117] "RemoveContainer" 
containerID="e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4" Mar 14 00:18:18.776419 containerd[1484]: time="2026-03-14T00:18:18.776386076Z" level=info msg="RemoveContainer for \"e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4\"" Mar 14 00:18:18.781761 containerd[1484]: time="2026-03-14T00:18:18.781475012Z" level=info msg="RemoveContainer for \"e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4\" returns successfully" Mar 14 00:18:18.782000 kubelet[2624]: I0314 00:18:18.781959 2624 scope.go:117] "RemoveContainer" containerID="047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858" Mar 14 00:18:18.784514 containerd[1484]: time="2026-03-14T00:18:18.784476302Z" level=info msg="RemoveContainer for \"047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858\"" Mar 14 00:18:18.795483 containerd[1484]: time="2026-03-14T00:18:18.795442576Z" level=info msg="RemoveContainer for \"047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858\" returns successfully" Mar 14 00:18:18.796175 kubelet[2624]: I0314 00:18:18.795852 2624 scope.go:117] "RemoveContainer" containerID="2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372" Mar 14 00:18:18.797255 containerd[1484]: time="2026-03-14T00:18:18.797157222Z" level=info msg="RemoveContainer for \"2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372\"" Mar 14 00:18:18.799851 containerd[1484]: time="2026-03-14T00:18:18.799809390Z" level=info msg="RemoveContainer for \"2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372\" returns successfully" Mar 14 00:18:18.800105 kubelet[2624]: I0314 00:18:18.800056 2624 scope.go:117] "RemoveContainer" containerID="7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d" Mar 14 00:18:18.801270 containerd[1484]: time="2026-03-14T00:18:18.801248315Z" level=info msg="RemoveContainer for \"7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d\"" Mar 14 00:18:18.805099 
containerd[1484]: time="2026-03-14T00:18:18.804986846Z" level=info msg="RemoveContainer for \"7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d\" returns successfully" Mar 14 00:18:18.805395 kubelet[2624]: I0314 00:18:18.805277 2624 scope.go:117] "RemoveContainer" containerID="dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186" Mar 14 00:18:18.805683 containerd[1484]: time="2026-03-14T00:18:18.805568288Z" level=error msg="ContainerStatus for \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\": not found" Mar 14 00:18:18.805867 kubelet[2624]: E0314 00:18:18.805813 2624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\": not found" containerID="dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186" Mar 14 00:18:18.805867 kubelet[2624]: I0314 00:18:18.805838 2624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186"} err="failed to get container status \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\": rpc error: code = NotFound desc = an error occurred when try to find container \"dee33471af8849fefb1de0e742c897225973e4fd150bcb68f4c496c27eb21186\": not found" Mar 14 00:18:18.806182 kubelet[2624]: I0314 00:18:18.805858 2624 scope.go:117] "RemoveContainer" containerID="e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4" Mar 14 00:18:18.806324 containerd[1484]: time="2026-03-14T00:18:18.806067250Z" level=error msg="ContainerStatus for \"e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4\" failed" error="rpc error: code = NotFound desc = an 
error occurred when try to find container \"e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4\": not found" Mar 14 00:18:18.806519 kubelet[2624]: E0314 00:18:18.806191 2624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4\": not found" containerID="e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4" Mar 14 00:18:18.806519 kubelet[2624]: I0314 00:18:18.806212 2624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4"} err="failed to get container status \"e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"e262ed251801c1cc0009efb0e6d19f0793cc51e6fdc3c18fcf1724fe9661c8c4\": not found" Mar 14 00:18:18.806519 kubelet[2624]: I0314 00:18:18.806229 2624 scope.go:117] "RemoveContainer" containerID="047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858" Mar 14 00:18:18.806760 containerd[1484]: time="2026-03-14T00:18:18.806433491Z" level=error msg="ContainerStatus for \"047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858\": not found" Mar 14 00:18:18.807260 kubelet[2624]: E0314 00:18:18.807034 2624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858\": not found" containerID="047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858" Mar 14 00:18:18.807260 kubelet[2624]: I0314 00:18:18.807086 2624 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858"} err="failed to get container status \"047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858\": rpc error: code = NotFound desc = an error occurred when try to find container \"047a862832bc20edb2add83b891b0169180f1531469f3790b05f14d6d7fff858\": not found" Mar 14 00:18:18.807260 kubelet[2624]: I0314 00:18:18.807123 2624 scope.go:117] "RemoveContainer" containerID="2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372" Mar 14 00:18:18.807753 kubelet[2624]: E0314 00:18:18.807539 2624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372\": not found" containerID="2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372" Mar 14 00:18:18.807753 kubelet[2624]: I0314 00:18:18.807562 2624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372"} err="failed to get container status \"2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372\": not found" Mar 14 00:18:18.807753 kubelet[2624]: I0314 00:18:18.807575 2624 scope.go:117] "RemoveContainer" containerID="7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d" Mar 14 00:18:18.808298 containerd[1484]: time="2026-03-14T00:18:18.807404094Z" level=error msg="ContainerStatus for \"2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e329262f4fa9f378c8454debb81a3cfe6b882f1b98c48d01ab1c83ab1590372\": not found" Mar 14 
00:18:18.808564 containerd[1484]: time="2026-03-14T00:18:18.808182296Z" level=error msg="ContainerStatus for \"7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d\": not found" Mar 14 00:18:18.809037 kubelet[2624]: E0314 00:18:18.808620 2624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d\": not found" containerID="7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d" Mar 14 00:18:18.809037 kubelet[2624]: I0314 00:18:18.808639 2624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d"} err="failed to get container status \"7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d\": rpc error: code = NotFound desc = an error occurred when try to find container \"7bb03294d6b3d1d88d2e7bdab9ac6c8a04d347768ffdab7af3eed1e00a5a380d\": not found" Mar 14 00:18:18.911798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb20ccd37a27472f17f2abd39ef5012a6beb2aa2d8e37023dfdd755994258c12-rootfs.mount: Deactivated successfully. Mar 14 00:18:18.912011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8d759f4cc6a6c26ec9fb3fd58788ae9e89ad8e6c252e250448d0f465930d3d1-rootfs.mount: Deactivated successfully. Mar 14 00:18:18.912131 systemd[1]: var-lib-kubelet-pods-45336a9c\x2d2c47\x2d4ea2\x2d91a1\x2dceecce85a2d1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtx486.mount: Deactivated successfully. 
Mar 14 00:18:18.912252 systemd[1]: var-lib-kubelet-pods-e9815b12\x2df48c\x2d4978\x2d9810\x2d3e75217b21ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfvsjk.mount: Deactivated successfully.
Mar 14 00:18:18.912353 systemd[1]: var-lib-kubelet-pods-45336a9c\x2d2c47\x2d4ea2\x2d91a1\x2dceecce85a2d1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 14 00:18:18.912460 systemd[1]: var-lib-kubelet-pods-45336a9c\x2d2c47\x2d4ea2\x2d91a1\x2dceecce85a2d1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 14 00:18:19.930799 sshd[4224]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:19.936018 systemd[1]: sshd@27-168.119.153.241:22-68.220.241.50:39152.service: Deactivated successfully.
Mar 14 00:18:19.938977 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:18:19.939169 systemd[1]: session-21.scope: Consumed 1.024s CPU time.
Mar 14 00:18:19.939894 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:18:19.940863 systemd-logind[1452]: Removed session 21.
Mar 14 00:18:20.041826 systemd[1]: Started sshd@28-168.119.153.241:22-68.220.241.50:39160.service - OpenSSH per-connection server daemon (68.220.241.50:39160).
Mar 14 00:18:20.238215 kubelet[2624]: I0314 00:18:20.238099 2624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45336a9c-2c47-4ea2-91a1-ceecce85a2d1" path="/var/lib/kubelet/pods/45336a9c-2c47-4ea2-91a1-ceecce85a2d1/volumes"
Mar 14 00:18:20.238915 kubelet[2624]: I0314 00:18:20.238871 2624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9815b12-f48c-4978-9810-3e75217b21ac" path="/var/lib/kubelet/pods/e9815b12-f48c-4978-9810-3e75217b21ac/volumes"
Mar 14 00:18:20.330195 kubelet[2624]: E0314 00:18:20.330133 2624 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 14 00:18:20.627404 sshd[4388]: Accepted publickey for core from 68.220.241.50 port 39160 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:18:20.630058 sshd[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:20.635307 systemd-logind[1452]: New session 22 of user core.
Mar 14 00:18:20.645824 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:18:22.819030 systemd[1]: Created slice kubepods-burstable-pod8688cba7_5b37_4994_9e3a_5d6fb4d2397a.slice - libcontainer container kubepods-burstable-pod8688cba7_5b37_4994_9e3a_5d6fb4d2397a.slice.
Mar 14 00:18:22.865442 sshd[4388]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:22.870072 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:18:22.870848 systemd[1]: sshd@28-168.119.153.241:22-68.220.241.50:39160.service: Deactivated successfully.
Mar 14 00:18:22.873659 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:18:22.874025 systemd[1]: session-22.scope: Consumed 1.748s CPU time.
Mar 14 00:18:22.875615 systemd-logind[1452]: Removed session 22.
Mar 14 00:18:22.948954 kubelet[2624]: I0314 00:18:22.948634 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-cilium-run\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.948954 kubelet[2624]: I0314 00:18:22.948776 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-hostproc\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.948954 kubelet[2624]: I0314 00:18:22.948827 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-clustermesh-secrets\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.948954 kubelet[2624]: I0314 00:18:22.948867 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-cilium-ipsec-secrets\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.948954 kubelet[2624]: I0314 00:18:22.948902 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-etc-cni-netd\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.949808 kubelet[2624]: I0314 00:18:22.948976 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-cilium-config-path\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.949808 kubelet[2624]: I0314 00:18:22.949046 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-host-proc-sys-net\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.949808 kubelet[2624]: I0314 00:18:22.949100 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-host-proc-sys-kernel\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.949808 kubelet[2624]: I0314 00:18:22.949135 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-hubble-tls\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.949808 kubelet[2624]: I0314 00:18:22.949186 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-bpf-maps\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.949808 kubelet[2624]: I0314 00:18:22.949221 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-cilium-cgroup\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.950102 kubelet[2624]: I0314 00:18:22.949257 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-xtables-lock\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.950102 kubelet[2624]: I0314 00:18:22.949306 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-cni-path\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.950102 kubelet[2624]: I0314 00:18:22.949341 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4qj2\" (UniqueName: \"kubernetes.io/projected/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-kube-api-access-k4qj2\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.950102 kubelet[2624]: I0314 00:18:22.949392 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8688cba7-5b37-4994-9e3a-5d6fb4d2397a-lib-modules\") pod \"cilium-7jffr\" (UID: \"8688cba7-5b37-4994-9e3a-5d6fb4d2397a\") " pod="kube-system/cilium-7jffr"
Mar 14 00:18:22.980995 systemd[1]: Started sshd@29-168.119.153.241:22-68.220.241.50:43908.service - OpenSSH per-connection server daemon (68.220.241.50:43908).
Mar 14 00:18:23.125770 containerd[1484]: time="2026-03-14T00:18:23.124417275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7jffr,Uid:8688cba7-5b37-4994-9e3a-5d6fb4d2397a,Namespace:kube-system,Attempt:0,}"
Mar 14 00:18:23.149174 containerd[1484]: time="2026-03-14T00:18:23.148800351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:18:23.149174 containerd[1484]: time="2026-03-14T00:18:23.148907791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:18:23.149174 containerd[1484]: time="2026-03-14T00:18:23.148951151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:18:23.149174 containerd[1484]: time="2026-03-14T00:18:23.149080512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:18:23.170830 systemd[1]: Started cri-containerd-cd9221bfa6890239b5eff815a0beb4b867ec9d3772a7801140e9c78e3badbb56.scope - libcontainer container cd9221bfa6890239b5eff815a0beb4b867ec9d3772a7801140e9c78e3badbb56.
Mar 14 00:18:23.200982 containerd[1484]: time="2026-03-14T00:18:23.200911313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7jffr,Uid:8688cba7-5b37-4994-9e3a-5d6fb4d2397a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd9221bfa6890239b5eff815a0beb4b867ec9d3772a7801140e9c78e3badbb56\""
Mar 14 00:18:23.208110 containerd[1484]: time="2026-03-14T00:18:23.207972335Z" level=info msg="CreateContainer within sandbox \"cd9221bfa6890239b5eff815a0beb4b867ec9d3772a7801140e9c78e3badbb56\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 14 00:18:23.217839 containerd[1484]: time="2026-03-14T00:18:23.217755406Z" level=info msg="CreateContainer within sandbox \"cd9221bfa6890239b5eff815a0beb4b867ec9d3772a7801140e9c78e3badbb56\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6372888ffe15d51384218e11f62eaa37e05ad597ffb3f3f1648e3342b2053eca\""
Mar 14 00:18:23.219908 containerd[1484]: time="2026-03-14T00:18:23.218934769Z" level=info msg="StartContainer for \"6372888ffe15d51384218e11f62eaa37e05ad597ffb3f3f1648e3342b2053eca\""
Mar 14 00:18:23.247755 systemd[1]: Started cri-containerd-6372888ffe15d51384218e11f62eaa37e05ad597ffb3f3f1648e3342b2053eca.scope - libcontainer container 6372888ffe15d51384218e11f62eaa37e05ad597ffb3f3f1648e3342b2053eca.
Mar 14 00:18:23.275235 containerd[1484]: time="2026-03-14T00:18:23.275196704Z" level=info msg="StartContainer for \"6372888ffe15d51384218e11f62eaa37e05ad597ffb3f3f1648e3342b2053eca\" returns successfully"
Mar 14 00:18:23.284728 systemd[1]: cri-containerd-6372888ffe15d51384218e11f62eaa37e05ad597ffb3f3f1648e3342b2053eca.scope: Deactivated successfully.
Mar 14 00:18:23.321053 containerd[1484]: time="2026-03-14T00:18:23.320647726Z" level=info msg="shim disconnected" id=6372888ffe15d51384218e11f62eaa37e05ad597ffb3f3f1648e3342b2053eca namespace=k8s.io
Mar 14 00:18:23.321053 containerd[1484]: time="2026-03-14T00:18:23.320776526Z" level=warning msg="cleaning up after shim disconnected" id=6372888ffe15d51384218e11f62eaa37e05ad597ffb3f3f1648e3342b2053eca namespace=k8s.io
Mar 14 00:18:23.321053 containerd[1484]: time="2026-03-14T00:18:23.320797526Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:18:23.563476 sshd[4399]: Accepted publickey for core from 68.220.241.50 port 43908 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:18:23.565690 sshd[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:23.571257 systemd-logind[1452]: New session 23 of user core.
Mar 14 00:18:23.576747 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:18:23.781600 containerd[1484]: time="2026-03-14T00:18:23.781555120Z" level=info msg="CreateContainer within sandbox \"cd9221bfa6890239b5eff815a0beb4b867ec9d3772a7801140e9c78e3badbb56\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 14 00:18:23.790914 containerd[1484]: time="2026-03-14T00:18:23.790851309Z" level=info msg="CreateContainer within sandbox \"cd9221bfa6890239b5eff815a0beb4b867ec9d3772a7801140e9c78e3badbb56\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3218e2e0901ec8e9e1f9fe9036982aa4e3e1c4636e536da132c9db52e46b28d3\""
Mar 14 00:18:23.792424 containerd[1484]: time="2026-03-14T00:18:23.791680512Z" level=info msg="StartContainer for \"3218e2e0901ec8e9e1f9fe9036982aa4e3e1c4636e536da132c9db52e46b28d3\""
Mar 14 00:18:23.818798 systemd[1]: Started cri-containerd-3218e2e0901ec8e9e1f9fe9036982aa4e3e1c4636e536da132c9db52e46b28d3.scope - libcontainer container 3218e2e0901ec8e9e1f9fe9036982aa4e3e1c4636e536da132c9db52e46b28d3.
Mar 14 00:18:23.848868 containerd[1484]: time="2026-03-14T00:18:23.848742970Z" level=info msg="StartContainer for \"3218e2e0901ec8e9e1f9fe9036982aa4e3e1c4636e536da132c9db52e46b28d3\" returns successfully"
Mar 14 00:18:23.852860 systemd[1]: cri-containerd-3218e2e0901ec8e9e1f9fe9036982aa4e3e1c4636e536da132c9db52e46b28d3.scope: Deactivated successfully.
Mar 14 00:18:23.885143 containerd[1484]: time="2026-03-14T00:18:23.885060563Z" level=info msg="shim disconnected" id=3218e2e0901ec8e9e1f9fe9036982aa4e3e1c4636e536da132c9db52e46b28d3 namespace=k8s.io
Mar 14 00:18:23.885143 containerd[1484]: time="2026-03-14T00:18:23.885136003Z" level=warning msg="cleaning up after shim disconnected" id=3218e2e0901ec8e9e1f9fe9036982aa4e3e1c4636e536da132c9db52e46b28d3 namespace=k8s.io
Mar 14 00:18:23.885143 containerd[1484]: time="2026-03-14T00:18:23.885146963Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:18:23.897465 containerd[1484]: time="2026-03-14T00:18:23.897413801Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:18:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:18:23.982156 sshd[4399]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:23.988475 systemd[1]: sshd@29-168.119.153.241:22-68.220.241.50:43908.service: Deactivated successfully.
Mar 14 00:18:23.991080 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:18:23.992142 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:18:23.993740 systemd-logind[1452]: Removed session 23.
Mar 14 00:18:24.094070 systemd[1]: Started sshd@30-168.119.153.241:22-68.220.241.50:43910.service - OpenSSH per-connection server daemon (68.220.241.50:43910).
Mar 14 00:18:24.504469 kubelet[2624]: I0314 00:18:24.504003 2624 setters.go:618] "Node became not ready" node="ci-4081-3-6-n-c13e9e2860" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T00:18:24Z","lastTransitionTime":"2026-03-14T00:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 14 00:18:24.683357 sshd[4575]: Accepted publickey for core from 68.220.241.50 port 43910 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:18:24.684593 sshd[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:24.689427 systemd-logind[1452]: New session 24 of user core.
Mar 14 00:18:24.696736 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 14 00:18:24.788854 containerd[1484]: time="2026-03-14T00:18:24.788807728Z" level=info msg="CreateContainer within sandbox \"cd9221bfa6890239b5eff815a0beb4b867ec9d3772a7801140e9c78e3badbb56\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 14 00:18:24.803379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425273153.mount: Deactivated successfully.
Mar 14 00:18:24.806284 containerd[1484]: time="2026-03-14T00:18:24.806169542Z" level=info msg="CreateContainer within sandbox \"cd9221bfa6890239b5eff815a0beb4b867ec9d3772a7801140e9c78e3badbb56\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"50ee13e110c850487118c247648f3db76da9c5112e76d49d8dc0fde790b12204\""
Mar 14 00:18:24.807224 containerd[1484]: time="2026-03-14T00:18:24.807104265Z" level=info msg="StartContainer for \"50ee13e110c850487118c247648f3db76da9c5112e76d49d8dc0fde790b12204\""
Mar 14 00:18:24.846677 systemd[1]: Started cri-containerd-50ee13e110c850487118c247648f3db76da9c5112e76d49d8dc0fde790b12204.scope - libcontainer container 50ee13e110c850487118c247648f3db76da9c5112e76d49d8dc0fde790b12204.
Mar 14 00:18:24.877553 containerd[1484]: time="2026-03-14T00:18:24.877219123Z" level=info msg="StartContainer for \"50ee13e110c850487118c247648f3db76da9c5112e76d49d8dc0fde790b12204\" returns successfully"
Mar 14 00:18:24.881279 systemd[1]: cri-containerd-50ee13e110c850487118c247648f3db76da9c5112e76d49d8dc0fde790b12204.scope: Deactivated successfully.
Mar 14 00:18:24.907987 containerd[1484]: time="2026-03-14T00:18:24.907913018Z" level=info msg="shim disconnected" id=50ee13e110c850487118c247648f3db76da9c5112e76d49d8dc0fde790b12204 namespace=k8s.io
Mar 14 00:18:24.907987 containerd[1484]: time="2026-03-14T00:18:24.907986978Z" level=warning msg="cleaning up after shim disconnected" id=50ee13e110c850487118c247648f3db76da9c5112e76d49d8dc0fde790b12204 namespace=k8s.io
Mar 14 00:18:24.908567 containerd[1484]: time="2026-03-14T00:18:24.908002738Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:18:25.058428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50ee13e110c850487118c247648f3db76da9c5112e76d49d8dc0fde790b12204-rootfs.mount: Deactivated successfully.
Mar 14 00:18:25.331241 kubelet[2624]: E0314 00:18:25.331090 2624 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 14 00:18:25.798094 containerd[1484]: time="2026-03-14T00:18:25.798041894Z" level=info msg="CreateContainer within sandbox \"cd9221bfa6890239b5eff815a0beb4b867ec9d3772a7801140e9c78e3badbb56\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:18:25.813655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3754966975.mount: Deactivated successfully.
Mar 14 00:18:25.816094 containerd[1484]: time="2026-03-14T00:18:25.816052310Z" level=info msg="CreateContainer within sandbox \"cd9221bfa6890239b5eff815a0beb4b867ec9d3772a7801140e9c78e3badbb56\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f6b86f17647643ba28186e83c1821379516cfcbd4e50113bed66cd3b4ebc93db\""
Mar 14 00:18:25.818516 containerd[1484]: time="2026-03-14T00:18:25.817455514Z" level=info msg="StartContainer for \"f6b86f17647643ba28186e83c1821379516cfcbd4e50113bed66cd3b4ebc93db\""
Mar 14 00:18:25.863841 systemd[1]: Started cri-containerd-f6b86f17647643ba28186e83c1821379516cfcbd4e50113bed66cd3b4ebc93db.scope - libcontainer container f6b86f17647643ba28186e83c1821379516cfcbd4e50113bed66cd3b4ebc93db.
Mar 14 00:18:25.903005 systemd[1]: cri-containerd-f6b86f17647643ba28186e83c1821379516cfcbd4e50113bed66cd3b4ebc93db.scope: Deactivated successfully.
Mar 14 00:18:25.905175 containerd[1484]: time="2026-03-14T00:18:25.904788624Z" level=info msg="StartContainer for \"f6b86f17647643ba28186e83c1821379516cfcbd4e50113bed66cd3b4ebc93db\" returns successfully"
Mar 14 00:18:25.937920 containerd[1484]: time="2026-03-14T00:18:25.937631846Z" level=info msg="shim disconnected" id=f6b86f17647643ba28186e83c1821379516cfcbd4e50113bed66cd3b4ebc93db namespace=k8s.io
Mar 14 00:18:25.937920 containerd[1484]: time="2026-03-14T00:18:25.937725566Z" level=warning msg="cleaning up after shim disconnected" id=f6b86f17647643ba28186e83c1821379516cfcbd4e50113bed66cd3b4ebc93db namespace=k8s.io
Mar 14 00:18:25.937920 containerd[1484]: time="2026-03-14T00:18:25.937739006Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:18:26.060202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6b86f17647643ba28186e83c1821379516cfcbd4e50113bed66cd3b4ebc93db-rootfs.mount: Deactivated successfully.
Mar 14 00:18:26.799961 containerd[1484]: time="2026-03-14T00:18:26.799902908Z" level=info msg="CreateContainer within sandbox \"cd9221bfa6890239b5eff815a0beb4b867ec9d3772a7801140e9c78e3badbb56\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:18:26.815871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3061769756.mount: Deactivated successfully.
Mar 14 00:18:26.817755 containerd[1484]: time="2026-03-14T00:18:26.817702163Z" level=info msg="CreateContainer within sandbox \"cd9221bfa6890239b5eff815a0beb4b867ec9d3772a7801140e9c78e3badbb56\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"de1d447f201f0e0eb3f3381c4964430db0d36342958ab9024671ac9cc55f6b87\""
Mar 14 00:18:26.819528 containerd[1484]: time="2026-03-14T00:18:26.819141127Z" level=info msg="StartContainer for \"de1d447f201f0e0eb3f3381c4964430db0d36342958ab9024671ac9cc55f6b87\""
Mar 14 00:18:26.851784 systemd[1]: Started cri-containerd-de1d447f201f0e0eb3f3381c4964430db0d36342958ab9024671ac9cc55f6b87.scope - libcontainer container de1d447f201f0e0eb3f3381c4964430db0d36342958ab9024671ac9cc55f6b87.
Mar 14 00:18:26.881469 containerd[1484]: time="2026-03-14T00:18:26.880826318Z" level=info msg="StartContainer for \"de1d447f201f0e0eb3f3381c4964430db0d36342958ab9024671ac9cc55f6b87\" returns successfully"
Mar 14 00:18:27.214552 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 14 00:18:29.248486 systemd[1]: run-containerd-runc-k8s.io-de1d447f201f0e0eb3f3381c4964430db0d36342958ab9024671ac9cc55f6b87-runc.7GYUUw.mount: Deactivated successfully.
Mar 14 00:18:30.224365 systemd-networkd[1364]: lxc_health: Link UP
Mar 14 00:18:30.229726 systemd-networkd[1364]: lxc_health: Gained carrier
Mar 14 00:18:31.150015 kubelet[2624]: I0314 00:18:31.149944 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7jffr" podStartSLOduration=9.149930771 podStartE2EDuration="9.149930771s" podCreationTimestamp="2026-03-14 00:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:18:27.82984152 +0000 UTC m=+197.730909402" watchObservedRunningTime="2026-03-14 00:18:31.149930771 +0000 UTC m=+201.050998653"
Mar 14 00:18:31.423002 systemd[1]: run-containerd-runc-k8s.io-de1d447f201f0e0eb3f3381c4964430db0d36342958ab9024671ac9cc55f6b87-runc.A0vsB4.mount: Deactivated successfully.
Mar 14 00:18:31.498159 systemd-networkd[1364]: lxc_health: Gained IPv6LL
Mar 14 00:18:35.877569 sshd[4575]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:35.883270 systemd[1]: sshd@30-168.119.153.241:22-68.220.241.50:43910.service: Deactivated successfully.
Mar 14 00:18:35.888387 systemd[1]: session-24.scope: Deactivated successfully.
Mar 14 00:18:35.890254 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit.
Mar 14 00:18:35.891633 systemd-logind[1452]: Removed session 24.