Mar 14 00:13:20.903483 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 14 00:13:20.903533 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 13 22:32:52 -00 2026
Mar 14 00:13:20.903545 kernel: KASLR enabled
Mar 14 00:13:20.903551 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 14 00:13:20.903556 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Mar 14 00:13:20.903562 kernel: random: crng init done
Mar 14 00:13:20.903569 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:13:20.903590 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Mar 14 00:13:20.903596 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Mar 14 00:13:20.903604 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:20.903611 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:20.903617 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:20.903622 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:20.903629 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:20.903636 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:20.903644 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:20.903651 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:20.903657 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:20.903663 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 14 00:13:20.903670 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Mar 14 00:13:20.903676 kernel: NUMA: Failed to initialise from firmware
Mar 14 00:13:20.903682 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Mar 14 00:13:20.903689 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Mar 14 00:13:20.903695 kernel: Zone ranges:
Mar 14 00:13:20.903701 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 14 00:13:20.903709 kernel: DMA32 empty
Mar 14 00:13:20.903715 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Mar 14 00:13:20.903721 kernel: Movable zone start for each node
Mar 14 00:13:20.903728 kernel: Early memory node ranges
Mar 14 00:13:20.903734 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Mar 14 00:13:20.903740 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Mar 14 00:13:20.903747 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Mar 14 00:13:20.903753 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Mar 14 00:13:20.903759 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Mar 14 00:13:20.903766 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Mar 14 00:13:20.903772 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Mar 14 00:13:20.903778 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Mar 14 00:13:20.903786 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 14 00:13:20.903792 kernel: psci: probing for conduit method from ACPI.
Mar 14 00:13:20.903799 kernel: psci: PSCIv1.1 detected in firmware.
Mar 14 00:13:20.903808 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 14 00:13:20.903815 kernel: psci: Trusted OS migration not required
Mar 14 00:13:20.903821 kernel: psci: SMC Calling Convention v1.1
Mar 14 00:13:20.903830 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 14 00:13:20.903837 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 14 00:13:20.903843 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 14 00:13:20.903850 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 14 00:13:20.903866 kernel: Detected PIPT I-cache on CPU0
Mar 14 00:13:20.903873 kernel: CPU features: detected: GIC system register CPU interface
Mar 14 00:13:20.903880 kernel: CPU features: detected: Hardware dirty bit management
Mar 14 00:13:20.903887 kernel: CPU features: detected: Spectre-v4
Mar 14 00:13:20.903893 kernel: CPU features: detected: Spectre-BHB
Mar 14 00:13:20.903900 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 14 00:13:20.903909 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 14 00:13:20.903916 kernel: CPU features: detected: ARM erratum 1418040
Mar 14 00:13:20.903923 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 14 00:13:20.903930 kernel: alternatives: applying boot alternatives
Mar 14 00:13:20.903938 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:13:20.903945 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:13:20.903951 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:13:20.903958 kernel: Fallback order for Node 0: 0
Mar 14 00:13:20.903965 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Mar 14 00:13:20.903972 kernel: Policy zone: Normal
Mar 14 00:13:20.903978 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:13:20.903987 kernel: software IO TLB: area num 2.
Mar 14 00:13:20.903994 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Mar 14 00:13:20.904001 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Mar 14 00:13:20.904008 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:13:20.904015 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:13:20.904022 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:13:20.904029 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:13:20.904036 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:13:20.904043 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:13:20.904050 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:13:20.904056 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:13:20.904063 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 14 00:13:20.907618 kernel: GICv3: 256 SPIs implemented
Mar 14 00:13:20.907649 kernel: GICv3: 0 Extended SPIs implemented
Mar 14 00:13:20.907657 kernel: Root IRQ handler: gic_handle_irq
Mar 14 00:13:20.907664 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 14 00:13:20.907671 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 14 00:13:20.907678 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 14 00:13:20.907685 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 14 00:13:20.907692 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Mar 14 00:13:20.907699 kernel: GICv3: using LPI property table @0x00000001000e0000
Mar 14 00:13:20.907706 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Mar 14 00:13:20.907714 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:13:20.907728 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 14 00:13:20.907735 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 14 00:13:20.907742 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 14 00:13:20.907749 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 14 00:13:20.907756 kernel: Console: colour dummy device 80x25
Mar 14 00:13:20.907763 kernel: ACPI: Core revision 20230628
Mar 14 00:13:20.907771 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 14 00:13:20.907778 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:13:20.907785 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:13:20.907792 kernel: landlock: Up and running.
Mar 14 00:13:20.907800 kernel: SELinux: Initializing.
Mar 14 00:13:20.907807 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:20.907814 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:20.907821 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:13:20.907829 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:13:20.907836 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:13:20.907844 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:13:20.907851 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 14 00:13:20.907900 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 14 00:13:20.907912 kernel: Remapping and enabling EFI services.
Mar 14 00:13:20.907919 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:13:20.907926 kernel: Detected PIPT I-cache on CPU1
Mar 14 00:13:20.907933 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 14 00:13:20.907941 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Mar 14 00:13:20.907948 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 14 00:13:20.907955 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 14 00:13:20.907962 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:13:20.907969 kernel: SMP: Total of 2 processors activated.
Mar 14 00:13:20.907976 kernel: CPU features: detected: 32-bit EL0 Support
Mar 14 00:13:20.907984 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 14 00:13:20.907992 kernel: CPU features: detected: Common not Private translations
Mar 14 00:13:20.908005 kernel: CPU features: detected: CRC32 instructions
Mar 14 00:13:20.908014 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 14 00:13:20.908021 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 14 00:13:20.908029 kernel: CPU features: detected: LSE atomic instructions
Mar 14 00:13:20.908036 kernel: CPU features: detected: Privileged Access Never
Mar 14 00:13:20.908043 kernel: CPU features: detected: RAS Extension Support
Mar 14 00:13:20.908053 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 14 00:13:20.908060 kernel: CPU: All CPU(s) started at EL1
Mar 14 00:13:20.908068 kernel: alternatives: applying system-wide alternatives
Mar 14 00:13:20.908075 kernel: devtmpfs: initialized
Mar 14 00:13:20.908083 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:13:20.908091 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:13:20.908098 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:13:20.908106 kernel: SMBIOS 3.0.0 present.
Mar 14 00:13:20.908115 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Mar 14 00:13:20.908122 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:13:20.908130 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 14 00:13:20.908137 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 14 00:13:20.908145 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 14 00:13:20.908153 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:13:20.908160 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Mar 14 00:13:20.908168 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:13:20.908175 kernel: cpuidle: using governor menu
Mar 14 00:13:20.908184 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 14 00:13:20.908192 kernel: ASID allocator initialised with 32768 entries
Mar 14 00:13:20.908200 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:13:20.908207 kernel: Serial: AMBA PL011 UART driver
Mar 14 00:13:20.908215 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 14 00:13:20.908222 kernel: Modules: 0 pages in range for non-PLT usage
Mar 14 00:13:20.908230 kernel: Modules: 509008 pages in range for PLT usage
Mar 14 00:13:20.908238 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:13:20.908245 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:13:20.908254 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 14 00:13:20.908262 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 14 00:13:20.908269 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:13:20.908277 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:13:20.908285 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 14 00:13:20.908293 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 14 00:13:20.908300 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:13:20.908307 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:13:20.908315 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:13:20.908324 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:13:20.908332 kernel: ACPI: Interpreter enabled
Mar 14 00:13:20.908339 kernel: ACPI: Using GIC for interrupt routing
Mar 14 00:13:20.908394 kernel: ACPI: MCFG table detected, 1 entries
Mar 14 00:13:20.908408 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 14 00:13:20.908417 kernel: printk: console [ttyAMA0] enabled
Mar 14 00:13:20.908425 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:13:20.908664 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:13:20.908757 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 14 00:13:20.908829 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 14 00:13:20.908914 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 14 00:13:20.908982 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 14 00:13:20.908992 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 14 00:13:20.909000 kernel: PCI host bridge to bus 0000:00
Mar 14 00:13:20.909077 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 14 00:13:20.909138 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 14 00:13:20.909230 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 14 00:13:20.911324 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:13:20.911495 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 14 00:13:20.911618 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Mar 14 00:13:20.911700 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Mar 14 00:13:20.911772 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 14 00:13:20.911882 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:20.911956 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Mar 14 00:13:20.912037 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:20.912106 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Mar 14 00:13:20.912182 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:20.912263 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Mar 14 00:13:20.912349 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:20.912425 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Mar 14 00:13:20.912507 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:20.912592 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Mar 14 00:13:20.912671 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:20.912740 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Mar 14 00:13:20.912819 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:20.912940 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Mar 14 00:13:20.913019 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:20.913087 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Mar 14 00:13:20.913161 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:13:20.913376 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Mar 14 00:13:20.913515 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Mar 14 00:13:20.916801 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Mar 14 00:13:20.916956 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:13:20.917034 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Mar 14 00:13:20.917105 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 14 00:13:20.917174 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:13:20.917256 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 14 00:13:20.917338 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Mar 14 00:13:20.917416 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 14 00:13:20.917486 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Mar 14 00:13:20.919658 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Mar 14 00:13:20.919822 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 14 00:13:20.919962 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Mar 14 00:13:20.920057 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 14 00:13:20.920150 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Mar 14 00:13:20.920224 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Mar 14 00:13:20.920304 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 14 00:13:20.920375 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Mar 14 00:13:20.920444 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 14 00:13:20.920529 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:13:20.921706 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Mar 14 00:13:20.921802 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Mar 14 00:13:20.921893 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:13:20.921981 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Mar 14 00:13:20.922052 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Mar 14 00:13:20.922119 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Mar 14 00:13:20.922227 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Mar 14 00:13:20.922298 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Mar 14 00:13:20.922368 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Mar 14 00:13:20.922440 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 14 00:13:20.922516 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Mar 14 00:13:20.922618 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Mar 14 00:13:20.922693 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 14 00:13:20.922761 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Mar 14 00:13:20.922841 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Mar 14 00:13:20.922990 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 14 00:13:20.923075 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Mar 14 00:13:20.923156 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Mar 14 00:13:20.923228 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 14 00:13:20.923304 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Mar 14 00:13:20.923373 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Mar 14 00:13:20.923451 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 14 00:13:20.923518 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Mar 14 00:13:20.925680 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Mar 14 00:13:20.925798 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 14 00:13:20.925939 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Mar 14 00:13:20.926012 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Mar 14 00:13:20.926084 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 14 00:13:20.926153 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Mar 14 00:13:20.926228 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Mar 14 00:13:20.926297 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Mar 14 00:13:20.926365 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 14 00:13:20.926525 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Mar 14 00:13:20.926631 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 14 00:13:20.926728 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Mar 14 00:13:20.926797 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 14 00:13:20.926896 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Mar 14 00:13:20.926969 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 14 00:13:20.927040 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Mar 14 00:13:20.927106 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 14 00:13:20.927175 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Mar 14 00:13:20.927255 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 14 00:13:20.927333 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Mar 14 00:13:20.927401 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 14 00:13:20.927491 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Mar 14 00:13:20.927686 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 14 00:13:20.927809 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Mar 14 00:13:20.927907 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 14 00:13:20.927982 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Mar 14 00:13:20.928057 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Mar 14 00:13:20.928125 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Mar 14 00:13:20.928189 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 14 00:13:20.928268 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Mar 14 00:13:20.928351 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 14 00:13:20.928494 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Mar 14 00:13:20.928600 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 14 00:13:20.928814 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Mar 14 00:13:20.928963 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 14 00:13:20.929042 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Mar 14 00:13:20.929109 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 14 00:13:20.929178 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Mar 14 00:13:20.929244 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 14 00:13:20.929323 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Mar 14 00:13:20.929446 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 14 00:13:20.929546 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Mar 14 00:13:20.929693 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 14 00:13:20.929791 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Mar 14 00:13:20.929939 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Mar 14 00:13:20.930099 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Mar 14 00:13:20.930188 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Mar 14 00:13:20.930751 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 14 00:13:20.930823 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Mar 14 00:13:20.930955 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 14 00:13:20.931038 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 14 00:13:20.931104 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Mar 14 00:13:20.931254 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 14 00:13:20.931339 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Mar 14 00:13:20.931433 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 14 00:13:20.931507 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 14 00:13:20.931590 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Mar 14 00:13:20.935756 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 14 00:13:20.935942 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 14 00:13:20.936023 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Mar 14 00:13:20.936097 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 14 00:13:20.936164 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 14 00:13:20.936241 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Mar 14 00:13:20.936308 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 14 00:13:20.936383 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 14 00:13:20.936453 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 14 00:13:20.936520 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 14 00:13:20.936631 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Mar 14 00:13:20.936704 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 14 00:13:20.936879 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Mar 14 00:13:20.936982 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Mar 14 00:13:20.937054 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 14 00:13:20.937122 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 14 00:13:20.937188 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Mar 14 00:13:20.937301 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 14 00:13:20.937393 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Mar 14 00:13:20.937464 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Mar 14 00:13:20.937534 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 14 00:13:20.939736 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 14 00:13:20.939958 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Mar 14 00:13:20.940035 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 14 00:13:20.940112 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Mar 14 00:13:20.940182 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Mar 14 00:13:20.940251 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Mar 14 00:13:20.940322 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 14 00:13:20.940388 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 14 00:13:20.940464 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Mar 14 00:13:20.940530 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 14 00:13:20.942443 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 14 00:13:20.942521 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 14 00:13:20.942620 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Mar 14 00:13:20.942693 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 14 00:13:20.942763 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 14 00:13:20.942833 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Mar 14 00:13:20.942937 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Mar 14 00:13:20.943009 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 14 00:13:20.943088 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 14 00:13:20.943151 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 14 00:13:20.943211 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 14 00:13:20.943291 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 14 00:13:20.943976 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Mar 14 00:13:20.944056 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 14 00:13:20.944129 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Mar 14 00:13:20.944191 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Mar 14 00:13:20.944250 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 14 00:13:20.944319 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Mar 14 00:13:20.944380 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Mar 14 00:13:20.944442 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 14 00:13:20.945034 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Mar 14 00:13:20.945128 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Mar 14 00:13:20.945223 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 14 00:13:20.945297 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Mar 14 00:13:20.945388 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Mar 14 00:13:20.945464 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 14 00:13:20.945540 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Mar 14 00:13:20.945621 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Mar 14 00:13:20.945689 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 14 00:13:20.945759 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Mar 14 00:13:20.946436 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Mar 14 00:13:20.946512 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 14 00:13:20.946663 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Mar 14 00:13:20.946737 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Mar 14 00:13:20.946798 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 14 00:13:20.946918 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Mar 14 00:13:20.946990 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Mar 14 00:13:20.947060 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 14 00:13:20.947070 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 14 00:13:20.947078 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 14 00:13:20.947086 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 14 00:13:20.947094 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 14 00:13:20.947102 kernel: iommu: Default domain type: Translated
Mar 14 00:13:20.947110 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 14 00:13:20.947118 kernel: efivars: Registered efivars operations
Mar 14 00:13:20.947126 kernel: vgaarb: loaded
Mar 14 00:13:20.947141 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 14 00:13:20.947149 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:13:20.947157 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:13:20.947167 kernel: pnp: PnP ACPI init
Mar 14 00:13:20.947302 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 14 00:13:20.947321 kernel: pnp: PnP ACPI: found 1 devices
Mar 14 00:13:20.947330 kernel: NET: Registered PF_INET protocol family
Mar 14 00:13:20.947339 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:13:20.947351 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:13:20.947360 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:13:20.947368 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:13:20.947376
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 14 00:13:20.947384 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 14 00:13:20.947391 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:13:20.947399 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:13:20.947407 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 14 00:13:20.947560 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Mar 14 00:13:20.947620 kernel: PCI: CLS 0 bytes, default 64 Mar 14 00:13:20.947631 kernel: kvm [1]: HYP mode not available Mar 14 00:13:20.947639 kernel: Initialise system trusted keyrings Mar 14 00:13:20.947647 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 14 00:13:20.947655 kernel: Key type asymmetric registered Mar 14 00:13:20.947663 kernel: Asymmetric key parser 'x509' registered Mar 14 00:13:20.947670 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 14 00:13:20.947678 kernel: io scheduler mq-deadline registered Mar 14 00:13:20.947686 kernel: io scheduler kyber registered Mar 14 00:13:20.947697 kernel: io scheduler bfq registered Mar 14 00:13:20.947705 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Mar 14 00:13:20.947812 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Mar 14 00:13:20.947911 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Mar 14 00:13:20.947983 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:20.948054 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Mar 14 00:13:20.948124 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Mar 14 00:13:20.948197 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:20.948268 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Mar 14 00:13:20.948337 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Mar 14 00:13:20.948405 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:20.948476 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 14 00:13:20.948544 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 14 00:13:20.948769 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:20.948849 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 14 00:13:20.948939 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 14 00:13:20.949010 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:20.949731 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 14 00:13:20.949956 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 14 00:13:20.950110 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:20.950249 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 14 00:13:20.950332 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 14 00:13:20.950401 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:20.950472 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 14 00:13:20.950539 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 14 00:13:20.950657 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:13:20.950672 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Mar 14 00:13:20.950742 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Mar 14 00:13:20.950809 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Mar 14 00:13:20.950892 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 14 00:13:20.950904 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 14 00:13:20.950917 kernel: ACPI: button: Power Button [PWRB]
Mar 14 00:13:20.950925 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 14 00:13:20.951032 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Mar 14 00:13:20.951112 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Mar 14 00:13:20.951124 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:13:20.951133 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 14 00:13:20.951213 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Mar 14 00:13:20.951225 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Mar 14 00:13:20.951239 kernel: thunder_xcv, ver 1.0
Mar 14 00:13:20.951251 kernel: thunder_bgx, ver 1.0
Mar 14 00:13:20.951259 kernel: nicpf, ver 1.0
Mar 14 00:13:20.951266 kernel: nicvf, ver 1.0
Mar 14 00:13:20.951351 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 14 00:13:20.951419 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-14T00:13:20 UTC (1773447200)
Mar 14 00:13:20.951430 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 14 00:13:20.951438 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 14 00:13:20.951446 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 14 00:13:20.951456 kernel: watchdog: Hard watchdog permanently disabled
Mar 14 00:13:20.951464 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:13:20.951472 kernel: Segment Routing with IPv6
Mar 14 00:13:20.951480 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:13:20.951488 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:13:20.951496 kernel: Key type dns_resolver registered
Mar 14 00:13:20.951503 kernel: registered taskstats version 1
Mar 14 00:13:20.951511 kernel: Loading compiled-in X.509 certificates
Mar 14 00:13:20.951519 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 16e13a4d63c54048487d2b18c824fa4694264505'
Mar 14 00:13:20.951529 kernel: Key type .fscrypt registered
Mar 14 00:13:20.951537 kernel: Key type fscrypt-provisioning registered
Mar 14 00:13:20.951545 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:13:20.951552 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:13:20.951560 kernel: ima: No architecture policies found
Mar 14 00:13:20.951568 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 14 00:13:20.951589 kernel: clk: Disabling unused clocks
Mar 14 00:13:20.951597 kernel: Freeing unused kernel memory: 39424K
Mar 14 00:13:20.951605 kernel: Run /init as init process
Mar 14 00:13:20.951612 kernel: with arguments:
Mar 14 00:13:20.951622 kernel: /init
Mar 14 00:13:20.951630 kernel: with environment:
Mar 14 00:13:20.951637 kernel: HOME=/
Mar 14 00:13:20.951645 kernel: TERM=linux
Mar 14 00:13:20.951655 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:13:20.951666 systemd[1]: Detected virtualization kvm.
Mar 14 00:13:20.951674 systemd[1]: Detected architecture arm64.
Mar 14 00:13:20.951683 systemd[1]: Running in initrd.
Mar 14 00:13:20.951691 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:13:20.951699 systemd[1]: Hostname set to .
Mar 14 00:13:20.951708 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:13:20.951716 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:13:20.951724 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:20.951733 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:20.951742 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:13:20.951752 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:13:20.951761 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:13:20.951769 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:13:20.951779 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:13:20.951788 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:13:20.951796 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:20.951804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:20.951814 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:13:20.951822 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:13:20.951831 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:13:20.951839 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:13:20.951847 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:13:20.951865 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:13:20.951909 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:13:20.951922 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:13:20.951939 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:13:20.951952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:13:20.951961 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:13:20.951970 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:13:20.951978 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:13:20.951986 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:13:20.951996 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:13:20.952005 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:13:20.952013 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:13:20.952030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:13:20.952038 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:20.952047 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:13:20.952085 systemd-journald[236]: Collecting audit messages is disabled.
Mar 14 00:13:20.952111 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:20.952126 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:13:20.952136 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:13:20.952145 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:20.952156 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:20.952174 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:13:20.952182 kernel: Bridge firewalling registered
Mar 14 00:13:20.952191 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:13:20.952199 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:13:20.952209 systemd-journald[236]: Journal started
Mar 14 00:13:20.952231 systemd-journald[236]: Runtime Journal (/run/log/journal/c3c9dd3875734f5eb38a3033741c1f90) is 8.0M, max 76.6M, 68.6M free.
Mar 14 00:13:20.925388 systemd-modules-load[237]: Inserted module 'overlay'
Mar 14 00:13:20.947377 systemd-modules-load[237]: Inserted module 'br_netfilter'
Mar 14 00:13:20.961009 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:13:20.964269 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:13:20.964319 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:13:20.965349 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:20.975050 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:13:20.979422 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:13:20.982725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:13:20.992999 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:13:20.998985 dracut-cmdline[265]: dracut-dracut-053
Mar 14 00:13:21.001733 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:13:21.007441 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:13:21.015994 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:13:21.047998 systemd-resolved[293]: Positive Trust Anchors:
Mar 14 00:13:21.048632 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:13:21.048676 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:13:21.054495 systemd-resolved[293]: Defaulting to hostname 'linux'.
Mar 14 00:13:21.056594 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:13:21.057282 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:21.094602 kernel: SCSI subsystem initialized
Mar 14 00:13:21.098600 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:13:21.106606 kernel: iscsi: registered transport (tcp)
Mar 14 00:13:21.121748 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:13:21.121867 kernel: QLogic iSCSI HBA Driver
Mar 14 00:13:21.180630 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:13:21.183742 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:13:21.206773 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:13:21.206867 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:13:21.206890 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:13:21.260679 kernel: raid6: neonx8 gen() 15332 MB/s
Mar 14 00:13:21.275640 kernel: raid6: neonx4 gen() 15186 MB/s
Mar 14 00:13:21.292742 kernel: raid6: neonx2 gen() 12889 MB/s
Mar 14 00:13:21.309649 kernel: raid6: neonx1 gen() 10231 MB/s
Mar 14 00:13:21.326646 kernel: raid6: int64x8 gen() 6766 MB/s
Mar 14 00:13:21.343661 kernel: raid6: int64x4 gen() 7142 MB/s
Mar 14 00:13:21.360632 kernel: raid6: int64x2 gen() 5989 MB/s
Mar 14 00:13:21.377649 kernel: raid6: int64x1 gen() 4935 MB/s
Mar 14 00:13:21.377726 kernel: raid6: using algorithm neonx8 gen() 15332 MB/s
Mar 14 00:13:21.394645 kernel: raid6: .... xor() 11702 MB/s, rmw enabled
Mar 14 00:13:21.394737 kernel: raid6: using neon recovery algorithm
Mar 14 00:13:21.399863 kernel: xor: measuring software checksum speed
Mar 14 00:13:21.399945 kernel: 8regs : 19740 MB/sec
Mar 14 00:13:21.400746 kernel: 32regs : 18696 MB/sec
Mar 14 00:13:21.400787 kernel: arm64_neon : 26998 MB/sec
Mar 14 00:13:21.400823 kernel: xor: using function: arm64_neon (26998 MB/sec)
Mar 14 00:13:21.454647 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:13:21.469662 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:13:21.476781 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:13:21.501416 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Mar 14 00:13:21.505056 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:13:21.512755 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:13:21.540467 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Mar 14 00:13:21.578628 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:13:21.585899 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:13:21.637252 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:21.643981 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:13:21.663800 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:13:21.664627 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:13:21.665543 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:21.668265 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:13:21.675793 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:13:21.690367 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:13:21.742616 kernel: scsi host0: Virtio SCSI HBA
Mar 14 00:13:21.744901 kernel: ACPI: bus type USB registered
Mar 14 00:13:21.744948 kernel: usbcore: registered new interface driver usbfs
Mar 14 00:13:21.744960 kernel: usbcore: registered new interface driver hub
Mar 14 00:13:21.744969 kernel: usbcore: registered new device driver usb
Mar 14 00:13:21.751772 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 14 00:13:21.753604 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 14 00:13:21.763288 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:13:21.763367 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:21.766696 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:21.770386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:13:21.770462 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:21.772032 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:21.777799 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:21.785594 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 14 00:13:21.785796 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Mar 14 00:13:21.785941 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Mar 14 00:13:21.786982 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 14 00:13:21.787126 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Mar 14 00:13:21.789616 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Mar 14 00:13:21.789802 kernel: hub 1-0:1.0: USB hub found
Mar 14 00:13:21.791650 kernel: hub 1-0:1.0: 4 ports detected
Mar 14 00:13:21.793712 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Mar 14 00:13:21.794415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:21.797004 kernel: hub 2-0:1.0: USB hub found
Mar 14 00:13:21.797172 kernel: hub 2-0:1.0: 4 ports detected
Mar 14 00:13:21.806837 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:21.825090 kernel: sr 0:0:0:0: Power-on or device reset occurred
Mar 14 00:13:21.825316 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Mar 14 00:13:21.826706 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 14 00:13:21.829036 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Mar 14 00:13:21.830808 kernel: sd 0:0:0:1: Power-on or device reset occurred
Mar 14 00:13:21.831382 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Mar 14 00:13:21.831513 kernel: sd 0:0:0:1: [sda] Write Protect is off
Mar 14 00:13:21.831667 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Mar 14 00:13:21.832193 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 14 00:13:21.832533 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:21.836606 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:13:21.836645 kernel: GPT:17805311 != 80003071
Mar 14 00:13:21.836656 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:13:21.837830 kernel: GPT:17805311 != 80003071
Mar 14 00:13:21.837867 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:13:21.837878 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:13:21.838632 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Mar 14 00:13:21.873632 kernel: BTRFS: device fsid df62721e-ebc0-40bc-8956-1227b067a773 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (508)
Mar 14 00:13:21.879977 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 14 00:13:21.884674 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (524)
Mar 14 00:13:21.891996 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 14 00:13:21.892781 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 14 00:13:21.904939 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 14 00:13:21.912915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 14 00:13:21.925147 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:13:21.937313 disk-uuid[575]: Primary Header is updated.
Mar 14 00:13:21.937313 disk-uuid[575]: Secondary Entries is updated.
Mar 14 00:13:21.937313 disk-uuid[575]: Secondary Header is updated.
Mar 14 00:13:21.945615 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:13:21.951605 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:13:22.031618 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Mar 14 00:13:22.167907 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Mar 14 00:13:22.167987 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Mar 14 00:13:22.168291 kernel: usbcore: registered new interface driver usbhid
Mar 14 00:13:22.168315 kernel: usbhid: USB HID core driver
Mar 14 00:13:22.274643 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Mar 14 00:13:22.401659 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Mar 14 00:13:22.455633 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Mar 14 00:13:22.962530 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:13:22.964600 disk-uuid[576]: The operation has completed successfully.
Mar 14 00:13:23.010395 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:13:23.011351 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:13:23.032973 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:13:23.038462 sh[594]: Success
Mar 14 00:13:23.055637 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 14 00:13:23.114984 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:13:23.116697 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:13:23.120926 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:13:23.136939 kernel: BTRFS info (device dm-0): first mount of filesystem df62721e-ebc0-40bc-8956-1227b067a773
Mar 14 00:13:23.136996 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:23.137007 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:13:23.137995 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:13:23.138171 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:13:23.144596 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:13:23.146120 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:13:23.148122 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:13:23.156872 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:13:23.161842 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:13:23.174254 kernel: BTRFS info (device sda6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:23.174310 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:23.174326 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:13:23.181623 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:13:23.181682 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:13:23.197436 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:13:23.199620 kernel: BTRFS info (device sda6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:23.207450 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:13:23.216361 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:13:23.294689 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:13:23.304819 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:13:23.322464 ignition[703]: Ignition 2.19.0
Mar 14 00:13:23.322477 ignition[703]: Stage: fetch-offline
Mar 14 00:13:23.322529 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:23.322537 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:23.322766 ignition[703]: parsed url from cmdline: ""
Mar 14 00:13:23.322770 ignition[703]: no config URL provided
Mar 14 00:13:23.322775 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:13:23.322783 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:13:23.327766 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:13:23.322789 ignition[703]: failed to fetch config: resource requires networking
Mar 14 00:13:23.323048 ignition[703]: Ignition finished successfully
Mar 14 00:13:23.334118 systemd-networkd[781]: lo: Link UP
Mar 14 00:13:23.334132 systemd-networkd[781]: lo: Gained carrier
Mar 14 00:13:23.335954 systemd-networkd[781]: Enumeration completed
Mar 14 00:13:23.336139 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:13:23.337186 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:23.337189 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:13:23.338090 systemd[1]: Reached target network.target - Network.
Mar 14 00:13:23.339508 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:23.339511 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:13:23.340246 systemd-networkd[781]: eth0: Link UP
Mar 14 00:13:23.340249 systemd-networkd[781]: eth0: Gained carrier
Mar 14 00:13:23.340256 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:23.347005 systemd-networkd[781]: eth1: Link UP
Mar 14 00:13:23.347008 systemd-networkd[781]: eth1: Gained carrier
Mar 14 00:13:23.347018 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:23.347162 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:13:23.362465 ignition[784]: Ignition 2.19.0
Mar 14 00:13:23.362511 ignition[784]: Stage: fetch
Mar 14 00:13:23.362704 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:23.362714 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:23.362804 ignition[784]: parsed url from cmdline: ""
Mar 14 00:13:23.362807 ignition[784]: no config URL provided
Mar 14 00:13:23.362811 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:13:23.362818 ignition[784]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:13:23.362852 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 14 00:13:23.363485 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 14 00:13:23.384692 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 14 00:13:23.400746 systemd-networkd[781]: eth0: DHCPv4 address 159.69.119.127/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 14 00:13:23.564534 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 14 00:13:23.567585 ignition[784]: GET result: OK
Mar 14 00:13:23.567711 ignition[784]: parsing config with SHA512: d20f1274d54b5f4ed74b575bf554c767719695093b03cfb37080d299c2a0b97b30b485101bf29ecdaa32db9caeef6bfbe2952ef37a62e843e0c25a5caa8c29ce
Mar 14 00:13:23.572888 unknown[784]: fetched base config from "system"
Mar 14 00:13:23.572902 unknown[784]: fetched base config from "system"
Mar 14 00:13:23.572907 unknown[784]: fetched user config from "hetzner"
Mar 14 00:13:23.576260 ignition[784]: fetch: fetch complete
Mar 14 00:13:23.576276 ignition[784]: fetch: fetch passed
Mar 14 00:13:23.576348 ignition[784]: Ignition finished successfully
Mar 14 00:13:23.578373 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:13:23.585810 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:13:23.602495 ignition[791]: Ignition 2.19.0
Mar 14 00:13:23.602508 ignition[791]: Stage: kargs
Mar 14 00:13:23.602715 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:23.602725 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:23.606227 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:13:23.603932 ignition[791]: kargs: kargs passed
Mar 14 00:13:23.603994 ignition[791]: Ignition finished successfully
Mar 14 00:13:23.615967 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:13:23.631727 ignition[798]: Ignition 2.19.0
Mar 14 00:13:23.631738 ignition[798]: Stage: disks
Mar 14 00:13:23.632093 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:23.632104 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:23.633771 ignition[798]: disks: disks passed
Mar 14 00:13:23.636570 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:13:23.633835 ignition[798]: Ignition finished successfully
Mar 14 00:13:23.637918 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:13:23.639252 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:13:23.640830 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:13:23.641881 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:13:23.643016 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:13:23.656569 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:13:23.675191 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 14 00:13:23.680434 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:13:23.690766 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:13:23.743843 kernel: EXT4-fs (sda9): mounted filesystem af566013-4e57-4e7f-9689-a2e15898536d r/w with ordered data mode. Quota mode: none.
Mar 14 00:13:23.745088 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:13:23.747094 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:13:23.755774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:13:23.760491 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:13:23.764317 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 14 00:13:23.771159 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (814)
Mar 14 00:13:23.771186 kernel: BTRFS info (device sda6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:23.771198 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:23.771209 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:13:23.765179 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:13:23.765210 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:13:23.775162 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:13:23.783993 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:13:23.784080 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:13:23.786928 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:13:23.788929 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:13:23.835472 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:13:23.837758 coreos-metadata[816]: Mar 14 00:13:23.837 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 14 00:13:23.840417 coreos-metadata[816]: Mar 14 00:13:23.839 INFO Fetch successful
Mar 14 00:13:23.840417 coreos-metadata[816]: Mar 14 00:13:23.839 INFO wrote hostname ci-4081-3-6-n-0dd818c04e to /sysroot/etc/hostname
Mar 14 00:13:23.842922 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 14 00:13:23.847073 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:13:23.852261 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:13:23.857803 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:13:23.966356 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:13:23.978825 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:13:23.984967 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:13:23.993611 kernel: BTRFS info (device sda6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:24.017515 ignition[931]: INFO : Ignition 2.19.0
Mar 14 00:13:24.018013 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:13:24.020758 ignition[931]: INFO : Stage: mount
Mar 14 00:13:24.020758 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:24.020758 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:24.023541 ignition[931]: INFO : mount: mount passed
Mar 14 00:13:24.023541 ignition[931]: INFO : Ignition finished successfully
Mar 14 00:13:24.024687 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:13:24.029753 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:13:24.137481 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:13:24.141772 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:13:24.156645 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Mar 14 00:13:24.159064 kernel: BTRFS info (device sda6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:24.159123 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:24.159151 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:13:24.162616 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:13:24.162680 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:13:24.166226 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:13:24.192047 ignition[960]: INFO : Ignition 2.19.0
Mar 14 00:13:24.192047 ignition[960]: INFO : Stage: files
Mar 14 00:13:24.193822 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:24.193822 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:24.193822 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:13:24.198912 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:13:24.198912 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:13:24.201722 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:13:24.202866 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:13:24.203804 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:13:24.203634 unknown[960]: wrote ssh authorized keys file for user: core
Mar 14 00:13:24.207551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 14 00:13:24.207551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 14 00:13:24.207551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:13:24.207551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 14 00:13:24.306374 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 14 00:13:24.391513 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:13:24.392688 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:13:24.392688 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 14 00:13:24.623649 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Mar 14 00:13:24.704655 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:13:24.704655 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:13:24.704655 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:13:24.704655 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:13:24.704655 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:13:24.704655 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:13:24.704655 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:13:24.704655 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:13:24.704655 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:13:24.714675 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:13:24.714675 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:13:24.714675 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:13:24.714675 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:13:24.714675 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:13:24.714675 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 14 00:13:24.788353 systemd-networkd[781]: eth0: Gained IPv6LL
Mar 14 00:13:24.933827 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Mar 14 00:13:25.127153 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:13:25.127153 ignition[960]: INFO : files: op(d): [started] processing unit "containerd.service"
Mar 14 00:13:25.131416 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 14 00:13:25.131416 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 14 00:13:25.131416 ignition[960]: INFO : files: op(d): [finished] processing unit "containerd.service"
Mar 14 00:13:25.131416 ignition[960]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Mar 14 00:13:25.131416 ignition[960]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:13:25.131416 ignition[960]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:13:25.131416 ignition[960]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Mar 14 00:13:25.131416 ignition[960]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Mar 14 00:13:25.131416 ignition[960]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 14 00:13:25.131416 ignition[960]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 14 00:13:25.131416 ignition[960]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Mar 14 00:13:25.131416 ignition[960]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:13:25.153489 ignition[960]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:13:25.153489 ignition[960]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:13:25.153489 ignition[960]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:13:25.153489 ignition[960]: INFO : files: files passed
Mar 14 00:13:25.153489 ignition[960]: INFO : Ignition finished successfully
Mar 14 00:13:25.137707 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:13:25.143782 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:13:25.149703 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:13:25.151498 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:13:25.152652 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:13:25.176355 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:25.176355 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:25.179039 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:25.183624 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:13:25.185166 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:13:25.190939 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:13:25.236210 systemd-networkd[781]: eth1: Gained IPv6LL
Mar 14 00:13:25.238905 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:13:25.239096 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:13:25.241219 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:13:25.242396 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:13:25.243621 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:13:25.249860 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:13:25.262394 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:13:25.269736 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:13:25.294003 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:25.294739 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:25.295430 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:13:25.296461 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:13:25.296592 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:13:25.298200 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:13:25.298836 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:13:25.299787 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:13:25.300961 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:13:25.302157 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:13:25.303308 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:13:25.304481 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:13:25.305773 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:13:25.306914 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:13:25.308007 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:13:25.309089 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:13:25.309214 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:13:25.310820 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:25.311453 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:25.312497 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:13:25.315686 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:25.317562 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:13:25.317765 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:13:25.319752 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:13:25.319946 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:13:25.321420 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:13:25.321512 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:13:25.322674 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 14 00:13:25.322764 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 14 00:13:25.329883 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:13:25.334830 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:13:25.335476 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:13:25.335643 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:25.337121 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:13:25.337435 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:13:25.350442 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:13:25.350686 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:13:25.357184 ignition[1014]: INFO : Ignition 2.19.0
Mar 14 00:13:25.357184 ignition[1014]: INFO : Stage: umount
Mar 14 00:13:25.359255 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:25.359255 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:13:25.359255 ignition[1014]: INFO : umount: umount passed
Mar 14 00:13:25.359255 ignition[1014]: INFO : Ignition finished successfully
Mar 14 00:13:25.360880 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:13:25.361010 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:13:25.362294 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:13:25.362343 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:13:25.363689 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:13:25.363730 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:13:25.365304 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:13:25.365344 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:13:25.368228 systemd[1]: Stopped target network.target - Network.
Mar 14 00:13:25.369085 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:13:25.369141 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:13:25.370524 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:13:25.377766 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:13:25.379180 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:25.385969 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:13:25.387268 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:13:25.389723 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:13:25.389776 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:13:25.390888 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:13:25.390932 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:13:25.391834 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:13:25.391887 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:13:25.396445 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:13:25.396523 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:13:25.399097 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:13:25.403469 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:13:25.407056 systemd-networkd[781]: eth1: DHCPv6 lease lost
Mar 14 00:13:25.407178 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:13:25.413115 systemd-networkd[781]: eth0: DHCPv6 lease lost
Mar 14 00:13:25.413777 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:13:25.414853 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:13:25.418997 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:13:25.420835 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:13:25.422611 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:13:25.422677 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:13:25.430792 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:13:25.431324 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:13:25.431390 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:13:25.433007 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:13:25.433061 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:13:25.434148 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:13:25.434197 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:13:25.434928 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:13:25.434969 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:13:25.435822 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:13:25.437156 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:13:25.438915 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:13:25.448716 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:13:25.448880 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:13:25.457994 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:13:25.459371 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:13:25.463216 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:13:25.463489 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:13:25.466787 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:13:25.466896 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:13:25.468648 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:13:25.468757 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:13:25.469908 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:13:25.469964 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:13:25.471407 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:13:25.471452 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:13:25.473101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:13:25.473148 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:25.483968 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:13:25.485179 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:13:25.485275 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:13:25.489428 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 14 00:13:25.489495 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:13:25.492229 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:13:25.492283 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:25.494083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:13:25.494140 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:25.496140 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:13:25.496242 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:13:25.498008 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:13:25.508947 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:13:25.520979 systemd[1]: Switching root.
Mar 14 00:13:25.549658 systemd-journald[236]: Journal stopped
Mar 14 00:13:26.552466 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:13:26.552547 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:13:26.552560 kernel: SELinux: policy capability open_perms=1
Mar 14 00:13:26.552642 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:13:26.552655 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:13:26.552665 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:13:26.552674 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:13:26.552688 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:13:26.552698 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:13:26.552707 kernel: audit: type=1403 audit(1773447205.732:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:13:26.552718 systemd[1]: Successfully loaded SELinux policy in 36.500ms.
Mar 14 00:13:26.552739 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.576ms.
Mar 14 00:13:26.552751 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:13:26.552767 systemd[1]: Detected virtualization kvm.
Mar 14 00:13:26.552777 systemd[1]: Detected architecture arm64.
Mar 14 00:13:26.552805 systemd[1]: Detected first boot.
Mar 14 00:13:26.552821 systemd[1]: Hostname set to .
Mar 14 00:13:26.552831 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:13:26.552842 zram_generator::config[1077]: No configuration found.
Mar 14 00:13:26.552853 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:13:26.552863 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:13:26.552875 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 14 00:13:26.552886 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:13:26.552903 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:13:26.552916 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:13:26.552926 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:13:26.552937 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:13:26.552952 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:13:26.552963 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:13:26.552973 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:13:26.552983 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:26.552994 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:26.553006 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:13:26.553017 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:13:26.553028 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:13:26.553039 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:13:26.553050 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 14 00:13:26.553060 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:13:26.553071 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 14 00:13:26.553082 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:13:26.553094 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:13:26.553109 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:13:26.553119 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:13:26.553131 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 14 00:13:26.553141 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 14 00:13:26.553152 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 14 00:13:26.553163 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 14 00:13:26.553173 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:13:26.553186 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:13:26.553197 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:13:26.553207 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 14 00:13:26.553218 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 14 00:13:26.553229 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 14 00:13:26.553240 systemd[1]: Mounting media.mount - External Media Directory... Mar 14 00:13:26.553256 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 14 00:13:26.553268 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 14 00:13:26.553279 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 14 00:13:26.553290 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 14 00:13:26.553301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:13:26.553312 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:13:26.553322 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 14 00:13:26.553333 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:13:26.553345 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:13:26.553356 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:13:26.553367 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 14 00:13:26.553378 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:13:26.553390 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 14 00:13:26.553467 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 14 00:13:26.553484 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Mar 14 00:13:26.553495 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:13:26.553508 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:13:26.553520 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 14 00:13:26.553530 kernel: loop: module loaded Mar 14 00:13:26.553541 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Mar 14 00:13:26.553551 kernel: fuse: init (API version 7.39) Mar 14 00:13:26.553561 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:13:26.554071 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 14 00:13:26.554106 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 14 00:13:26.554119 systemd[1]: Mounted media.mount - External Media Directory. Mar 14 00:13:26.554135 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 14 00:13:26.554146 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 14 00:13:26.554157 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 14 00:13:26.554167 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:13:26.554178 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 14 00:13:26.554190 kernel: ACPI: bus type drm_connector registered Mar 14 00:13:26.554200 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 14 00:13:26.554212 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:13:26.554223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:13:26.554234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:13:26.554245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:13:26.554255 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 14 00:13:26.554266 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 14 00:13:26.554279 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:13:26.554289 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:13:26.554300 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:13:26.554311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Mar 14 00:13:26.554323 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:13:26.554336 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 14 00:13:26.554348 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 14 00:13:26.554359 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 14 00:13:26.554401 systemd-journald[1158]: Collecting audit messages is disabled. Mar 14 00:13:26.554433 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 14 00:13:26.554445 systemd-journald[1158]: Journal started Mar 14 00:13:26.554468 systemd-journald[1158]: Runtime Journal (/run/log/journal/c3c9dd3875734f5eb38a3033741c1f90) is 8.0M, max 76.6M, 68.6M free. Mar 14 00:13:26.559735 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 14 00:13:26.559820 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 14 00:13:26.570946 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 14 00:13:26.578771 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:13:26.587602 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 14 00:13:26.589662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:13:26.598210 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:13:26.612050 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:13:26.622174 systemd[1]: Started systemd-journald.service - Journal Service. 
Mar 14 00:13:26.624861 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 14 00:13:26.625755 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 14 00:13:26.627778 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 14 00:13:26.628891 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 14 00:13:26.656196 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 14 00:13:26.667822 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 14 00:13:26.683683 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:13:26.696866 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 14 00:13:26.698112 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:13:26.699348 systemd-journald[1158]: Time spent on flushing to /var/log/journal/c3c9dd3875734f5eb38a3033741c1f90 is 35.096ms for 1122 entries. Mar 14 00:13:26.699348 systemd-journald[1158]: System Journal (/var/log/journal/c3c9dd3875734f5eb38a3033741c1f90) is 8.0M, max 584.8M, 576.8M free. Mar 14 00:13:26.755495 systemd-journald[1158]: Received client request to flush runtime journal. Mar 14 00:13:26.702240 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Mar 14 00:13:26.702266 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Mar 14 00:13:26.714059 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:13:26.727836 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 14 00:13:26.735704 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Mar 14 00:13:26.760049 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 14 00:13:26.776405 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 14 00:13:26.784903 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:13:26.801113 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Mar 14 00:13:26.801461 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Mar 14 00:13:26.810025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:13:27.144087 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 14 00:13:27.152815 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:13:27.176726 systemd-udevd[1238]: Using default interface naming scheme 'v255'. Mar 14 00:13:27.198200 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:13:27.207768 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:13:27.228037 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:13:27.279755 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:13:27.288251 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Mar 14 00:13:27.382054 kernel: mousedev: PS/2 mouse device common for all mice Mar 14 00:13:27.401295 systemd-networkd[1242]: lo: Link UP Mar 14 00:13:27.401303 systemd-networkd[1242]: lo: Gained carrier Mar 14 00:13:27.403817 systemd-networkd[1242]: Enumeration completed Mar 14 00:13:27.403987 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:13:27.406640 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 14 00:13:27.406646 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:13:27.407666 systemd-networkd[1242]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:13:27.407672 systemd-networkd[1242]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:13:27.408331 systemd-networkd[1242]: eth0: Link UP Mar 14 00:13:27.408398 systemd-networkd[1242]: eth0: Gained carrier Mar 14 00:13:27.408451 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:13:27.425214 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:13:27.427948 systemd-networkd[1242]: eth1: Link UP Mar 14 00:13:27.428053 systemd-networkd[1242]: eth1: Gained carrier Mar 14 00:13:27.428120 systemd-networkd[1242]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:13:27.447457 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Mar 14 00:13:27.447639 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. Mar 14 00:13:27.447850 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:13:27.452745 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:13:27.462755 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:13:27.467745 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 14 00:13:27.468429 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 14 00:13:27.468468 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 14 00:13:27.470683 systemd-networkd[1242]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Mar 14 00:13:27.471270 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:13:27.472325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:13:27.477092 systemd-networkd[1242]: eth0: DHCPv4 address 159.69.119.127/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 14 00:13:27.480187 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:13:27.480369 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:13:27.487318 systemd-networkd[1242]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:13:27.488021 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:13:27.488510 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:13:27.489282 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:13:27.491726 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:13:27.492277 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Mar 14 00:13:27.516589 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1246) Mar 14 00:13:27.540647 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Mar 14 00:13:27.540742 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 14 00:13:27.540777 kernel: [drm] features: -context_init Mar 14 00:13:27.548610 kernel: [drm] number of scanouts: 1 Mar 14 00:13:27.548697 kernel: [drm] number of cap sets: 0 Mar 14 00:13:27.555991 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Mar 14 00:13:27.562480 kernel: Console: switching to colour frame buffer device 160x50 Mar 14 00:13:27.572599 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 14 00:13:27.590959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:13:27.603472 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 14 00:13:27.661497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:13:27.734277 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:13:27.749241 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:13:27.765275 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:13:27.793261 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:13:27.794829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:13:27.800861 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:13:27.807246 lvm[1309]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:13:27.836413 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Mar 14 00:13:27.838401 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 14 00:13:27.840199 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 14 00:13:27.840391 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:13:27.841681 systemd[1]: Reached target machines.target - Containers. Mar 14 00:13:27.843741 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 14 00:13:27.849850 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 14 00:13:27.854844 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 14 00:13:27.856880 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:13:27.859638 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:13:27.864442 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 14 00:13:27.867816 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 14 00:13:27.870546 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 14 00:13:27.889973 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:13:27.907624 kernel: loop0: detected capacity change from 0 to 114328 Mar 14 00:13:27.917640 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 14 00:13:27.920166 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Mar 14 00:13:27.937886 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 14 00:13:27.961720 kernel: loop1: detected capacity change from 0 to 209336 Mar 14 00:13:27.995656 kernel: loop2: detected capacity change from 0 to 8 Mar 14 00:13:28.026667 kernel: loop3: detected capacity change from 0 to 114432 Mar 14 00:13:28.061718 kernel: loop4: detected capacity change from 0 to 114328 Mar 14 00:13:28.085622 kernel: loop5: detected capacity change from 0 to 209336 Mar 14 00:13:28.099609 kernel: loop6: detected capacity change from 0 to 8 Mar 14 00:13:28.101594 kernel: loop7: detected capacity change from 0 to 114432 Mar 14 00:13:28.112238 (sd-merge)[1333]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Mar 14 00:13:28.112707 (sd-merge)[1333]: Merged extensions into '/usr'. Mar 14 00:13:28.118895 systemd[1]: Reloading requested from client PID 1317 ('systemd-sysext') (unit systemd-sysext.service)... Mar 14 00:13:28.118918 systemd[1]: Reloading... Mar 14 00:13:28.205611 zram_generator::config[1361]: No configuration found. Mar 14 00:13:28.332496 ldconfig[1313]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 14 00:13:28.337211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:13:28.394679 systemd[1]: Reloading finished in 275 ms. Mar 14 00:13:28.411450 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 14 00:13:28.414440 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 14 00:13:28.422906 systemd[1]: Starting ensure-sysext.service... Mar 14 00:13:28.427875 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Mar 14 00:13:28.433796 systemd[1]: Reloading requested from client PID 1405 ('systemctl') (unit ensure-sysext.service)... Mar 14 00:13:28.433836 systemd[1]: Reloading... Mar 14 00:13:28.464702 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 14 00:13:28.465008 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 14 00:13:28.465665 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 14 00:13:28.465904 systemd-tmpfiles[1406]: ACLs are not supported, ignoring. Mar 14 00:13:28.465952 systemd-tmpfiles[1406]: ACLs are not supported, ignoring. Mar 14 00:13:28.469354 systemd-tmpfiles[1406]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:13:28.469368 systemd-tmpfiles[1406]: Skipping /boot Mar 14 00:13:28.476507 systemd-tmpfiles[1406]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:13:28.476530 systemd-tmpfiles[1406]: Skipping /boot Mar 14 00:13:28.521105 zram_generator::config[1441]: No configuration found. Mar 14 00:13:28.617597 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:13:28.678746 systemd[1]: Reloading finished in 244 ms. Mar 14 00:13:28.701047 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:13:28.716815 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:13:28.725855 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 14 00:13:28.729364 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Mar 14 00:13:28.733895 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:13:28.743982 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:13:28.756723 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:13:28.758048 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:13:28.765673 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:13:28.776984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:13:28.781250 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:13:28.788869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:13:28.789049 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:13:28.790644 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:13:28.790826 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:13:28.794017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:13:28.794220 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:13:28.796322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:13:28.804928 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:13:28.808973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:13:28.810732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:13:28.814771 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Mar 14 00:13:28.823130 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:13:28.823306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:13:28.830786 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:13:28.834094 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:13:28.839264 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:13:28.847411 augenrules[1521]: No rules Mar 14 00:13:28.853482 systemd[1]: Finished ensure-sysext.service. Mar 14 00:13:28.855049 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:13:28.858687 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:13:28.863884 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:13:28.875820 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:13:28.879969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:13:28.883688 systemd-networkd[1242]: eth0: Gained IPv6LL Mar 14 00:13:28.887630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:13:28.888747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:13:28.900679 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 14 00:13:28.920427 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 14 00:13:28.921853 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:13:28.924716 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 14 00:13:28.926064 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Mar 14 00:13:28.926303 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:13:28.927341 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:13:28.927590 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:13:28.928532 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:13:28.928833 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:13:28.929857 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:13:28.930056 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:13:28.932941 systemd-resolved[1483]: Positive Trust Anchors: Mar 14 00:13:28.932960 systemd-resolved[1483]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:13:28.932993 systemd-resolved[1483]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:13:28.938570 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:13:28.940949 systemd-resolved[1483]: Using system hostname 'ci-4081-3-6-n-0dd818c04e'. Mar 14 00:13:28.942724 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Mar 14 00:13:28.942769 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:13:28.944884 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:13:28.946366 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:13:28.948113 systemd[1]: Reached target network.target - Network. Mar 14 00:13:28.948875 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:13:28.949683 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:13:28.990349 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 14 00:13:28.992366 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:13:28.994650 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:13:28.995519 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:13:28.996369 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:13:28.997285 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 14 00:13:28.997317 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:13:28.997898 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:13:28.998623 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:13:28.999289 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:13:29.000031 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:13:29.001653 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Mar 14 00:13:29.004040 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:13:29.006009 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:13:29.008986 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:13:29.009648 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:13:29.010268 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:13:29.011079 systemd[1]: System is tainted: cgroupsv1 Mar 14 00:13:29.011122 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:13:29.011143 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:13:29.015779 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:13:29.022891 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 14 00:13:29.024737 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:13:29.037766 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:13:29.042788 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:13:29.044825 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:13:29.050645 jq[1558]: false Mar 14 00:13:29.050837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:29.059866 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:13:29.067050 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:13:29.076717 systemd-networkd[1242]: eth1: Gained IPv6LL Mar 14 00:13:29.077086 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 14 00:13:29.080923 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Mar 14 00:13:29.082075 extend-filesystems[1561]: Found loop4 Mar 14 00:13:29.083586 extend-filesystems[1561]: Found loop5 Mar 14 00:13:29.083586 extend-filesystems[1561]: Found loop6 Mar 14 00:13:29.083586 extend-filesystems[1561]: Found loop7 Mar 14 00:13:29.083586 extend-filesystems[1561]: Found sda Mar 14 00:13:29.083586 extend-filesystems[1561]: Found sda1 Mar 14 00:13:29.083586 extend-filesystems[1561]: Found sda2 Mar 14 00:13:29.083586 extend-filesystems[1561]: Found sda3 Mar 14 00:13:29.083586 extend-filesystems[1561]: Found usr Mar 14 00:13:29.083586 extend-filesystems[1561]: Found sda4 Mar 14 00:13:29.083586 extend-filesystems[1561]: Found sda6 Mar 14 00:13:29.083586 extend-filesystems[1561]: Found sda7 Mar 14 00:13:29.083586 extend-filesystems[1561]: Found sda9 Mar 14 00:13:29.083586 extend-filesystems[1561]: Checking size of /dev/sda9 Mar 14 00:13:29.099177 coreos-metadata[1555]: Mar 14 00:13:29.087 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Mar 14 00:13:29.099177 coreos-metadata[1555]: Mar 14 00:13:29.090 INFO Fetch successful Mar 14 00:13:29.099177 coreos-metadata[1555]: Mar 14 00:13:29.090 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Mar 14 00:13:29.099177 coreos-metadata[1555]: Mar 14 00:13:29.091 INFO Fetch successful Mar 14 00:13:29.106918 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:13:29.111740 systemd-timesyncd[1536]: Contacted time server 91.98.23.146:123 (0.flatcar.pool.ntp.org). Mar 14 00:13:29.111870 systemd-timesyncd[1536]: Initial clock synchronization to Sat 2026-03-14 00:13:28.720096 UTC. Mar 14 00:13:29.117891 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Mar 14 00:13:29.124900 dbus-daemon[1556]: [system] SELinux support is enabled Mar 14 00:13:29.138952 extend-filesystems[1561]: Resized partition /dev/sda9 Mar 14 00:13:29.137801 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 14 00:13:29.139309 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:13:29.147130 extend-filesystems[1587]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:13:29.154825 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Mar 14 00:13:29.143772 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:13:29.152502 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 14 00:13:29.156333 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 14 00:13:29.166699 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:13:29.167173 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:13:29.171877 jq[1591]: true Mar 14 00:13:29.176420 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:13:29.176707 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:13:29.197823 update_engine[1590]: I20260314 00:13:29.197199 1590 main.cc:92] Flatcar Update Engine starting Mar 14 00:13:29.198399 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:13:29.198955 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 14 00:13:29.202836 update_engine[1590]: I20260314 00:13:29.202790 1590 update_check_scheduler.cc:74] Next update check in 4m9s Mar 14 00:13:29.248197 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 14 00:13:29.258253 tar[1600]: linux-arm64/LICENSE Mar 14 00:13:29.260990 tar[1600]: linux-arm64/helm Mar 14 00:13:29.266241 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:13:29.275023 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 14 00:13:29.275070 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:13:29.276146 (ntainerd)[1618]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:13:29.276494 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:13:29.276522 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 14 00:13:29.280595 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:13:29.283479 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:13:29.291515 jq[1604]: true Mar 14 00:13:29.398473 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1250) Mar 14 00:13:29.398627 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Mar 14 00:13:29.430119 extend-filesystems[1587]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 14 00:13:29.430119 extend-filesystems[1587]: old_desc_blocks = 1, new_desc_blocks = 5 Mar 14 00:13:29.430119 extend-filesystems[1587]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Mar 14 00:13:29.427284 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Mar 14 00:13:29.447136 extend-filesystems[1561]: Resized filesystem in /dev/sda9 Mar 14 00:13:29.447136 extend-filesystems[1561]: Found sr0 Mar 14 00:13:29.427531 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:13:29.433342 systemd-logind[1583]: New seat seat0. Mar 14 00:13:29.441034 systemd-logind[1583]: Watching system buttons on /dev/input/event0 (Power Button) Mar 14 00:13:29.441051 systemd-logind[1583]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Mar 14 00:13:29.445047 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:13:29.488083 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 14 00:13:29.489664 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:13:29.581833 bash[1656]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:13:29.587075 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:13:29.603051 systemd[1]: Starting sshkeys.service... Mar 14 00:13:29.623116 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 14 00:13:29.629944 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 14 00:13:29.681896 coreos-metadata[1660]: Mar 14 00:13:29.681 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 14 00:13:29.683290 coreos-metadata[1660]: Mar 14 00:13:29.683 INFO Fetch successful Mar 14 00:13:29.686474 unknown[1660]: wrote ssh authorized keys file for user: core Mar 14 00:13:29.727446 update-ssh-keys[1669]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:13:29.731022 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 14 00:13:29.742418 systemd[1]: Finished sshkeys.service. 
Mar 14 00:13:29.750924 containerd[1618]: time="2026-03-14T00:13:29.750825120Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:13:29.840359 containerd[1618]: time="2026-03-14T00:13:29.838916800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:29.846368 containerd[1618]: time="2026-03-14T00:13:29.846316600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:29.846516 containerd[1618]: time="2026-03-14T00:13:29.846499960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:13:29.846681 containerd[1618]: time="2026-03-14T00:13:29.846664520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:13:29.846999 containerd[1618]: time="2026-03-14T00:13:29.846976840Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:13:29.847332 containerd[1618]: time="2026-03-14T00:13:29.847314440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:29.847507 containerd[1618]: time="2026-03-14T00:13:29.847486440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:29.847905 containerd[1618]: time="2026-03-14T00:13:29.847874800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:13:29.848239 containerd[1618]: time="2026-03-14T00:13:29.848211400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:29.849193 containerd[1618]: time="2026-03-14T00:13:29.849170760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:29.849292 containerd[1618]: time="2026-03-14T00:13:29.849275800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:29.849345 containerd[1618]: time="2026-03-14T00:13:29.849333280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:29.849494 containerd[1618]: time="2026-03-14T00:13:29.849476440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:29.849873 containerd[1618]: time="2026-03-14T00:13:29.849847480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:29.851382 containerd[1618]: time="2026-03-14T00:13:29.851354960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:29.852069 containerd[1618]: time="2026-03-14T00:13:29.851772440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 14 00:13:29.852069 containerd[1618]: time="2026-03-14T00:13:29.851882600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:13:29.852069 containerd[1618]: time="2026-03-14T00:13:29.851926600Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:13:29.859402 containerd[1618]: time="2026-03-14T00:13:29.859132000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:13:29.859402 containerd[1618]: time="2026-03-14T00:13:29.859199520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:13:29.859402 containerd[1618]: time="2026-03-14T00:13:29.859225680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:13:29.859402 containerd[1618]: time="2026-03-14T00:13:29.859241400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:13:29.859402 containerd[1618]: time="2026-03-14T00:13:29.859255240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 14 00:13:29.859722 containerd[1618]: time="2026-03-14T00:13:29.859697600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860193400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860325960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860344400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860358400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860371840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860386320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860399680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860412960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860428080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860440520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860453320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860465160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860486560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 14 00:13:29.862549 containerd[1618]: time="2026-03-14T00:13:29.860501000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860514480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860527920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860539480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860554760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860566080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860610120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860625080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860647960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860661000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860675480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860688440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860703800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860725760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860738280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.862922 containerd[1618]: time="2026-03-14T00:13:29.860799520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:13:29.863187 containerd[1618]: time="2026-03-14T00:13:29.860928320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:13:29.863187 containerd[1618]: time="2026-03-14T00:13:29.860946920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:13:29.863187 containerd[1618]: time="2026-03-14T00:13:29.860958080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:13:29.863187 containerd[1618]: time="2026-03-14T00:13:29.860970840Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:13:29.863187 containerd[1618]: time="2026-03-14T00:13:29.860980560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 14 00:13:29.863187 containerd[1618]: time="2026-03-14T00:13:29.860994440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:13:29.863187 containerd[1618]: time="2026-03-14T00:13:29.861004800Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:13:29.863187 containerd[1618]: time="2026-03-14T00:13:29.861016040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 14 00:13:29.863351 containerd[1618]: time="2026-03-14T00:13:29.861385320Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:13:29.863351 containerd[1618]: time="2026-03-14T00:13:29.861447680Z" level=info msg="Connect containerd service" Mar 14 00:13:29.863351 containerd[1618]: time="2026-03-14T00:13:29.861544760Z" level=info msg="using legacy CRI server" Mar 14 00:13:29.863351 containerd[1618]: time="2026-03-14T00:13:29.861551840Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:13:29.864887 locksmithd[1624]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:13:29.865676 containerd[1618]: time="2026-03-14T00:13:29.865644880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:13:29.867753 containerd[1618]: time="2026-03-14T00:13:29.867703720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network 
for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:13:29.869183 containerd[1618]: time="2026-03-14T00:13:29.868451120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:13:29.869688 containerd[1618]: time="2026-03-14T00:13:29.869666360Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:13:29.869978 containerd[1618]: time="2026-03-14T00:13:29.869917720Z" level=info msg="Start subscribing containerd event" Mar 14 00:13:29.870018 containerd[1618]: time="2026-03-14T00:13:29.869988120Z" level=info msg="Start recovering state" Mar 14 00:13:29.873315 containerd[1618]: time="2026-03-14T00:13:29.870856840Z" level=info msg="Start event monitor" Mar 14 00:13:29.873315 containerd[1618]: time="2026-03-14T00:13:29.870885680Z" level=info msg="Start snapshots syncer" Mar 14 00:13:29.873315 containerd[1618]: time="2026-03-14T00:13:29.870899240Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:13:29.873315 containerd[1618]: time="2026-03-14T00:13:29.870907560Z" level=info msg="Start streaming server" Mar 14 00:13:29.873315 containerd[1618]: time="2026-03-14T00:13:29.871089120Z" level=info msg="containerd successfully booted in 0.129909s" Mar 14 00:13:29.871206 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:13:30.156643 tar[1600]: linux-arm64/README.md Mar 14 00:13:30.176259 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 14 00:13:30.324885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:13:30.327063 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:13:30.796905 sshd_keygen[1616]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:13:30.825545 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:13:30.836676 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:13:30.844385 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:13:30.844651 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:13:30.854124 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:13:30.863443 kubelet[1693]: E0314 00:13:30.863385 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:13:30.868989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:13:30.869270 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:13:30.871292 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:13:30.882193 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:13:30.885129 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 14 00:13:30.886371 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:13:30.887461 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:13:30.889899 systemd[1]: Startup finished in 5.868s (kernel) + 5.193s (userspace) = 11.061s. Mar 14 00:13:41.119956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 14 00:13:41.128955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:41.257831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:13:41.273275 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:13:41.323821 kubelet[1738]: E0314 00:13:41.323734 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:13:41.329873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:13:41.330114 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:13:51.581076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 14 00:13:51.596910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:51.733794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:13:51.738558 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:13:51.779403 kubelet[1758]: E0314 00:13:51.779286 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:13:51.781902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:13:51.782230 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 14 00:14:01.997954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 14 00:14:02.004838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:02.145845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:02.162185 (kubelet)[1778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:02.206670 kubelet[1778]: E0314 00:14:02.206559 1778 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:02.212985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:02.213235 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:11.225131 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 14 00:14:11.232095 systemd[1]: Started sshd@0-159.69.119.127:22-68.220.241.50:60436.service - OpenSSH per-connection server daemon (68.220.241.50:60436). Mar 14 00:14:11.830608 sshd[1786]: Accepted publickey for core from 68.220.241.50 port 60436 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:14:11.832999 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:11.846759 systemd-logind[1583]: New session 1 of user core. Mar 14 00:14:11.847272 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:14:11.855021 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:14:11.869438 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Mar 14 00:14:11.877290 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:14:11.890228 (systemd)[1792]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:14:11.997107 systemd[1792]: Queued start job for default target default.target. Mar 14 00:14:11.997825 systemd[1792]: Created slice app.slice - User Application Slice. Mar 14 00:14:11.997858 systemd[1792]: Reached target paths.target - Paths. Mar 14 00:14:11.997871 systemd[1792]: Reached target timers.target - Timers. Mar 14 00:14:12.004820 systemd[1792]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:14:12.014001 systemd[1792]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:14:12.014068 systemd[1792]: Reached target sockets.target - Sockets. Mar 14 00:14:12.014083 systemd[1792]: Reached target basic.target - Basic System. Mar 14 00:14:12.014128 systemd[1792]: Reached target default.target - Main User Target. Mar 14 00:14:12.014156 systemd[1792]: Startup finished in 117ms. Mar 14 00:14:12.014274 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:14:12.020914 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:14:12.247686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 14 00:14:12.258189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:12.404793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:12.410997 (kubelet)[1816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:12.450872 systemd[1]: Started sshd@1-159.69.119.127:22-68.220.241.50:49664.service - OpenSSH per-connection server daemon (68.220.241.50:49664). 
Mar 14 00:14:12.468278 kubelet[1816]: E0314 00:14:12.468232 1816 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:14:12.473142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:14:12.473338 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:14:13.054391 sshd[1823]: Accepted publickey for core from 68.220.241.50 port 49664 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:14:13.055885 sshd[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:13.062434 systemd-logind[1583]: New session 2 of user core.
Mar 14 00:14:13.068181 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 14 00:14:13.472925 sshd[1823]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:13.478134 systemd[1]: sshd@1-159.69.119.127:22-68.220.241.50:49664.service: Deactivated successfully.
Mar 14 00:14:13.482318 systemd[1]: session-2.scope: Deactivated successfully.
Mar 14 00:14:13.483379 systemd-logind[1583]: Session 2 logged out. Waiting for processes to exit.
Mar 14 00:14:13.484494 systemd-logind[1583]: Removed session 2.
Mar 14 00:14:13.582073 systemd[1]: Started sshd@2-159.69.119.127:22-68.220.241.50:49680.service - OpenSSH per-connection server daemon (68.220.241.50:49680).
Mar 14 00:14:14.171623 sshd[1833]: Accepted publickey for core from 68.220.241.50 port 49680 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:14:14.172811 sshd[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:14.178181 systemd-logind[1583]: New session 3 of user core.
Mar 14 00:14:14.185817 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 14 00:14:14.585936 sshd[1833]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:14.592282 systemd[1]: sshd@2-159.69.119.127:22-68.220.241.50:49680.service: Deactivated successfully.
Mar 14 00:14:14.596263 systemd[1]: session-3.scope: Deactivated successfully.
Mar 14 00:14:14.597173 systemd-logind[1583]: Session 3 logged out. Waiting for processes to exit.
Mar 14 00:14:14.598158 systemd-logind[1583]: Removed session 3.
Mar 14 00:14:14.684988 systemd[1]: Started sshd@3-159.69.119.127:22-68.220.241.50:49692.service - OpenSSH per-connection server daemon (68.220.241.50:49692).
Mar 14 00:14:14.829712 update_engine[1590]: I20260314 00:14:14.828940 1590 update_attempter.cc:509] Updating boot flags...
Mar 14 00:14:14.898633 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1852)
Mar 14 00:14:14.956427 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1851)
Mar 14 00:14:15.278704 sshd[1841]: Accepted publickey for core from 68.220.241.50 port 49692 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:14:15.280771 sshd[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:15.286755 systemd-logind[1583]: New session 4 of user core.
Mar 14 00:14:15.293173 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 14 00:14:15.698336 sshd[1841]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:15.702968 systemd[1]: sshd@3-159.69.119.127:22-68.220.241.50:49692.service: Deactivated successfully.
Mar 14 00:14:15.706803 systemd-logind[1583]: Session 4 logged out. Waiting for processes to exit.
Mar 14 00:14:15.707358 systemd[1]: session-4.scope: Deactivated successfully.
Mar 14 00:14:15.708491 systemd-logind[1583]: Removed session 4.
Mar 14 00:14:15.800041 systemd[1]: Started sshd@4-159.69.119.127:22-68.220.241.50:49704.service - OpenSSH per-connection server daemon (68.220.241.50:49704).
Mar 14 00:14:16.389109 sshd[1867]: Accepted publickey for core from 68.220.241.50 port 49704 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:14:16.392127 sshd[1867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:16.398127 systemd-logind[1583]: New session 5 of user core.
Mar 14 00:14:16.405203 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 14 00:14:16.726622 sudo[1871]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 14 00:14:16.726949 sudo[1871]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:16.744157 sudo[1871]: pam_unix(sudo:session): session closed for user root
Mar 14 00:14:16.840100 sshd[1867]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:16.847506 systemd[1]: sshd@4-159.69.119.127:22-68.220.241.50:49704.service: Deactivated successfully.
Mar 14 00:14:16.851028 systemd[1]: session-5.scope: Deactivated successfully.
Mar 14 00:14:16.852319 systemd-logind[1583]: Session 5 logged out. Waiting for processes to exit.
Mar 14 00:14:16.853710 systemd-logind[1583]: Removed session 5.
Mar 14 00:14:16.941004 systemd[1]: Started sshd@5-159.69.119.127:22-68.220.241.50:49706.service - OpenSSH per-connection server daemon (68.220.241.50:49706).
Mar 14 00:14:17.527483 sshd[1876]: Accepted publickey for core from 68.220.241.50 port 49706 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:14:17.529857 sshd[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:17.536204 systemd-logind[1583]: New session 6 of user core.
Mar 14 00:14:17.546087 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 14 00:14:17.856258 sudo[1881]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 14 00:14:17.856983 sudo[1881]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:17.860891 sudo[1881]: pam_unix(sudo:session): session closed for user root
Mar 14 00:14:17.867841 sudo[1880]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 14 00:14:17.868342 sudo[1880]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:17.884962 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 14 00:14:17.896767 auditctl[1884]: No rules
Mar 14 00:14:17.897536 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 14 00:14:17.897949 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 14 00:14:17.904929 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:14:17.934819 augenrules[1903]: No rules
Mar 14 00:14:17.937940 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:14:17.939475 sudo[1880]: pam_unix(sudo:session): session closed for user root
Mar 14 00:14:18.035920 sshd[1876]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:18.040441 systemd-logind[1583]: Session 6 logged out. Waiting for processes to exit.
Mar 14 00:14:18.041200 systemd[1]: sshd@5-159.69.119.127:22-68.220.241.50:49706.service: Deactivated successfully.
Mar 14 00:14:18.044688 systemd[1]: session-6.scope: Deactivated successfully.
Mar 14 00:14:18.045833 systemd-logind[1583]: Removed session 6.
Mar 14 00:14:18.139422 systemd[1]: Started sshd@6-159.69.119.127:22-68.220.241.50:49718.service - OpenSSH per-connection server daemon (68.220.241.50:49718).
Mar 14 00:14:18.724762 sshd[1912]: Accepted publickey for core from 68.220.241.50 port 49718 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:14:18.727626 sshd[1912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:18.733139 systemd-logind[1583]: New session 7 of user core.
Mar 14 00:14:18.739119 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 14 00:14:19.048302 sudo[1916]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 14 00:14:19.048623 sudo[1916]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:19.345995 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 14 00:14:19.358406 (dockerd)[1931]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 14 00:14:19.617595 dockerd[1931]: time="2026-03-14T00:14:19.616689196Z" level=info msg="Starting up"
Mar 14 00:14:19.725445 dockerd[1931]: time="2026-03-14T00:14:19.725394833Z" level=info msg="Loading containers: start."
Mar 14 00:14:19.833792 kernel: Initializing XFRM netlink socket
Mar 14 00:14:19.914801 systemd-networkd[1242]: docker0: Link UP
Mar 14 00:14:19.931024 dockerd[1931]: time="2026-03-14T00:14:19.930916814Z" level=info msg="Loading containers: done."
Mar 14 00:14:19.952274 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2448038063-merged.mount: Deactivated successfully.
Mar 14 00:14:19.956243 dockerd[1931]: time="2026-03-14T00:14:19.955821949Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:14:19.956243 dockerd[1931]: time="2026-03-14T00:14:19.955934328Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:14:19.956243 dockerd[1931]: time="2026-03-14T00:14:19.956045787Z" level=info msg="Daemon has completed initialization"
Mar 14 00:14:19.999625 dockerd[1931]: time="2026-03-14T00:14:19.998881476Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:14:19.999882 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:14:20.513387 containerd[1618]: time="2026-03-14T00:14:20.513270063Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 14 00:14:21.085025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4185804046.mount: Deactivated successfully.
Mar 14 00:14:22.012239 containerd[1618]: time="2026-03-14T00:14:22.011892225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:22.013887 containerd[1618]: time="2026-03-14T00:14:22.013847594Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390272"
Mar 14 00:14:22.015636 containerd[1618]: time="2026-03-14T00:14:22.014628429Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:22.018233 containerd[1618]: time="2026-03-14T00:14:22.018162550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:22.021635 containerd[1618]: time="2026-03-14T00:14:22.020915237Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 1.507573962s"
Mar 14 00:14:22.021635 containerd[1618]: time="2026-03-14T00:14:22.020993568Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 14 00:14:22.021890 containerd[1618]: time="2026-03-14T00:14:22.021826051Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 14 00:14:22.496708 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 14 00:14:22.506008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:22.673843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:22.685314 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:14:22.728068 kubelet[2138]: E0314 00:14:22.727951 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:14:22.730628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:14:22.730881 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:14:23.094436 containerd[1618]: time="2026-03-14T00:14:23.094350648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:23.096872 containerd[1618]: time="2026-03-14T00:14:23.096795433Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552126"
Mar 14 00:14:23.098422 containerd[1618]: time="2026-03-14T00:14:23.097647753Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:23.101317 containerd[1618]: time="2026-03-14T00:14:23.101271745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:23.102683 containerd[1618]: time="2026-03-14T00:14:23.102607014Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.080727195s"
Mar 14 00:14:23.102683 containerd[1618]: time="2026-03-14T00:14:23.102678624Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 14 00:14:23.103154 containerd[1618]: time="2026-03-14T00:14:23.103119886Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 14 00:14:24.047457 containerd[1618]: time="2026-03-14T00:14:24.047384046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:24.047960 containerd[1618]: time="2026-03-14T00:14:24.047918078Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301325"
Mar 14 00:14:24.049640 containerd[1618]: time="2026-03-14T00:14:24.049603146Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:24.053358 containerd[1618]: time="2026-03-14T00:14:24.053285524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:24.054803 containerd[1618]: time="2026-03-14T00:14:24.054768565Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 950.753632ms"
Mar 14 00:14:24.054908 containerd[1618]: time="2026-03-14T00:14:24.054892821Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 14 00:14:24.055881 containerd[1618]: time="2026-03-14T00:14:24.055737896Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 14 00:14:24.968258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3910228669.mount: Deactivated successfully.
Mar 14 00:14:25.340393 containerd[1618]: time="2026-03-14T00:14:25.340322620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:25.342207 containerd[1618]: time="2026-03-14T00:14:25.342157098Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148896"
Mar 14 00:14:25.342896 containerd[1618]: time="2026-03-14T00:14:25.342865510Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:25.350186 containerd[1618]: time="2026-03-14T00:14:25.349290303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:25.351047 containerd[1618]: time="2026-03-14T00:14:25.351014807Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.295246507s"
Mar 14 00:14:25.351133 containerd[1618]: time="2026-03-14T00:14:25.351051612Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 14 00:14:25.351684 containerd[1618]: time="2026-03-14T00:14:25.351655970Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 14 00:14:25.839791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326413539.mount: Deactivated successfully.
Mar 14 00:14:26.630983 containerd[1618]: time="2026-03-14T00:14:26.630921006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:26.632691 containerd[1618]: time="2026-03-14T00:14:26.632541608Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Mar 14 00:14:26.634609 containerd[1618]: time="2026-03-14T00:14:26.633774961Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:26.640066 containerd[1618]: time="2026-03-14T00:14:26.639992735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:26.642598 containerd[1618]: time="2026-03-14T00:14:26.642119440Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.290427665s"
Mar 14 00:14:26.642598 containerd[1618]: time="2026-03-14T00:14:26.642169246Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 14 00:14:26.643776 containerd[1618]: time="2026-03-14T00:14:26.643196134Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 14 00:14:27.099507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3223540580.mount: Deactivated successfully.
Mar 14 00:14:27.107623 containerd[1618]: time="2026-03-14T00:14:27.107510329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:27.109255 containerd[1618]: time="2026-03-14T00:14:27.109186730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Mar 14 00:14:27.110611 containerd[1618]: time="2026-03-14T00:14:27.110559614Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:27.113422 containerd[1618]: time="2026-03-14T00:14:27.113365589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:27.115107 containerd[1618]: time="2026-03-14T00:14:27.114978142Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 471.724081ms"
Mar 14 00:14:27.115107 containerd[1618]: time="2026-03-14T00:14:27.115034189Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 14 00:14:27.116038 containerd[1618]: time="2026-03-14T00:14:27.115924695Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 14 00:14:27.635563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152747357.mount: Deactivated successfully.
Mar 14 00:14:28.451617 containerd[1618]: time="2026-03-14T00:14:28.450302641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:28.452363 containerd[1618]: time="2026-03-14T00:14:28.452300271Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885878"
Mar 14 00:14:28.453066 containerd[1618]: time="2026-03-14T00:14:28.452857175Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:28.457571 containerd[1618]: time="2026-03-14T00:14:28.457503389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:28.459532 containerd[1618]: time="2026-03-14T00:14:28.459345001Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.34337314s"
Mar 14 00:14:28.459532 containerd[1618]: time="2026-03-14T00:14:28.459384606Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 14 00:14:32.557750 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:32.565150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:32.605632 systemd[1]: Reloading requested from client PID 2311 ('systemctl') (unit session-7.scope)...
Mar 14 00:14:32.605650 systemd[1]: Reloading...
Mar 14 00:14:32.719615 zram_generator::config[2353]: No configuration found.
Mar 14 00:14:32.830847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:14:32.903691 systemd[1]: Reloading finished in 297 ms.
Mar 14 00:14:32.956184 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 14 00:14:32.956260 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 14 00:14:32.956635 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:32.965601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:33.110791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:33.123409 (kubelet)[2411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:14:33.170626 kubelet[2411]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:14:33.170626 kubelet[2411]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:14:33.170626 kubelet[2411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:14:33.170626 kubelet[2411]: I0314 00:14:33.169221 2411 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:14:34.438607 kubelet[2411]: I0314 00:14:34.437298 2411 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 14 00:14:34.438607 kubelet[2411]: I0314 00:14:34.437336 2411 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:14:34.438607 kubelet[2411]: I0314 00:14:34.437701 2411 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:14:34.473372 kubelet[2411]: E0314 00:14:34.473302 2411 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://159.69.119.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 159.69.119.127:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:14:34.474879 kubelet[2411]: I0314 00:14:34.474843 2411 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:14:34.484392 kubelet[2411]: E0314 00:14:34.484312 2411 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:14:34.484392 kubelet[2411]: I0314 00:14:34.484386 2411 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:14:34.488633 kubelet[2411]: I0314 00:14:34.488388 2411 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 14 00:14:34.490751 kubelet[2411]: I0314 00:14:34.490686 2411 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:14:34.490961 kubelet[2411]: I0314 00:14:34.490733 2411 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-0dd818c04e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 14 00:14:34.490961 kubelet[2411]: I0314 00:14:34.490952 2411 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:14:34.490961 kubelet[2411]: I0314 00:14:34.490961 2411 container_manager_linux.go:303] "Creating device plugin manager"
Mar 14 00:14:34.491213 kubelet[2411]: I0314 00:14:34.491180 2411 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:14:34.495081 kubelet[2411]: I0314 00:14:34.494929 2411 kubelet.go:480] "Attempting to sync node with API server"
Mar 14 00:14:34.495081 kubelet[2411]: I0314 00:14:34.494955 2411 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:14:34.495081 kubelet[2411]: I0314 00:14:34.495010 2411 kubelet.go:386] "Adding apiserver pod source"
Mar 14 00:14:34.496711 kubelet[2411]: I0314 00:14:34.496343 2411 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:14:34.501961 kubelet[2411]: I0314 00:14:34.501937 2411 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:14:34.502869 kubelet[2411]: I0314 00:14:34.502851 2411 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:14:34.503117 kubelet[2411]: W0314 00:14:34.503103 2411 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 00:14:34.506632 kubelet[2411]: E0314 00:14:34.506264 2411 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://159.69.119.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-0dd818c04e&limit=500&resourceVersion=0\": dial tcp 159.69.119.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:14:34.508316 kubelet[2411]: I0314 00:14:34.508300 2411 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 14 00:14:34.508439 kubelet[2411]: I0314 00:14:34.508429 2411 server.go:1289] "Started kubelet"
Mar 14 00:14:34.509262 kubelet[2411]: E0314 00:14:34.509227 2411 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://159.69.119.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.69.119.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:14:34.509346 kubelet[2411]: I0314 00:14:34.509284 2411 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:14:34.512601 kubelet[2411]: I0314 00:14:34.512240 2411 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:14:34.512753 kubelet[2411]: I0314 00:14:34.512739 2411 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:14:34.513043 kubelet[2411]: I0314 00:14:34.513016 2411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:14:34.517200 kubelet[2411]: I0314 00:14:34.517179 2411 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:14:34.519331 kubelet[2411]: E0314 00:14:34.517961 2411 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.69.119.127:6443/api/v1/namespaces/default/events\": dial tcp 159.69.119.127:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-0dd818c04e.189c8ce668f53201 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-0dd818c04e,UID:ci-4081-3-6-n-0dd818c04e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-0dd818c04e,},FirstTimestamp:2026-03-14 00:14:34.508399105 +0000 UTC m=+1.378509229,LastTimestamp:2026-03-14 00:14:34.508399105 +0000 UTC m=+1.378509229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-0dd818c04e,}"
Mar 14 00:14:34.520923 kubelet[2411]: I0314 00:14:34.520315 2411 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 14 00:14:34.521056 kubelet[2411]: E0314 00:14:34.520594 2411 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-0dd818c04e\" not found"
Mar 14 00:14:34.521411 kubelet[2411]: I0314 00:14:34.521392 2411 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 14 00:14:34.521544 kubelet[2411]: I0314 00:14:34.521534 2411 reconciler.go:26] "Reconciler: start to sync state"
Mar 14 00:14:34.522051 kubelet[2411]: I0314 00:14:34.522025 2411 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:14:34.524427 kubelet[2411]: E0314 00:14:34.523059 2411 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://159.69.119.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.69.119.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:14:34.524427 kubelet[2411]: E0314 00:14:34.523144 2411 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.119.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-0dd818c04e?timeout=10s\": dial tcp 159.69.119.127:6443: connect: connection refused" interval="200ms" Mar 14 00:14:34.526083 kubelet[2411]: I0314 00:14:34.526058 2411 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:14:34.526270 kubelet[2411]: I0314 00:14:34.526251 2411 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:14:34.528354 kubelet[2411]: I0314 00:14:34.528332 2411 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:14:34.536659 kubelet[2411]: I0314 00:14:34.536619 2411 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:14:34.537747 kubelet[2411]: I0314 00:14:34.537729 2411 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 14 00:14:34.537849 kubelet[2411]: I0314 00:14:34.537840 2411 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:14:34.537917 kubelet[2411]: I0314 00:14:34.537908 2411 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:14:34.537965 kubelet[2411]: I0314 00:14:34.537958 2411 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:14:34.538116 kubelet[2411]: E0314 00:14:34.538096 2411 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:14:34.540325 kubelet[2411]: E0314 00:14:34.540304 2411 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:14:34.542249 kubelet[2411]: E0314 00:14:34.542222 2411 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://159.69.119.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.69.119.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:14:34.552943 kubelet[2411]: I0314 00:14:34.552922 2411 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:14:34.553139 kubelet[2411]: I0314 00:14:34.553128 2411 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:14:34.553208 kubelet[2411]: I0314 00:14:34.553201 2411 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:14:34.555317 kubelet[2411]: I0314 00:14:34.555296 2411 policy_none.go:49] "None policy: Start" Mar 14 00:14:34.555435 kubelet[2411]: I0314 00:14:34.555422 2411 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:14:34.555490 kubelet[2411]: I0314 00:14:34.555483 2411 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:14:34.562804 kubelet[2411]: E0314 00:14:34.562771 2411 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:14:34.563168 kubelet[2411]: I0314 00:14:34.563150 2411 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:14:34.563606 kubelet[2411]: I0314 00:14:34.563257 2411 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:14:34.564914 kubelet[2411]: I0314 00:14:34.564893 2411 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:14:34.565763 kubelet[2411]: E0314 00:14:34.565744 2411 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. 
Ignoring." err="no imagefs label for configured runtime" Mar 14 00:14:34.565892 kubelet[2411]: E0314 00:14:34.565878 2411 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-0dd818c04e\" not found" Mar 14 00:14:34.653606 kubelet[2411]: E0314 00:14:34.653545 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0dd818c04e\" not found" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.656916 kubelet[2411]: E0314 00:14:34.656893 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0dd818c04e\" not found" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.662252 kubelet[2411]: E0314 00:14:34.662228 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0dd818c04e\" not found" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.666383 kubelet[2411]: I0314 00:14:34.666355 2411 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.666918 kubelet[2411]: E0314 00:14:34.666863 2411 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.119.127:6443/api/v1/nodes\": dial tcp 159.69.119.127:6443: connect: connection refused" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.723110 kubelet[2411]: I0314 00:14:34.722847 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2fe036f065d721bd062b61321e65680-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-0dd818c04e\" (UID: \"d2fe036f065d721bd062b61321e65680\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.723110 kubelet[2411]: I0314 00:14:34.722941 2411 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2fe036f065d721bd062b61321e65680-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-0dd818c04e\" (UID: \"d2fe036f065d721bd062b61321e65680\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.723110 kubelet[2411]: I0314 00:14:34.723046 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd8fb36db7642ca8518e1dd1706010d6-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-0dd818c04e\" (UID: \"fd8fb36db7642ca8518e1dd1706010d6\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.723460 kubelet[2411]: I0314 00:14:34.723128 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2353993256d95befefbbc352f9e7ce0-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-0dd818c04e\" (UID: \"a2353993256d95befefbbc352f9e7ce0\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.723460 kubelet[2411]: I0314 00:14:34.723189 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2353993256d95befefbbc352f9e7ce0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-0dd818c04e\" (UID: \"a2353993256d95befefbbc352f9e7ce0\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.723460 kubelet[2411]: I0314 00:14:34.723235 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2fe036f065d721bd062b61321e65680-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-0dd818c04e\" (UID: \"d2fe036f065d721bd062b61321e65680\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.723460 kubelet[2411]: I0314 00:14:34.723288 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2353993256d95befefbbc352f9e7ce0-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-0dd818c04e\" (UID: \"a2353993256d95befefbbc352f9e7ce0\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.723460 kubelet[2411]: I0314 00:14:34.723339 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2fe036f065d721bd062b61321e65680-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-0dd818c04e\" (UID: \"d2fe036f065d721bd062b61321e65680\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.723880 kubelet[2411]: I0314 00:14:34.723403 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d2fe036f065d721bd062b61321e65680-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-0dd818c04e\" (UID: \"d2fe036f065d721bd062b61321e65680\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.726212 kubelet[2411]: E0314 00:14:34.726105 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.119.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-0dd818c04e?timeout=10s\": dial tcp 159.69.119.127:6443: connect: connection refused" interval="400ms" Mar 14 00:14:34.870400 kubelet[2411]: I0314 00:14:34.869907 2411 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.870400 kubelet[2411]: E0314 00:14:34.870356 2411 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://159.69.119.127:6443/api/v1/nodes\": dial tcp 159.69.119.127:6443: connect: connection refused" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:34.957509 containerd[1618]: time="2026-03-14T00:14:34.957436931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-0dd818c04e,Uid:d2fe036f065d721bd062b61321e65680,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:34.960903 containerd[1618]: time="2026-03-14T00:14:34.960769360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-0dd818c04e,Uid:fd8fb36db7642ca8518e1dd1706010d6,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:34.963707 containerd[1618]: time="2026-03-14T00:14:34.963676910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-0dd818c04e,Uid:a2353993256d95befefbbc352f9e7ce0,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:35.127252 kubelet[2411]: E0314 00:14:35.127180 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.119.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-0dd818c04e?timeout=10s\": dial tcp 159.69.119.127:6443: connect: connection refused" interval="800ms" Mar 14 00:14:35.273370 kubelet[2411]: I0314 00:14:35.273023 2411 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:35.273658 kubelet[2411]: E0314 00:14:35.273632 2411 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.119.127:6443/api/v1/nodes\": dial tcp 159.69.119.127:6443: connect: connection refused" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:35.322567 kubelet[2411]: E0314 00:14:35.322485 2411 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://159.69.119.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-0dd818c04e&limit=500&resourceVersion=0\": dial tcp 159.69.119.127:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:14:35.433290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2210857179.mount: Deactivated successfully. Mar 14 00:14:35.439913 containerd[1618]: time="2026-03-14T00:14:35.439839892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:35.441836 containerd[1618]: time="2026-03-14T00:14:35.441780106Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Mar 14 00:14:35.444377 containerd[1618]: time="2026-03-14T00:14:35.444308534Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:35.446333 containerd[1618]: time="2026-03-14T00:14:35.446246748Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:35.447634 containerd[1618]: time="2026-03-14T00:14:35.447526023Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:14:35.448690 containerd[1618]: time="2026-03-14T00:14:35.448635083Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:35.449320 containerd[1618]: time="2026-03-14T00:14:35.449261099Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:14:35.450273 containerd[1618]: time="2026-03-14T00:14:35.450214985Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:35.452918 containerd[1618]: time="2026-03-14T00:14:35.452854702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.019417ms" Mar 14 00:14:35.455199 containerd[1618]: time="2026-03-14T00:14:35.455032298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 491.294702ms" Mar 14 00:14:35.465794 kubelet[2411]: E0314 00:14:35.465739 2411 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://159.69.119.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.69.119.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:14:35.467145 containerd[1618]: time="2026-03-14T00:14:35.467035018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 509.469275ms" Mar 14 00:14:35.569091 containerd[1618]: time="2026-03-14T00:14:35.568800572Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:35.569091 containerd[1618]: time="2026-03-14T00:14:35.568890660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:35.569091 containerd[1618]: time="2026-03-14T00:14:35.568911142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:35.569357 containerd[1618]: time="2026-03-14T00:14:35.569021672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:35.570640 containerd[1618]: time="2026-03-14T00:14:35.570533648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:35.570852 containerd[1618]: time="2026-03-14T00:14:35.570624696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:35.570852 containerd[1618]: time="2026-03-14T00:14:35.570826995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:35.572841 containerd[1618]: time="2026-03-14T00:14:35.572667520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:35.572841 containerd[1618]: time="2026-03-14T00:14:35.572717485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:35.573703 containerd[1618]: time="2026-03-14T00:14:35.573501995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:35.578446 containerd[1618]: time="2026-03-14T00:14:35.578367673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:35.578825 containerd[1618]: time="2026-03-14T00:14:35.578685301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:35.656141 containerd[1618]: time="2026-03-14T00:14:35.656097025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-0dd818c04e,Uid:a2353993256d95befefbbc352f9e7ce0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ee562b903783af3871bf71ac8fff11e1be61355fa277566c402ca7fdd382c20\"" Mar 14 00:14:35.661049 containerd[1618]: time="2026-03-14T00:14:35.660777526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-0dd818c04e,Uid:d2fe036f065d721bd062b61321e65680,Namespace:kube-system,Attempt:0,} returns sandbox id \"38557b2dd0fd671a8d2faf99f8f2d8a6d844f55a483520e07ffbd60c980be0e3\"" Mar 14 00:14:35.666634 containerd[1618]: time="2026-03-14T00:14:35.666568207Z" level=info msg="CreateContainer within sandbox \"5ee562b903783af3871bf71ac8fff11e1be61355fa277566c402ca7fdd382c20\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:14:35.667247 containerd[1618]: time="2026-03-14T00:14:35.667149019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-0dd818c04e,Uid:fd8fb36db7642ca8518e1dd1706010d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"95cbdb21319314e2948b819ae8b054044c043028ac616d7b7743a3885ec44123\"" Mar 14 00:14:35.668510 containerd[1618]: time="2026-03-14T00:14:35.668464858Z" level=info msg="CreateContainer within sandbox \"38557b2dd0fd671a8d2faf99f8f2d8a6d844f55a483520e07ffbd60c980be0e3\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:14:35.672265 containerd[1618]: time="2026-03-14T00:14:35.672207634Z" level=info msg="CreateContainer within sandbox \"95cbdb21319314e2948b819ae8b054044c043028ac616d7b7743a3885ec44123\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:14:35.685919 containerd[1618]: time="2026-03-14T00:14:35.685636202Z" level=info msg="CreateContainer within sandbox \"5ee562b903783af3871bf71ac8fff11e1be61355fa277566c402ca7fdd382c20\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7e4a48b29611ef7088a29abcfe0d228a40b7970bcb6f10a67448c8fce6a35cb3\"" Mar 14 00:14:35.687047 containerd[1618]: time="2026-03-14T00:14:35.687013206Z" level=info msg="StartContainer for \"7e4a48b29611ef7088a29abcfe0d228a40b7970bcb6f10a67448c8fce6a35cb3\"" Mar 14 00:14:35.692368 containerd[1618]: time="2026-03-14T00:14:35.692122866Z" level=info msg="CreateContainer within sandbox \"38557b2dd0fd671a8d2faf99f8f2d8a6d844f55a483520e07ffbd60c980be0e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2965d6b81a8880569b7ff611ab1946dd07b2fb5acd7fe7c663f284f3770ac14b\"" Mar 14 00:14:35.692806 containerd[1618]: time="2026-03-14T00:14:35.692746482Z" level=info msg="StartContainer for \"2965d6b81a8880569b7ff611ab1946dd07b2fb5acd7fe7c663f284f3770ac14b\"" Mar 14 00:14:35.696608 containerd[1618]: time="2026-03-14T00:14:35.696138307Z" level=info msg="CreateContainer within sandbox \"95cbdb21319314e2948b819ae8b054044c043028ac616d7b7743a3885ec44123\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3f6cd66eb7530c02fd1366c969f405d32b3cfad1458bf0f36490b3f1d907ecb8\"" Mar 14 00:14:35.697459 containerd[1618]: time="2026-03-14T00:14:35.697421782Z" level=info msg="StartContainer for \"3f6cd66eb7530c02fd1366c969f405d32b3cfad1458bf0f36490b3f1d907ecb8\"" Mar 14 00:14:35.817176 containerd[1618]: time="2026-03-14T00:14:35.817130671Z" level=info 
msg="StartContainer for \"7e4a48b29611ef7088a29abcfe0d228a40b7970bcb6f10a67448c8fce6a35cb3\" returns successfully" Mar 14 00:14:35.817457 containerd[1618]: time="2026-03-14T00:14:35.817161354Z" level=info msg="StartContainer for \"2965d6b81a8880569b7ff611ab1946dd07b2fb5acd7fe7c663f284f3770ac14b\" returns successfully" Mar 14 00:14:35.818618 containerd[1618]: time="2026-03-14T00:14:35.817164794Z" level=info msg="StartContainer for \"3f6cd66eb7530c02fd1366c969f405d32b3cfad1458bf0f36490b3f1d907ecb8\" returns successfully" Mar 14 00:14:35.825940 kubelet[2411]: E0314 00:14:35.825867 2411 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://159.69.119.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.69.119.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:14:35.928252 kubelet[2411]: E0314 00:14:35.928185 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.119.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-0dd818c04e?timeout=10s\": dial tcp 159.69.119.127:6443: connect: connection refused" interval="1.6s" Mar 14 00:14:36.076798 kubelet[2411]: I0314 00:14:36.076766 2411 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:36.562362 kubelet[2411]: E0314 00:14:36.562331 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0dd818c04e\" not found" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:36.566885 kubelet[2411]: E0314 00:14:36.566694 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0dd818c04e\" not found" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:36.571201 kubelet[2411]: E0314 00:14:36.571172 2411 kubelet.go:3305] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0dd818c04e\" not found" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:37.573528 kubelet[2411]: E0314 00:14:37.572991 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0dd818c04e\" not found" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:37.573528 kubelet[2411]: E0314 00:14:37.573368 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0dd818c04e\" not found" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:38.407558 kubelet[2411]: E0314 00:14:38.407509 2411 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-0dd818c04e\" not found" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:38.437870 kubelet[2411]: I0314 00:14:38.437814 2411 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:38.511865 kubelet[2411]: I0314 00:14:38.511628 2411 apiserver.go:52] "Watching apiserver" Mar 14 00:14:38.524588 kubelet[2411]: I0314 00:14:38.522152 2411 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:14:38.524588 kubelet[2411]: I0314 00:14:38.522171 2411 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:38.538373 kubelet[2411]: E0314 00:14:38.538329 2411 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-0dd818c04e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:38.538373 kubelet[2411]: I0314 00:14:38.538366 2411 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-0dd818c04e" Mar 14 
00:14:38.540841 kubelet[2411]: E0314 00:14:38.540777 2411 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-0dd818c04e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:38.540841 kubelet[2411]: I0314 00:14:38.540810 2411 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:38.544635 kubelet[2411]: E0314 00:14:38.544497 2411 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-0dd818c04e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:38.799718 kubelet[2411]: I0314 00:14:38.799651 2411 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:38.802656 kubelet[2411]: E0314 00:14:38.802432 2411 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-0dd818c04e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:40.578890 systemd[1]: Reloading requested from client PID 2696 ('systemctl') (unit session-7.scope)... Mar 14 00:14:40.578916 systemd[1]: Reloading... Mar 14 00:14:40.683636 zram_generator::config[2748]: No configuration found. Mar 14 00:14:40.792040 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:14:40.874043 systemd[1]: Reloading finished in 294 ms. Mar 14 00:14:40.912878 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:40.931266 systemd[1]: kubelet.service: Deactivated successfully. 
Mar 14 00:14:40.932052 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:40.943744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:41.074838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:41.090760 (kubelet)[2791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:14:41.149679 kubelet[2791]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:14:41.149679 kubelet[2791]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:14:41.149679 kubelet[2791]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 00:14:41.149679 kubelet[2791]: I0314 00:14:41.148864 2791 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:14:41.156898 kubelet[2791]: I0314 00:14:41.156799 2791 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 14 00:14:41.156898 kubelet[2791]: I0314 00:14:41.156885 2791 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:14:41.157195 kubelet[2791]: I0314 00:14:41.157161 2791 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:14:41.159104 kubelet[2791]: I0314 00:14:41.159068 2791 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:14:41.162221 kubelet[2791]: I0314 00:14:41.161828 2791 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:14:41.165955 kubelet[2791]: E0314 00:14:41.165791 2791 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:14:41.165955 kubelet[2791]: I0314 00:14:41.165862 2791 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 14 00:14:41.168647 kubelet[2791]: I0314 00:14:41.168615 2791 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 14 00:14:41.169107 kubelet[2791]: I0314 00:14:41.169079 2791 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:14:41.169280 kubelet[2791]: I0314 00:14:41.169109 2791 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-0dd818c04e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 14 00:14:41.169359 kubelet[2791]: I0314 00:14:41.169286 2791 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 
00:14:41.169359 kubelet[2791]: I0314 00:14:41.169294 2791 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 00:14:41.169359 kubelet[2791]: I0314 00:14:41.169342 2791 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:14:41.169547 kubelet[2791]: I0314 00:14:41.169534 2791 kubelet.go:480] "Attempting to sync node with API server" Mar 14 00:14:41.170204 kubelet[2791]: I0314 00:14:41.169556 2791 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:14:41.170204 kubelet[2791]: I0314 00:14:41.169621 2791 kubelet.go:386] "Adding apiserver pod source" Mar 14 00:14:41.170204 kubelet[2791]: I0314 00:14:41.169638 2791 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:14:41.192140 kubelet[2791]: I0314 00:14:41.190490 2791 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:14:41.194608 kubelet[2791]: I0314 00:14:41.193005 2791 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:14:41.197860 kubelet[2791]: I0314 00:14:41.197826 2791 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:14:41.198323 kubelet[2791]: I0314 00:14:41.198100 2791 server.go:1289] "Started kubelet" Mar 14 00:14:41.204688 kubelet[2791]: I0314 00:14:41.204586 2791 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:14:41.204911 kubelet[2791]: I0314 00:14:41.204893 2791 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:14:41.204976 kubelet[2791]: I0314 00:14:41.204946 2791 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:14:41.206237 kubelet[2791]: I0314 00:14:41.206206 2791 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 
00:14:41.207184 kubelet[2791]: I0314 00:14:41.206559 2791 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:14:41.210691 kubelet[2791]: I0314 00:14:41.210403 2791 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:14:41.214202 kubelet[2791]: E0314 00:14:41.214161 2791 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:14:41.216622 kubelet[2791]: I0314 00:14:41.214654 2791 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 00:14:41.216622 kubelet[2791]: I0314 00:14:41.215107 2791 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:14:41.216622 kubelet[2791]: I0314 00:14:41.215236 2791 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:14:41.216622 kubelet[2791]: I0314 00:14:41.216040 2791 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:14:41.216622 kubelet[2791]: I0314 00:14:41.216165 2791 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:14:41.219012 kubelet[2791]: I0314 00:14:41.218980 2791 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:14:41.230356 kubelet[2791]: I0314 00:14:41.230213 2791 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:14:41.234137 kubelet[2791]: I0314 00:14:41.233789 2791 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 14 00:14:41.234137 kubelet[2791]: I0314 00:14:41.233822 2791 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:14:41.234137 kubelet[2791]: I0314 00:14:41.233842 2791 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:14:41.234137 kubelet[2791]: I0314 00:14:41.233848 2791 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:14:41.234137 kubelet[2791]: E0314 00:14:41.233885 2791 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:14:41.279298 kubelet[2791]: I0314 00:14:41.279269 2791 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:14:41.279298 kubelet[2791]: I0314 00:14:41.279291 2791 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:14:41.279442 kubelet[2791]: I0314 00:14:41.279312 2791 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:14:41.279503 kubelet[2791]: I0314 00:14:41.279444 2791 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 14 00:14:41.279533 kubelet[2791]: I0314 00:14:41.279504 2791 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 14 00:14:41.279533 kubelet[2791]: I0314 00:14:41.279524 2791 policy_none.go:49] "None policy: Start" Mar 14 00:14:41.279533 kubelet[2791]: I0314 00:14:41.279533 2791 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:14:41.279617 kubelet[2791]: I0314 00:14:41.279543 2791 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:14:41.279745 kubelet[2791]: I0314 00:14:41.279731 2791 state_mem.go:75] "Updated machine memory state" Mar 14 00:14:41.281673 kubelet[2791]: E0314 00:14:41.281082 2791 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:14:41.281673 kubelet[2791]: I0314 
00:14:41.281243 2791 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:14:41.281673 kubelet[2791]: I0314 00:14:41.281290 2791 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:14:41.283261 kubelet[2791]: I0314 00:14:41.283232 2791 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:14:41.286362 kubelet[2791]: E0314 00:14:41.286337 2791 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:14:41.337652 kubelet[2791]: I0314 00:14:41.335934 2791 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.337652 kubelet[2791]: I0314 00:14:41.336064 2791 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.337652 kubelet[2791]: I0314 00:14:41.336776 2791 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.388613 kubelet[2791]: I0314 00:14:41.388354 2791 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.401855 kubelet[2791]: I0314 00:14:41.401224 2791 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.401855 kubelet[2791]: I0314 00:14:41.401355 2791 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.417723 kubelet[2791]: I0314 00:14:41.416295 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd8fb36db7642ca8518e1dd1706010d6-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-0dd818c04e\" (UID: \"fd8fb36db7642ca8518e1dd1706010d6\") " 
pod="kube-system/kube-scheduler-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.417723 kubelet[2791]: I0314 00:14:41.416356 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2353993256d95befefbbc352f9e7ce0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-0dd818c04e\" (UID: \"a2353993256d95befefbbc352f9e7ce0\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.417723 kubelet[2791]: I0314 00:14:41.416396 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d2fe036f065d721bd062b61321e65680-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-0dd818c04e\" (UID: \"d2fe036f065d721bd062b61321e65680\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.417723 kubelet[2791]: I0314 00:14:41.416425 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2fe036f065d721bd062b61321e65680-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-0dd818c04e\" (UID: \"d2fe036f065d721bd062b61321e65680\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.417723 kubelet[2791]: I0314 00:14:41.416455 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2fe036f065d721bd062b61321e65680-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-0dd818c04e\" (UID: \"d2fe036f065d721bd062b61321e65680\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.418072 kubelet[2791]: I0314 00:14:41.416531 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/d2fe036f065d721bd062b61321e65680-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-0dd818c04e\" (UID: \"d2fe036f065d721bd062b61321e65680\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.418072 kubelet[2791]: I0314 00:14:41.416566 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2353993256d95befefbbc352f9e7ce0-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-0dd818c04e\" (UID: \"a2353993256d95befefbbc352f9e7ce0\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.418072 kubelet[2791]: I0314 00:14:41.416621 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2353993256d95befefbbc352f9e7ce0-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-0dd818c04e\" (UID: \"a2353993256d95befefbbc352f9e7ce0\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.418072 kubelet[2791]: I0314 00:14:41.416648 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2fe036f065d721bd062b61321e65680-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-0dd818c04e\" (UID: \"d2fe036f065d721bd062b61321e65680\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:41.577819 sudo[2827]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 14 00:14:41.578196 sudo[2827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 14 00:14:42.100378 sudo[2827]: pam_unix(sudo:session): session closed for user root Mar 14 00:14:42.170740 kubelet[2791]: I0314 00:14:42.170672 2791 apiserver.go:52] "Watching apiserver" Mar 14 00:14:42.216372 kubelet[2791]: I0314 
00:14:42.216314 2791 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:14:42.255284 kubelet[2791]: I0314 00:14:42.255229 2791 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:42.266311 kubelet[2791]: E0314 00:14:42.266167 2791 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-0dd818c04e\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0dd818c04e" Mar 14 00:14:42.284492 kubelet[2791]: I0314 00:14:42.284357 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0dd818c04e" podStartSLOduration=1.284282961 podStartE2EDuration="1.284282961s" podCreationTimestamp="2026-03-14 00:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:42.284092747 +0000 UTC m=+1.187258964" watchObservedRunningTime="2026-03-14 00:14:42.284282961 +0000 UTC m=+1.187449218" Mar 14 00:14:42.314602 kubelet[2791]: I0314 00:14:42.314036 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-0dd818c04e" podStartSLOduration=1.314015922 podStartE2EDuration="1.314015922s" podCreationTimestamp="2026-03-14 00:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:42.30048772 +0000 UTC m=+1.203653977" watchObservedRunningTime="2026-03-14 00:14:42.314015922 +0000 UTC m=+1.217182179" Mar 14 00:14:44.419458 sudo[1916]: pam_unix(sudo:session): session closed for user root Mar 14 00:14:44.515187 sshd[1912]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:44.520795 systemd[1]: sshd@6-159.69.119.127:22-68.220.241.50:49718.service: Deactivated successfully. 
Mar 14 00:14:44.524385 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:14:44.524402 systemd-logind[1583]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:14:44.527602 systemd-logind[1583]: Removed session 7. Mar 14 00:14:46.661734 kubelet[2791]: I0314 00:14:46.661653 2791 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:14:46.662729 kubelet[2791]: I0314 00:14:46.662563 2791 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:14:46.662865 containerd[1618]: time="2026-03-14T00:14:46.662240941Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:14:47.631066 kubelet[2791]: I0314 00:14:47.630159 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0dd818c04e" podStartSLOduration=6.63014323 podStartE2EDuration="6.63014323s" podCreationTimestamp="2026-03-14 00:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:42.315042718 +0000 UTC m=+1.218208975" watchObservedRunningTime="2026-03-14 00:14:47.63014323 +0000 UTC m=+6.533309487" Mar 14 00:14:47.659185 kubelet[2791]: I0314 00:14:47.659141 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7c5x\" (UniqueName: \"kubernetes.io/projected/944e649d-9753-4b27-a476-ef267ca83c9f-kube-api-access-p7c5x\") pod \"kube-proxy-pzr4t\" (UID: \"944e649d-9753-4b27-a476-ef267ca83c9f\") " pod="kube-system/kube-proxy-pzr4t" Mar 14 00:14:47.659393 kubelet[2791]: I0314 00:14:47.659363 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-cilium-run\") pod 
\"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.659429 kubelet[2791]: I0314 00:14:47.659400 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-bpf-maps\") pod \"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.659429 kubelet[2791]: I0314 00:14:47.659423 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e547a88e-d9a4-459a-a152-320dae3b5b92-cilium-config-path\") pod \"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.659475 kubelet[2791]: I0314 00:14:47.659438 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-cilium-cgroup\") pod \"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.659475 kubelet[2791]: I0314 00:14:47.659451 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-cni-path\") pod \"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.659475 kubelet[2791]: I0314 00:14:47.659465 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-host-proc-sys-net\") pod \"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.659549 kubelet[2791]: 
I0314 00:14:47.659478 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-host-proc-sys-kernel\") pod \"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.660671 kubelet[2791]: I0314 00:14:47.659492 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e547a88e-d9a4-459a-a152-320dae3b5b92-hubble-tls\") pod \"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.660671 kubelet[2791]: I0314 00:14:47.660380 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnfnj\" (UniqueName: \"kubernetes.io/projected/e547a88e-d9a4-459a-a152-320dae3b5b92-kube-api-access-qnfnj\") pod \"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.660671 kubelet[2791]: I0314 00:14:47.660405 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/944e649d-9753-4b27-a476-ef267ca83c9f-kube-proxy\") pod \"kube-proxy-pzr4t\" (UID: \"944e649d-9753-4b27-a476-ef267ca83c9f\") " pod="kube-system/kube-proxy-pzr4t" Mar 14 00:14:47.660671 kubelet[2791]: I0314 00:14:47.660424 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/944e649d-9753-4b27-a476-ef267ca83c9f-xtables-lock\") pod \"kube-proxy-pzr4t\" (UID: \"944e649d-9753-4b27-a476-ef267ca83c9f\") " pod="kube-system/kube-proxy-pzr4t" Mar 14 00:14:47.660671 kubelet[2791]: I0314 00:14:47.660439 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-hostproc\") pod \"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.660671 kubelet[2791]: I0314 00:14:47.660453 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-etc-cni-netd\") pod \"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.660894 kubelet[2791]: I0314 00:14:47.660467 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-lib-modules\") pod \"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.660894 kubelet[2791]: I0314 00:14:47.660480 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-xtables-lock\") pod \"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.660894 kubelet[2791]: I0314 00:14:47.660498 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/944e649d-9753-4b27-a476-ef267ca83c9f-lib-modules\") pod \"kube-proxy-pzr4t\" (UID: \"944e649d-9753-4b27-a476-ef267ca83c9f\") " pod="kube-system/kube-proxy-pzr4t" Mar 14 00:14:47.660894 kubelet[2791]: I0314 00:14:47.660511 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e547a88e-d9a4-459a-a152-320dae3b5b92-clustermesh-secrets\") pod 
\"cilium-sj5cs\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") " pod="kube-system/cilium-sj5cs" Mar 14 00:14:47.861731 kubelet[2791]: I0314 00:14:47.861548 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f8a2fcc-1c34-4933-9b05-3a82ca6f7633-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cmx5t\" (UID: \"6f8a2fcc-1c34-4933-9b05-3a82ca6f7633\") " pod="kube-system/cilium-operator-6c4d7847fc-cmx5t" Mar 14 00:14:47.861731 kubelet[2791]: I0314 00:14:47.861641 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wd6k\" (UniqueName: \"kubernetes.io/projected/6f8a2fcc-1c34-4933-9b05-3a82ca6f7633-kube-api-access-2wd6k\") pod \"cilium-operator-6c4d7847fc-cmx5t\" (UID: \"6f8a2fcc-1c34-4933-9b05-3a82ca6f7633\") " pod="kube-system/cilium-operator-6c4d7847fc-cmx5t" Mar 14 00:14:47.948711 containerd[1618]: time="2026-03-14T00:14:47.947976469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzr4t,Uid:944e649d-9753-4b27-a476-ef267ca83c9f,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:47.963868 containerd[1618]: time="2026-03-14T00:14:47.963812919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sj5cs,Uid:e547a88e-d9a4-459a-a152-320dae3b5b92,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:47.977825 containerd[1618]: time="2026-03-14T00:14:47.974894854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:47.977825 containerd[1618]: time="2026-03-14T00:14:47.975106428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:47.977825 containerd[1618]: time="2026-03-14T00:14:47.975123709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:47.977825 containerd[1618]: time="2026-03-14T00:14:47.975280000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:48.020950 containerd[1618]: time="2026-03-14T00:14:48.020770711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:48.020950 containerd[1618]: time="2026-03-14T00:14:48.020907760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:48.021171 containerd[1618]: time="2026-03-14T00:14:48.021130855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:48.021347 containerd[1618]: time="2026-03-14T00:14:48.021308266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:48.036460 containerd[1618]: time="2026-03-14T00:14:48.036422810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzr4t,Uid:944e649d-9753-4b27-a476-ef267ca83c9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd71e9e663f86d4335f69d57ac6cc06055222f42b7c7893d5d1ceb8fb6c9e9e0\"" Mar 14 00:14:48.044694 containerd[1618]: time="2026-03-14T00:14:48.044548138Z" level=info msg="CreateContainer within sandbox \"bd71e9e663f86d4335f69d57ac6cc06055222f42b7c7893d5d1ceb8fb6c9e9e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:14:48.067997 containerd[1618]: time="2026-03-14T00:14:48.067596038Z" level=info msg="CreateContainer within sandbox \"bd71e9e663f86d4335f69d57ac6cc06055222f42b7c7893d5d1ceb8fb6c9e9e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5bbf5018136041153fd9188dab98342a522ba5dff0cc6b260f77b3480dc05bd6\"" Mar 14 00:14:48.068720 containerd[1618]: time="2026-03-14T00:14:48.068696149Z" level=info msg="StartContainer for \"5bbf5018136041153fd9188dab98342a522ba5dff0cc6b260f77b3480dc05bd6\"" Mar 14 00:14:48.078240 containerd[1618]: time="2026-03-14T00:14:48.078201448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sj5cs,Uid:e547a88e-d9a4-459a-a152-320dae3b5b92,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\"" Mar 14 00:14:48.080630 containerd[1618]: time="2026-03-14T00:14:48.080588083Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 14 00:14:48.136932 containerd[1618]: time="2026-03-14T00:14:48.136882305Z" level=info msg="StartContainer for \"5bbf5018136041153fd9188dab98342a522ba5dff0cc6b260f77b3480dc05bd6\" returns successfully" Mar 14 00:14:48.144753 containerd[1618]: time="2026-03-14T00:14:48.144651451Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cmx5t,Uid:6f8a2fcc-1c34-4933-9b05-3a82ca6f7633,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:48.174698 containerd[1618]: time="2026-03-14T00:14:48.174284219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:48.174838 containerd[1618]: time="2026-03-14T00:14:48.174763930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:48.176549 containerd[1618]: time="2026-03-14T00:14:48.174886938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:48.176549 containerd[1618]: time="2026-03-14T00:14:48.175139514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:48.228954 containerd[1618]: time="2026-03-14T00:14:48.228838448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cmx5t,Uid:6f8a2fcc-1c34-4933-9b05-3a82ca6f7633,Namespace:kube-system,Attempt:0,} returns sandbox id \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\"" Mar 14 00:14:51.678086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235622195.mount: Deactivated successfully. 
Mar 14 00:14:53.053518 containerd[1618]: time="2026-03-14T00:14:53.053438769Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:53.055325 containerd[1618]: time="2026-03-14T00:14:53.054698764Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 14 00:14:53.056469 containerd[1618]: time="2026-03-14T00:14:53.056415387Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:53.059420 containerd[1618]: time="2026-03-14T00:14:53.059387005Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.97846666s"
Mar 14 00:14:53.059527 containerd[1618]: time="2026-03-14T00:14:53.059511772Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 14 00:14:53.062289 containerd[1618]: time="2026-03-14T00:14:53.061897635Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 14 00:14:53.068422 containerd[1618]: time="2026-03-14T00:14:53.068337100Z" level=info msg="CreateContainer within sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 14 00:14:53.082371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556408110.mount: Deactivated successfully.
Mar 14 00:14:53.086429 containerd[1618]: time="2026-03-14T00:14:53.086355018Z" level=info msg="CreateContainer within sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621\""
Mar 14 00:14:53.088535 containerd[1618]: time="2026-03-14T00:14:53.087463285Z" level=info msg="StartContainer for \"86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621\""
Mar 14 00:14:53.139608 containerd[1618]: time="2026-03-14T00:14:53.139194420Z" level=info msg="StartContainer for \"86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621\" returns successfully"
Mar 14 00:14:53.316472 kubelet[2791]: I0314 00:14:53.315798 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pzr4t" podStartSLOduration=6.315780186 podStartE2EDuration="6.315780186s" podCreationTimestamp="2026-03-14 00:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:48.287820285 +0000 UTC m=+7.190986542" watchObservedRunningTime="2026-03-14 00:14:53.315780186 +0000 UTC m=+12.218946483"
Mar 14 00:14:53.318211 containerd[1618]: time="2026-03-14T00:14:53.318144647Z" level=info msg="shim disconnected" id=86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621 namespace=k8s.io
Mar 14 00:14:53.318211 containerd[1618]: time="2026-03-14T00:14:53.318200250Z" level=warning msg="cleaning up after shim disconnected" id=86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621 namespace=k8s.io
Mar 14 00:14:53.318211 containerd[1618]: time="2026-03-14T00:14:53.318209211Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:14:54.081743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621-rootfs.mount: Deactivated successfully.
Mar 14 00:14:54.309289 containerd[1618]: time="2026-03-14T00:14:54.306535123Z" level=info msg="CreateContainer within sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 14 00:14:54.338476 containerd[1618]: time="2026-03-14T00:14:54.338258594Z" level=info msg="CreateContainer within sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c\""
Mar 14 00:14:54.339680 containerd[1618]: time="2026-03-14T00:14:54.339638155Z" level=info msg="StartContainer for \"465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c\""
Mar 14 00:14:54.425910 containerd[1618]: time="2026-03-14T00:14:54.424809578Z" level=info msg="StartContainer for \"465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c\" returns successfully"
Mar 14 00:14:54.438965 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:14:54.439774 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:14:54.439844 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:14:54.454370 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:14:54.538402 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:14:54.577446 containerd[1618]: time="2026-03-14T00:14:54.577373936Z" level=info msg="shim disconnected" id=465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c namespace=k8s.io
Mar 14 00:14:54.577446 containerd[1618]: time="2026-03-14T00:14:54.577438700Z" level=warning msg="cleaning up after shim disconnected" id=465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c namespace=k8s.io
Mar 14 00:14:54.577446 containerd[1618]: time="2026-03-14T00:14:54.577448540Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:14:54.595553 containerd[1618]: time="2026-03-14T00:14:54.595440201Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:14:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:14:54.964973 containerd[1618]: time="2026-03-14T00:14:54.964832067Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:54.966035 containerd[1618]: time="2026-03-14T00:14:54.965753321Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 14 00:14:54.967009 containerd[1618]: time="2026-03-14T00:14:54.966965552Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:54.968896 containerd[1618]: time="2026-03-14T00:14:54.968751458Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.906033253s"
Mar 14 00:14:54.968896 containerd[1618]: time="2026-03-14T00:14:54.968790180Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 14 00:14:54.973708 containerd[1618]: time="2026-03-14T00:14:54.973656307Z" level=info msg="CreateContainer within sandbox \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 14 00:14:54.988002 containerd[1618]: time="2026-03-14T00:14:54.987936069Z" level=info msg="CreateContainer within sandbox \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\""
Mar 14 00:14:54.989117 containerd[1618]: time="2026-03-14T00:14:54.989004532Z" level=info msg="StartContainer for \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\""
Mar 14 00:14:55.051345 containerd[1618]: time="2026-03-14T00:14:55.051280964Z" level=info msg="StartContainer for \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\" returns successfully"
Mar 14 00:14:55.080136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c-rootfs.mount: Deactivated successfully.
Mar 14 00:14:55.317698 containerd[1618]: time="2026-03-14T00:14:55.317656220Z" level=info msg="CreateContainer within sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 14 00:14:55.352760 containerd[1618]: time="2026-03-14T00:14:55.352704058Z" level=info msg="CreateContainer within sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10\""
Mar 14 00:14:55.355793 containerd[1618]: time="2026-03-14T00:14:55.353382138Z" level=info msg="StartContainer for \"e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10\""
Mar 14 00:14:55.436484 kubelet[2791]: I0314 00:14:55.436155 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cmx5t" podStartSLOduration=1.697593759 podStartE2EDuration="8.436041106s" podCreationTimestamp="2026-03-14 00:14:47 +0000 UTC" firstStartedPulling="2026-03-14 00:14:48.231416936 +0000 UTC m=+7.134583153" lastFinishedPulling="2026-03-14 00:14:54.969864243 +0000 UTC m=+13.873030500" observedRunningTime="2026-03-14 00:14:55.334311748 +0000 UTC m=+14.237478005" watchObservedRunningTime="2026-03-14 00:14:55.436041106 +0000 UTC m=+14.339207363"
Mar 14 00:14:55.501653 containerd[1618]: time="2026-03-14T00:14:55.501382707Z" level=info msg="StartContainer for \"e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10\" returns successfully"
Mar 14 00:14:55.569948 containerd[1618]: time="2026-03-14T00:14:55.569808208Z" level=info msg="shim disconnected" id=e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10 namespace=k8s.io
Mar 14 00:14:55.571727 containerd[1618]: time="2026-03-14T00:14:55.571685997Z" level=warning msg="cleaning up after shim disconnected" id=e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10 namespace=k8s.io
Mar 14 00:14:55.571878 containerd[1618]: time="2026-03-14T00:14:55.571863887Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:14:56.080356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10-rootfs.mount: Deactivated successfully.
Mar 14 00:14:56.324890 containerd[1618]: time="2026-03-14T00:14:56.324641274Z" level=info msg="CreateContainer within sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:14:56.351931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3219943064.mount: Deactivated successfully.
Mar 14 00:14:56.359086 containerd[1618]: time="2026-03-14T00:14:56.359014008Z" level=info msg="CreateContainer within sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a\""
Mar 14 00:14:56.363347 containerd[1618]: time="2026-03-14T00:14:56.363295413Z" level=info msg="StartContainer for \"450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a\""
Mar 14 00:14:56.430140 containerd[1618]: time="2026-03-14T00:14:56.430098289Z" level=info msg="StartContainer for \"450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a\" returns successfully"
Mar 14 00:14:56.456806 containerd[1618]: time="2026-03-14T00:14:56.456736539Z" level=info msg="shim disconnected" id=450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a namespace=k8s.io
Mar 14 00:14:56.456806 containerd[1618]: time="2026-03-14T00:14:56.456798662Z" level=warning msg="cleaning up after shim disconnected" id=450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a namespace=k8s.io
Mar 14 00:14:56.456806 containerd[1618]: time="2026-03-14T00:14:56.456810583Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:14:57.081561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a-rootfs.mount: Deactivated successfully.
Mar 14 00:14:57.332669 containerd[1618]: time="2026-03-14T00:14:57.330178337Z" level=info msg="CreateContainer within sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:14:57.352879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003084097.mount: Deactivated successfully.
Mar 14 00:14:57.355755 containerd[1618]: time="2026-03-14T00:14:57.355592458Z" level=info msg="CreateContainer within sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\""
Mar 14 00:14:57.357711 containerd[1618]: time="2026-03-14T00:14:57.356518271Z" level=info msg="StartContainer for \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\""
Mar 14 00:14:57.479902 containerd[1618]: time="2026-03-14T00:14:57.479848265Z" level=info msg="StartContainer for \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\" returns successfully"
Mar 14 00:14:57.601975 kubelet[2791]: I0314 00:14:57.601740 2791 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 14 00:14:57.739888 kubelet[2791]: I0314 00:14:57.739840 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx7lq\" (UniqueName: \"kubernetes.io/projected/cf371d76-9578-4630-b5a6-64a41afb6007-kube-api-access-mx7lq\") pod \"coredns-674b8bbfcf-7fvph\" (UID: \"cf371d76-9578-4630-b5a6-64a41afb6007\") " pod="kube-system/coredns-674b8bbfcf-7fvph"
Mar 14 00:14:57.740051 kubelet[2791]: I0314 00:14:57.739920 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4fa3df-2063-4e59-8eae-d2b591f579ff-config-volume\") pod \"coredns-674b8bbfcf-xq9fc\" (UID: \"aa4fa3df-2063-4e59-8eae-d2b591f579ff\") " pod="kube-system/coredns-674b8bbfcf-xq9fc"
Mar 14 00:14:57.740051 kubelet[2791]: I0314 00:14:57.739952 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn8p5\" (UniqueName: \"kubernetes.io/projected/aa4fa3df-2063-4e59-8eae-d2b591f579ff-kube-api-access-pn8p5\") pod \"coredns-674b8bbfcf-xq9fc\" (UID: \"aa4fa3df-2063-4e59-8eae-d2b591f579ff\") " pod="kube-system/coredns-674b8bbfcf-xq9fc"
Mar 14 00:14:57.740051 kubelet[2791]: I0314 00:14:57.739978 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf371d76-9578-4630-b5a6-64a41afb6007-config-volume\") pod \"coredns-674b8bbfcf-7fvph\" (UID: \"cf371d76-9578-4630-b5a6-64a41afb6007\") " pod="kube-system/coredns-674b8bbfcf-7fvph"
Mar 14 00:14:57.948008 containerd[1618]: time="2026-03-14T00:14:57.947872727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7fvph,Uid:cf371d76-9578-4630-b5a6-64a41afb6007,Namespace:kube-system,Attempt:0,}"
Mar 14 00:14:57.956875 containerd[1618]: time="2026-03-14T00:14:57.954755237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xq9fc,Uid:aa4fa3df-2063-4e59-8eae-d2b591f579ff,Namespace:kube-system,Attempt:0,}"
Mar 14 00:14:59.681998 systemd-networkd[1242]: cilium_host: Link UP
Mar 14 00:14:59.682144 systemd-networkd[1242]: cilium_net: Link UP
Mar 14 00:14:59.682148 systemd-networkd[1242]: cilium_net: Gained carrier
Mar 14 00:14:59.682299 systemd-networkd[1242]: cilium_host: Gained carrier
Mar 14 00:14:59.682507 systemd-networkd[1242]: cilium_host: Gained IPv6LL
Mar 14 00:14:59.792913 systemd-networkd[1242]: cilium_vxlan: Link UP
Mar 14 00:14:59.793231 systemd-networkd[1242]: cilium_vxlan: Gained carrier
Mar 14 00:15:00.079610 kernel: NET: Registered PF_ALG protocol family
Mar 14 00:15:00.341761 systemd-networkd[1242]: cilium_net: Gained IPv6LL
Mar 14 00:15:00.870371 systemd-networkd[1242]: lxc_health: Link UP
Mar 14 00:15:00.871182 systemd-networkd[1242]: lxc_health: Gained carrier
Mar 14 00:15:00.980008 systemd-networkd[1242]: cilium_vxlan: Gained IPv6LL
Mar 14 00:15:01.032756 systemd-networkd[1242]: lxca8877cca90f8: Link UP
Mar 14 00:15:01.039696 kernel: eth0: renamed from tmpf5d29
Mar 14 00:15:01.046199 systemd-networkd[1242]: lxce11b7982c357: Link UP
Mar 14 00:15:01.056628 kernel: eth0: renamed from tmpa04db
Mar 14 00:15:01.053931 systemd-networkd[1242]: lxca8877cca90f8: Gained carrier
Mar 14 00:15:01.062945 systemd-networkd[1242]: lxce11b7982c357: Gained carrier
Mar 14 00:15:02.010111 kubelet[2791]: I0314 00:15:02.009915 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sj5cs" podStartSLOduration=10.029115518 podStartE2EDuration="15.0098982s" podCreationTimestamp="2026-03-14 00:14:47 +0000 UTC" firstStartedPulling="2026-03-14 00:14:48.080168816 +0000 UTC m=+6.983335073" lastFinishedPulling="2026-03-14 00:14:53.060951458 +0000 UTC m=+11.964117755" observedRunningTime="2026-03-14 00:14:58.349124532 +0000 UTC m=+17.252290909" watchObservedRunningTime="2026-03-14 00:15:02.0098982 +0000 UTC m=+20.913064457"
Mar 14 00:15:02.068065 systemd-networkd[1242]: lxc_health: Gained IPv6LL
Mar 14 00:15:02.260561 systemd-networkd[1242]: lxca8877cca90f8: Gained IPv6LL
Mar 14 00:15:03.094810 systemd-networkd[1242]: lxce11b7982c357: Gained IPv6LL
Mar 14 00:15:05.316393 containerd[1618]: time="2026-03-14T00:15:05.316003593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:15:05.316393 containerd[1618]: time="2026-03-14T00:15:05.316077237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:15:05.316393 containerd[1618]: time="2026-03-14T00:15:05.316093678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:05.316393 containerd[1618]: time="2026-03-14T00:15:05.316216964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:05.333853 containerd[1618]: time="2026-03-14T00:15:05.326948847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:15:05.333853 containerd[1618]: time="2026-03-14T00:15:05.327113816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:15:05.333853 containerd[1618]: time="2026-03-14T00:15:05.327130377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:05.333853 containerd[1618]: time="2026-03-14T00:15:05.327727888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:05.412981 containerd[1618]: time="2026-03-14T00:15:05.412915755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xq9fc,Uid:aa4fa3df-2063-4e59-8eae-d2b591f579ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5d294d2bc778652a468a9db38325fdb300d233349d83eb69876d1bdcc0c1041\""
Mar 14 00:15:05.427435 containerd[1618]: time="2026-03-14T00:15:05.426844005Z" level=info msg="CreateContainer within sandbox \"f5d294d2bc778652a468a9db38325fdb300d233349d83eb69876d1bdcc0c1041\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:15:05.458322 containerd[1618]: time="2026-03-14T00:15:05.458276293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7fvph,Uid:cf371d76-9578-4630-b5a6-64a41afb6007,Namespace:kube-system,Attempt:0,} returns sandbox id \"a04db0adc8a0cd4295539979e623998f8d4cd821366ed6084230895b0da9175d\""
Mar 14 00:15:05.470771 containerd[1618]: time="2026-03-14T00:15:05.470719306Z" level=info msg="CreateContainer within sandbox \"f5d294d2bc778652a468a9db38325fdb300d233349d83eb69876d1bdcc0c1041\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96478a26ff64e9ee07ce8288a44ac6cd71584f1f0da3df952dcb2682bd27584c\""
Mar 14 00:15:05.472650 containerd[1618]: time="2026-03-14T00:15:05.472322070Z" level=info msg="CreateContainer within sandbox \"a04db0adc8a0cd4295539979e623998f8d4cd821366ed6084230895b0da9175d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:15:05.474338 containerd[1618]: time="2026-03-14T00:15:05.473894272Z" level=info msg="StartContainer for \"96478a26ff64e9ee07ce8288a44ac6cd71584f1f0da3df952dcb2682bd27584c\""
Mar 14 00:15:05.491668 containerd[1618]: time="2026-03-14T00:15:05.491517836Z" level=info msg="CreateContainer within sandbox \"a04db0adc8a0cd4295539979e623998f8d4cd821366ed6084230895b0da9175d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b48af05420c01617dc9b84fd972d31412ce5c4bec1b718209335a52316758534\""
Mar 14 00:15:05.492754 containerd[1618]: time="2026-03-14T00:15:05.492461286Z" level=info msg="StartContainer for \"b48af05420c01617dc9b84fd972d31412ce5c4bec1b718209335a52316758534\""
Mar 14 00:15:05.548852 containerd[1618]: time="2026-03-14T00:15:05.548803050Z" level=info msg="StartContainer for \"96478a26ff64e9ee07ce8288a44ac6cd71584f1f0da3df952dcb2682bd27584c\" returns successfully"
Mar 14 00:15:05.601112 containerd[1618]: time="2026-03-14T00:15:05.600987938Z" level=info msg="StartContainer for \"b48af05420c01617dc9b84fd972d31412ce5c4bec1b718209335a52316758534\" returns successfully"
Mar 14 00:15:06.329805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708421581.mount: Deactivated successfully.
Mar 14 00:15:06.398886 kubelet[2791]: I0314 00:15:06.396978 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xq9fc" podStartSLOduration=19.396880131 podStartE2EDuration="19.396880131s" podCreationTimestamp="2026-03-14 00:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:06.376793467 +0000 UTC m=+25.279959724" watchObservedRunningTime="2026-03-14 00:15:06.396880131 +0000 UTC m=+25.300046348"
Mar 14 00:15:06.401632 kubelet[2791]: I0314 00:15:06.400388 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7fvph" podStartSLOduration=19.400369982 podStartE2EDuration="19.400369982s" podCreationTimestamp="2026-03-14 00:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:06.397564173 +0000 UTC m=+25.300730430" watchObservedRunningTime="2026-03-14 00:15:06.400369982 +0000 UTC m=+25.303536239"
Mar 14 00:15:06.516479 kubelet[2791]: I0314 00:15:06.516221 2791 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:17:00.009900 systemd[1]: Started sshd@7-159.69.119.127:22-68.220.241.50:55430.service - OpenSSH per-connection server daemon (68.220.241.50:55430).
Mar 14 00:17:00.594448 sshd[4179]: Accepted publickey for core from 68.220.241.50 port 55430 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:00.596944 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:00.606085 systemd-logind[1583]: New session 8 of user core.
Mar 14 00:17:00.612211 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 00:17:01.108968 sshd[4179]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:01.113848 systemd[1]: sshd@7-159.69.119.127:22-68.220.241.50:55430.service: Deactivated successfully.
Mar 14 00:17:01.118028 systemd-logind[1583]: Session 8 logged out. Waiting for processes to exit.
Mar 14 00:17:01.118171 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 00:17:01.120902 systemd-logind[1583]: Removed session 8.
Mar 14 00:17:06.211061 systemd[1]: Started sshd@8-159.69.119.127:22-68.220.241.50:37528.service - OpenSSH per-connection server daemon (68.220.241.50:37528).
Mar 14 00:17:06.798834 sshd[4194]: Accepted publickey for core from 68.220.241.50 port 37528 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:06.801040 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:06.807302 systemd-logind[1583]: New session 9 of user core.
Mar 14 00:17:06.818095 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:17:07.294150 sshd[4194]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:07.299118 systemd[1]: sshd@8-159.69.119.127:22-68.220.241.50:37528.service: Deactivated successfully.
Mar 14 00:17:07.304553 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:17:07.305712 systemd-logind[1583]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:17:07.306786 systemd-logind[1583]: Removed session 9.
Mar 14 00:17:12.395904 systemd[1]: Started sshd@9-159.69.119.127:22-68.220.241.50:53488.service - OpenSSH per-connection server daemon (68.220.241.50:53488).
Mar 14 00:17:12.994020 sshd[4208]: Accepted publickey for core from 68.220.241.50 port 53488 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:12.996832 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:13.004042 systemd-logind[1583]: New session 10 of user core.
Mar 14 00:17:13.009939 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:17:13.485812 sshd[4208]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:13.489620 systemd-logind[1583]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:17:13.490717 systemd[1]: sshd@9-159.69.119.127:22-68.220.241.50:53488.service: Deactivated successfully.
Mar 14 00:17:13.499845 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:17:13.502165 systemd-logind[1583]: Removed session 10.
Mar 14 00:17:13.584028 systemd[1]: Started sshd@10-159.69.119.127:22-68.220.241.50:53492.service - OpenSSH per-connection server daemon (68.220.241.50:53492).
Mar 14 00:17:14.168656 sshd[4222]: Accepted publickey for core from 68.220.241.50 port 53492 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:14.170328 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:14.175898 systemd-logind[1583]: New session 11 of user core.
Mar 14 00:17:14.179883 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:17:14.710804 sshd[4222]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:14.715007 systemd[1]: sshd@10-159.69.119.127:22-68.220.241.50:53492.service: Deactivated successfully.
Mar 14 00:17:14.715725 systemd-logind[1583]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:17:14.720015 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:17:14.721867 systemd-logind[1583]: Removed session 11.
Mar 14 00:17:14.811905 systemd[1]: Started sshd@11-159.69.119.127:22-68.220.241.50:53500.service - OpenSSH per-connection server daemon (68.220.241.50:53500).
Mar 14 00:17:15.395523 sshd[4234]: Accepted publickey for core from 68.220.241.50 port 53500 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:15.397741 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:15.407525 systemd-logind[1583]: New session 12 of user core.
Mar 14 00:17:15.413545 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:17:15.883251 sshd[4234]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:15.891414 systemd[1]: sshd@11-159.69.119.127:22-68.220.241.50:53500.service: Deactivated successfully.
Mar 14 00:17:15.897691 systemd-logind[1583]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:17:15.898229 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:17:15.900450 systemd-logind[1583]: Removed session 12.
Mar 14 00:17:20.984151 systemd[1]: Started sshd@12-159.69.119.127:22-68.220.241.50:53516.service - OpenSSH per-connection server daemon (68.220.241.50:53516).
Mar 14 00:17:21.568266 sshd[4250]: Accepted publickey for core from 68.220.241.50 port 53516 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:21.570682 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:21.577133 systemd-logind[1583]: New session 13 of user core.
Mar 14 00:17:21.581881 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:17:22.058961 sshd[4250]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:22.065831 systemd-logind[1583]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:17:22.066492 systemd[1]: sshd@12-159.69.119.127:22-68.220.241.50:53516.service: Deactivated successfully.
Mar 14 00:17:22.070628 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:17:22.072540 systemd-logind[1583]: Removed session 13.
Mar 14 00:17:27.163072 systemd[1]: Started sshd@13-159.69.119.127:22-68.220.241.50:55816.service - OpenSSH per-connection server daemon (68.220.241.50:55816).
Mar 14 00:17:27.747099 sshd[4264]: Accepted publickey for core from 68.220.241.50 port 55816 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:27.749060 sshd[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:27.753422 systemd-logind[1583]: New session 14 of user core.
Mar 14 00:17:27.762929 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:17:28.235006 sshd[4264]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:28.241386 systemd[1]: sshd@13-159.69.119.127:22-68.220.241.50:55816.service: Deactivated successfully.
Mar 14 00:17:28.247801 systemd-logind[1583]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:17:28.248347 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:17:28.249248 systemd-logind[1583]: Removed session 14.
Mar 14 00:17:28.333880 systemd[1]: Started sshd@14-159.69.119.127:22-68.220.241.50:55824.service - OpenSSH per-connection server daemon (68.220.241.50:55824).
Mar 14 00:17:28.915550 sshd[4277]: Accepted publickey for core from 68.220.241.50 port 55824 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:28.918348 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:28.924690 systemd-logind[1583]: New session 15 of user core.
Mar 14 00:17:28.932248 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:17:29.456881 sshd[4277]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:29.463259 systemd-logind[1583]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:17:29.464716 systemd[1]: sshd@14-159.69.119.127:22-68.220.241.50:55824.service: Deactivated successfully.
Mar 14 00:17:29.469127 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:17:29.470741 systemd-logind[1583]: Removed session 15.
Mar 14 00:17:29.564218 systemd[1]: Started sshd@15-159.69.119.127:22-68.220.241.50:55826.service - OpenSSH per-connection server daemon (68.220.241.50:55826).
Mar 14 00:17:30.151030 sshd[4289]: Accepted publickey for core from 68.220.241.50 port 55826 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:30.153452 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:30.160700 systemd-logind[1583]: New session 16 of user core.
Mar 14 00:17:30.169353 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:17:31.239686 sshd[4289]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:31.245538 systemd[1]: sshd@15-159.69.119.127:22-68.220.241.50:55826.service: Deactivated successfully.
Mar 14 00:17:31.250589 systemd-logind[1583]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:17:31.252846 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:17:31.254455 systemd-logind[1583]: Removed session 16.
Mar 14 00:17:31.341933 systemd[1]: Started sshd@16-159.69.119.127:22-68.220.241.50:55836.service - OpenSSH per-connection server daemon (68.220.241.50:55836).
Mar 14 00:17:31.928675 sshd[4308]: Accepted publickey for core from 68.220.241.50 port 55836 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:31.931303 sshd[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:31.938155 systemd-logind[1583]: New session 17 of user core.
Mar 14 00:17:31.943134 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:17:32.546775 sshd[4308]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:32.554024 systemd-logind[1583]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:17:32.555009 systemd[1]: sshd@16-159.69.119.127:22-68.220.241.50:55836.service: Deactivated successfully.
Mar 14 00:17:32.558334 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:17:32.561613 systemd-logind[1583]: Removed session 17.
Mar 14 00:17:32.648864 systemd[1]: Started sshd@17-159.69.119.127:22-68.220.241.50:37060.service - OpenSSH per-connection server daemon (68.220.241.50:37060).
Mar 14 00:17:33.246638 sshd[4320]: Accepted publickey for core from 68.220.241.50 port 37060 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:33.249037 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:33.255724 systemd-logind[1583]: New session 18 of user core.
Mar 14 00:17:33.261035 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:17:33.739470 sshd[4320]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:33.747871 systemd[1]: sshd@17-159.69.119.127:22-68.220.241.50:37060.service: Deactivated successfully.
Mar 14 00:17:33.753346 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:17:33.755688 systemd-logind[1583]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:17:33.757114 systemd-logind[1583]: Removed session 18.
Mar 14 00:17:38.830423 update_engine[1590]: I20260314 00:17:38.830286 1590 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 14 00:17:38.830423 update_engine[1590]: I20260314 00:17:38.830364 1590 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 14 00:17:38.831196 update_engine[1590]: I20260314 00:17:38.830780 1590 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 14 00:17:38.832911 update_engine[1590]: I20260314 00:17:38.831344 1590 omaha_request_params.cc:62] Current group set to lts
Mar 14 00:17:38.832911 update_engine[1590]: I20260314 00:17:38.831503 1590 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 14 00:17:38.832911 update_engine[1590]: I20260314 00:17:38.831521 1590 update_attempter.cc:643] Scheduling an action processor start.
Mar 14 00:17:38.832911 update_engine[1590]: I20260314 00:17:38.831543 1590 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 14 00:17:38.832911 update_engine[1590]: I20260314 00:17:38.831609 1590 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 14 00:17:38.832911 update_engine[1590]: I20260314 00:17:38.831693 1590 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 14 00:17:38.832911 update_engine[1590]: I20260314 00:17:38.831707 1590 omaha_request_action.cc:272] Request:
Mar 14 00:17:38.832911 update_engine[1590]:
Mar 14 00:17:38.832911 update_engine[1590]:
Mar 14 00:17:38.832911 update_engine[1590]:
Mar 14 00:17:38.832911 update_engine[1590]:
Mar 14 00:17:38.832911 update_engine[1590]:
Mar 14 00:17:38.832911 update_engine[1590]:
Mar 14 00:17:38.832911 update_engine[1590]:
Mar 14 00:17:38.832911 update_engine[1590]:
Mar 14 00:17:38.832911 update_engine[1590]: I20260314 00:17:38.831718 1590 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:17:38.834849 locksmithd[1624]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 14 00:17:38.836442 update_engine[1590]: I20260314 00:17:38.835336 1590 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:17:38.836442 update_engine[1590]: I20260314 00:17:38.836014 1590 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:17:38.837729 update_engine[1590]: E20260314 00:17:38.836699 1590 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:17:38.837729 update_engine[1590]: I20260314 00:17:38.837702 1590 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 14 00:17:38.844116 systemd[1]: Started sshd@18-159.69.119.127:22-68.220.241.50:37068.service - OpenSSH per-connection server daemon (68.220.241.50:37068).
Mar 14 00:17:39.430645 sshd[4336]: Accepted publickey for core from 68.220.241.50 port 37068 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:39.432479 sshd[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:39.437995 systemd-logind[1583]: New session 19 of user core.
Mar 14 00:17:39.440993 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:17:39.916960 sshd[4336]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:39.923650 systemd[1]: sshd@18-159.69.119.127:22-68.220.241.50:37068.service: Deactivated successfully.
Mar 14 00:17:39.929200 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:17:39.930180 systemd-logind[1583]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:17:39.931107 systemd-logind[1583]: Removed session 19.
Mar 14 00:17:45.016859 systemd[1]: Started sshd@19-159.69.119.127:22-68.220.241.50:44960.service - OpenSSH per-connection server daemon (68.220.241.50:44960).
Mar 14 00:17:45.607667 sshd[4351]: Accepted publickey for core from 68.220.241.50 port 44960 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:45.610162 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:45.615628 systemd-logind[1583]: New session 20 of user core.
Mar 14 00:17:45.623045 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:17:46.093763 sshd[4351]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:46.098407 systemd[1]: sshd@19-159.69.119.127:22-68.220.241.50:44960.service: Deactivated successfully.
Mar 14 00:17:46.103114 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:17:46.104341 systemd-logind[1583]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:17:46.105434 systemd-logind[1583]: Removed session 20.
Mar 14 00:17:46.195023 systemd[1]: Started sshd@20-159.69.119.127:22-68.220.241.50:44968.service - OpenSSH per-connection server daemon (68.220.241.50:44968).
Mar 14 00:17:46.779623 sshd[4364]: Accepted publickey for core from 68.220.241.50 port 44968 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk
Mar 14 00:17:46.781291 sshd[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:46.786348 systemd-logind[1583]: New session 21 of user core.
Mar 14 00:17:46.794520 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:17:48.832182 update_engine[1590]: I20260314 00:17:48.831270 1590 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:17:48.832182 update_engine[1590]: I20260314 00:17:48.831723 1590 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:17:48.832182 update_engine[1590]: I20260314 00:17:48.832041 1590 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:17:48.833375 update_engine[1590]: E20260314 00:17:48.833323 1590 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:17:48.833423 update_engine[1590]: I20260314 00:17:48.833402 1590 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 14 00:17:49.111834 containerd[1618]: time="2026-03-14T00:17:49.110520003Z" level=info msg="StopContainer for \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\" with timeout 30 (s)"
Mar 14 00:17:49.111834 containerd[1618]: time="2026-03-14T00:17:49.111142483Z" level=info msg="Stop container \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\" with signal terminated"
Mar 14 00:17:49.143048 containerd[1618]: time="2026-03-14T00:17:49.142980075Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:17:49.155054 containerd[1618]: time="2026-03-14T00:17:49.154689312Z" level=info msg="StopContainer for \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\" with timeout 2 (s)"
Mar 14 00:17:49.156619 containerd[1618]: time="2026-03-14T00:17:49.156312232Z" level=info msg="Stop container \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\" with signal terminated"
Mar 14 00:17:49.160091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44-rootfs.mount: Deactivated successfully.
Mar 14 00:17:49.168112 systemd-networkd[1242]: lxc_health: Link DOWN
Mar 14 00:17:49.168119 systemd-networkd[1242]: lxc_health: Lost carrier
Mar 14 00:17:49.175237 containerd[1618]: time="2026-03-14T00:17:49.174728908Z" level=info msg="shim disconnected" id=cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44 namespace=k8s.io
Mar 14 00:17:49.175237 containerd[1618]: time="2026-03-14T00:17:49.174791988Z" level=warning msg="cleaning up after shim disconnected" id=cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44 namespace=k8s.io
Mar 14 00:17:49.175237 containerd[1618]: time="2026-03-14T00:17:49.174804748Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:49.204354 containerd[1618]: time="2026-03-14T00:17:49.204278061Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:17:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:17:49.213376 containerd[1618]: time="2026-03-14T00:17:49.212888699Z" level=info msg="StopContainer for \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\" returns successfully"
Mar 14 00:17:49.213304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0-rootfs.mount: Deactivated successfully.
Mar 14 00:17:49.215911 containerd[1618]: time="2026-03-14T00:17:49.215870658Z" level=info msg="StopPodSandbox for \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\""
Mar 14 00:17:49.216929 containerd[1618]: time="2026-03-14T00:17:49.216014058Z" level=info msg="Container to stop \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:17:49.219756 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89-shm.mount: Deactivated successfully.
Mar 14 00:17:49.221986 containerd[1618]: time="2026-03-14T00:17:49.221798217Z" level=info msg="shim disconnected" id=d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0 namespace=k8s.io
Mar 14 00:17:49.221986 containerd[1618]: time="2026-03-14T00:17:49.221850377Z" level=warning msg="cleaning up after shim disconnected" id=d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0 namespace=k8s.io
Mar 14 00:17:49.221986 containerd[1618]: time="2026-03-14T00:17:49.221858537Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:49.242002 containerd[1618]: time="2026-03-14T00:17:49.241929092Z" level=info msg="StopContainer for \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\" returns successfully"
Mar 14 00:17:49.243290 containerd[1618]: time="2026-03-14T00:17:49.243150252Z" level=info msg="StopPodSandbox for \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\""
Mar 14 00:17:49.243290 containerd[1618]: time="2026-03-14T00:17:49.243191572Z" level=info msg="Container to stop \"465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:17:49.243290 containerd[1618]: time="2026-03-14T00:17:49.243204052Z" level=info msg="Container to stop \"e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:17:49.243668 containerd[1618]: time="2026-03-14T00:17:49.243215692Z" level=info msg="Container to stop \"450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:17:49.243668 containerd[1618]: time="2026-03-14T00:17:49.243381812Z" level=info msg="Container to stop \"86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:17:49.243668 containerd[1618]: time="2026-03-14T00:17:49.243394932Z" level=info msg="Container to stop \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:17:49.263342 containerd[1618]: time="2026-03-14T00:17:49.263064447Z" level=info msg="shim disconnected" id=86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89 namespace=k8s.io
Mar 14 00:17:49.263342 containerd[1618]: time="2026-03-14T00:17:49.263132887Z" level=warning msg="cleaning up after shim disconnected" id=86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89 namespace=k8s.io
Mar 14 00:17:49.263342 containerd[1618]: time="2026-03-14T00:17:49.263141407Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:49.284398 containerd[1618]: time="2026-03-14T00:17:49.283914282Z" level=info msg="TearDown network for sandbox \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\" successfully"
Mar 14 00:17:49.284398 containerd[1618]: time="2026-03-14T00:17:49.283947962Z" level=info msg="StopPodSandbox for \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\" returns successfully"
Mar 14 00:17:49.287780 containerd[1618]: time="2026-03-14T00:17:49.287708201Z" level=info msg="shim disconnected" id=5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790 namespace=k8s.io
Mar 14 00:17:49.287780 containerd[1618]: time="2026-03-14T00:17:49.287772321Z" level=warning msg="cleaning up after shim disconnected" id=5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790 namespace=k8s.io
Mar 14 00:17:49.287891 containerd[1618]: time="2026-03-14T00:17:49.287789001Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:49.307856 containerd[1618]: time="2026-03-14T00:17:49.307813917Z" level=info msg="TearDown network for sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" successfully"
Mar 14 00:17:49.307856 containerd[1618]: time="2026-03-14T00:17:49.307852116Z" level=info msg="StopPodSandbox for \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" returns successfully"
Mar 14 00:17:49.384595 kubelet[2791]: I0314 00:17:49.381823 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f8a2fcc-1c34-4933-9b05-3a82ca6f7633-cilium-config-path\") pod \"6f8a2fcc-1c34-4933-9b05-3a82ca6f7633\" (UID: \"6f8a2fcc-1c34-4933-9b05-3a82ca6f7633\") "
Mar 14 00:17:49.384595 kubelet[2791]: I0314 00:17:49.381918 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wd6k\" (UniqueName: \"kubernetes.io/projected/6f8a2fcc-1c34-4933-9b05-3a82ca6f7633-kube-api-access-2wd6k\") pod \"6f8a2fcc-1c34-4933-9b05-3a82ca6f7633\" (UID: \"6f8a2fcc-1c34-4933-9b05-3a82ca6f7633\") "
Mar 14 00:17:49.387264 kubelet[2791]: I0314 00:17:49.387008 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8a2fcc-1c34-4933-9b05-3a82ca6f7633-kube-api-access-2wd6k" (OuterVolumeSpecName: "kube-api-access-2wd6k") pod "6f8a2fcc-1c34-4933-9b05-3a82ca6f7633" (UID: "6f8a2fcc-1c34-4933-9b05-3a82ca6f7633"). InnerVolumeSpecName "kube-api-access-2wd6k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:17:49.390648 kubelet[2791]: I0314 00:17:49.390535 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8a2fcc-1c34-4933-9b05-3a82ca6f7633-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f8a2fcc-1c34-4933-9b05-3a82ca6f7633" (UID: "6f8a2fcc-1c34-4933-9b05-3a82ca6f7633"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:17:49.484269 kubelet[2791]: I0314 00:17:49.482865 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnfnj\" (UniqueName: \"kubernetes.io/projected/e547a88e-d9a4-459a-a152-320dae3b5b92-kube-api-access-qnfnj\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.484269 kubelet[2791]: I0314 00:17:49.482945 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-etc-cni-netd\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.484269 kubelet[2791]: I0314 00:17:49.483029 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e547a88e-d9a4-459a-a152-320dae3b5b92-cilium-config-path\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.484269 kubelet[2791]: I0314 00:17:49.483065 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-cni-path\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.484269 kubelet[2791]: I0314 00:17:49.483098 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-hostproc\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.484269 kubelet[2791]: I0314 00:17:49.483128 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-host-proc-sys-net\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.484789 kubelet[2791]: I0314 00:17:49.483171 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e547a88e-d9a4-459a-a152-320dae3b5b92-hubble-tls\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.484789 kubelet[2791]: I0314 00:17:49.483208 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-host-proc-sys-kernel\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.484789 kubelet[2791]: I0314 00:17:49.483266 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-xtables-lock\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.484789 kubelet[2791]: I0314 00:17:49.483300 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-cilium-cgroup\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.484789 kubelet[2791]: I0314 00:17:49.483343 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e547a88e-d9a4-459a-a152-320dae3b5b92-clustermesh-secrets\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.484789 kubelet[2791]: I0314 00:17:49.483378 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-cilium-run\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.485111 kubelet[2791]: I0314 00:17:49.483408 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-bpf-maps\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.485111 kubelet[2791]: I0314 00:17:49.483449 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-lib-modules\") pod \"e547a88e-d9a4-459a-a152-320dae3b5b92\" (UID: \"e547a88e-d9a4-459a-a152-320dae3b5b92\") "
Mar 14 00:17:49.485111 kubelet[2791]: I0314 00:17:49.483529 2791 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f8a2fcc-1c34-4933-9b05-3a82ca6f7633-cilium-config-path\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.485111 kubelet[2791]: I0314 00:17:49.483561 2791 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2wd6k\" (UniqueName: \"kubernetes.io/projected/6f8a2fcc-1c34-4933-9b05-3a82ca6f7633-kube-api-access-2wd6k\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.485111 kubelet[2791]: I0314 00:17:49.483735 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:17:49.485502 kubelet[2791]: I0314 00:17:49.485337 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:17:49.488954 kubelet[2791]: I0314 00:17:49.488770 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-cni-path" (OuterVolumeSpecName: "cni-path") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:17:49.488954 kubelet[2791]: I0314 00:17:49.488835 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-hostproc" (OuterVolumeSpecName: "hostproc") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:17:49.488954 kubelet[2791]: I0314 00:17:49.488859 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:17:49.489612 kubelet[2791]: I0314 00:17:49.489205 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:17:49.489612 kubelet[2791]: I0314 00:17:49.489314 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:17:49.489612 kubelet[2791]: I0314 00:17:49.489372 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:17:49.489612 kubelet[2791]: I0314 00:17:49.489408 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:17:49.489925 kubelet[2791]: I0314 00:17:49.489739 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:17:49.491760 kubelet[2791]: I0314 00:17:49.491711 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e547a88e-d9a4-459a-a152-320dae3b5b92-kube-api-access-qnfnj" (OuterVolumeSpecName: "kube-api-access-qnfnj") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "kube-api-access-qnfnj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:17:49.492023 kubelet[2791]: I0314 00:17:49.491992 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e547a88e-d9a4-459a-a152-320dae3b5b92-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:17:49.493072 kubelet[2791]: I0314 00:17:49.492793 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e547a88e-d9a4-459a-a152-320dae3b5b92-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:17:49.494525 kubelet[2791]: I0314 00:17:49.494475 2791 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e547a88e-d9a4-459a-a152-320dae3b5b92-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e547a88e-d9a4-459a-a152-320dae3b5b92" (UID: "e547a88e-d9a4-459a-a152-320dae3b5b92"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 14 00:17:49.585664 kubelet[2791]: I0314 00:17:49.585410 2791 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-lib-modules\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.585664 kubelet[2791]: I0314 00:17:49.585466 2791 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qnfnj\" (UniqueName: \"kubernetes.io/projected/e547a88e-d9a4-459a-a152-320dae3b5b92-kube-api-access-qnfnj\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.585664 kubelet[2791]: I0314 00:17:49.585487 2791 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-etc-cni-netd\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.585664 kubelet[2791]: I0314 00:17:49.585504 2791 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e547a88e-d9a4-459a-a152-320dae3b5b92-cilium-config-path\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.585664 kubelet[2791]: I0314 00:17:49.585520 2791 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-cni-path\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.585664 kubelet[2791]: I0314 00:17:49.585537 2791 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-hostproc\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.585664 kubelet[2791]: I0314 00:17:49.585554 2791 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-host-proc-sys-net\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.586340 kubelet[2791]: I0314 00:17:49.586116 2791 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e547a88e-d9a4-459a-a152-320dae3b5b92-hubble-tls\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.586340 kubelet[2791]: I0314 00:17:49.586158 2791 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.586340 kubelet[2791]: I0314 00:17:49.586177 2791 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-xtables-lock\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.586340 kubelet[2791]: I0314 00:17:49.586241 2791 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-cilium-cgroup\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.586340 kubelet[2791]: I0314 00:17:49.586264 2791 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e547a88e-d9a4-459a-a152-320dae3b5b92-clustermesh-secrets\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.586340 kubelet[2791]: I0314 00:17:49.586298 2791 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-cilium-run\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.586340 kubelet[2791]: I0314 00:17:49.586314 2791 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e547a88e-d9a4-459a-a152-320dae3b5b92-bpf-maps\") on node \"ci-4081-3-6-n-0dd818c04e\" DevicePath \"\""
Mar 14 00:17:49.783116 kubelet[2791]: I0314 00:17:49.782859 2791 scope.go:117] "RemoveContainer" containerID="cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44"
Mar 14 00:17:49.788628 containerd[1618]: time="2026-03-14T00:17:49.788193924Z" level=info msg="RemoveContainer for \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\""
Mar 14 00:17:49.795788 containerd[1618]: time="2026-03-14T00:17:49.795753922Z" level=info msg="RemoveContainer for \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\" returns successfully"
Mar 14 00:17:49.796131 kubelet[2791]: I0314 00:17:49.796112 2791 scope.go:117] "RemoveContainer" containerID="cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44"
Mar 14 00:17:49.796480 containerd[1618]: time="2026-03-14T00:17:49.796448642Z" level=error msg="ContainerStatus for \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\": not found"
Mar 14 00:17:49.796692 kubelet[2791]: E0314 00:17:49.796586 2791 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\": not found" containerID="cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44"
Mar 14 00:17:49.796692 kubelet[2791]: I0314 00:17:49.796613 2791 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44"} err="failed to get container status \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbc00fe367e735dacdc92e3180e2a1d62506c524e67127f8a6286b7f23f84c44\": not found"
Mar 14 00:17:49.796692 kubelet[2791]: I0314 00:17:49.796648 2791 scope.go:117] "RemoveContainer" containerID="d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0"
Mar 14 00:17:49.799528 containerd[1618]: time="2026-03-14T00:17:49.799427121Z" level=info msg="RemoveContainer for \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\""
Mar 14 00:17:49.804783 containerd[1618]: time="2026-03-14T00:17:49.804728160Z" level=info msg="RemoveContainer for \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\" returns successfully"
Mar 14 00:17:49.805167 kubelet[2791]: I0314 00:17:49.804944 2791 scope.go:117] "RemoveContainer" containerID="450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a"
Mar 14 00:17:49.806704 containerd[1618]: time="2026-03-14T00:17:49.806671399Z" level=info msg="RemoveContainer for \"450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a\""
Mar 14 00:17:49.810189 containerd[1618]: time="2026-03-14T00:17:49.810152839Z" level=info msg="RemoveContainer for
\"450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a\" returns successfully" Mar 14 00:17:49.810914 kubelet[2791]: I0314 00:17:49.810412 2791 scope.go:117] "RemoveContainer" containerID="e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10" Mar 14 00:17:49.812349 containerd[1618]: time="2026-03-14T00:17:49.812316078Z" level=info msg="RemoveContainer for \"e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10\"" Mar 14 00:17:49.818439 containerd[1618]: time="2026-03-14T00:17:49.818113917Z" level=info msg="RemoveContainer for \"e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10\" returns successfully" Mar 14 00:17:49.818568 kubelet[2791]: I0314 00:17:49.818402 2791 scope.go:117] "RemoveContainer" containerID="465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c" Mar 14 00:17:49.821287 containerd[1618]: time="2026-03-14T00:17:49.821260356Z" level=info msg="RemoveContainer for \"465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c\"" Mar 14 00:17:49.824518 containerd[1618]: time="2026-03-14T00:17:49.824487595Z" level=info msg="RemoveContainer for \"465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c\" returns successfully" Mar 14 00:17:49.824885 kubelet[2791]: I0314 00:17:49.824783 2791 scope.go:117] "RemoveContainer" containerID="86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621" Mar 14 00:17:49.826168 containerd[1618]: time="2026-03-14T00:17:49.826141795Z" level=info msg="RemoveContainer for \"86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621\"" Mar 14 00:17:49.829615 containerd[1618]: time="2026-03-14T00:17:49.829564754Z" level=info msg="RemoveContainer for \"86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621\" returns successfully" Mar 14 00:17:49.830484 kubelet[2791]: I0314 00:17:49.830467 2791 scope.go:117] "RemoveContainer" containerID="d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0" Mar 14 00:17:49.830924 
containerd[1618]: time="2026-03-14T00:17:49.830867474Z" level=error msg="ContainerStatus for \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\": not found" Mar 14 00:17:49.831287 kubelet[2791]: E0314 00:17:49.831120 2791 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\": not found" containerID="d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0" Mar 14 00:17:49.831287 kubelet[2791]: I0314 00:17:49.831153 2791 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0"} err="failed to get container status \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\": rpc error: code = NotFound desc = an error occurred when try to find container \"d83b932d3106c76335170df5f6c7118dcaa47de70f6391033aa047094eb51cd0\": not found" Mar 14 00:17:49.831287 kubelet[2791]: I0314 00:17:49.831172 2791 scope.go:117] "RemoveContainer" containerID="450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a" Mar 14 00:17:49.831799 containerd[1618]: time="2026-03-14T00:17:49.831523354Z" level=error msg="ContainerStatus for \"450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a\": not found" Mar 14 00:17:49.832203 kubelet[2791]: E0314 00:17:49.831750 2791 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a\": not found" containerID="450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a" Mar 14 00:17:49.832203 kubelet[2791]: I0314 00:17:49.832020 2791 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a"} err="failed to get container status \"450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a\": rpc error: code = NotFound desc = an error occurred when try to find container \"450e5a45e1067d1338fb205485c25586547933075320b29632d07e8430e9f77a\": not found" Mar 14 00:17:49.832203 kubelet[2791]: I0314 00:17:49.832038 2791 scope.go:117] "RemoveContainer" containerID="e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10" Mar 14 00:17:49.832932 containerd[1618]: time="2026-03-14T00:17:49.832552033Z" level=error msg="ContainerStatus for \"e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10\": not found" Mar 14 00:17:49.832997 kubelet[2791]: E0314 00:17:49.832825 2791 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10\": not found" containerID="e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10" Mar 14 00:17:49.832997 kubelet[2791]: I0314 00:17:49.832864 2791 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10"} err="failed to get container status \"e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"e9c53464512992bbff467d3cb108b5ec71f233829bfcf908b8e709b2c4ce3f10\": not found" Mar 14 00:17:49.832997 kubelet[2791]: I0314 00:17:49.832881 2791 scope.go:117] "RemoveContainer" containerID="465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c" Mar 14 00:17:49.833439 containerd[1618]: time="2026-03-14T00:17:49.833179673Z" level=error msg="ContainerStatus for \"465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c\": not found" Mar 14 00:17:49.833492 kubelet[2791]: E0314 00:17:49.833311 2791 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c\": not found" containerID="465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c" Mar 14 00:17:49.833492 kubelet[2791]: I0314 00:17:49.833376 2791 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c"} err="failed to get container status \"465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c\": rpc error: code = NotFound desc = an error occurred when try to find container \"465ddd5dda90a8b021753eb4ab8691fa12941c4f129a09653e6d9d8afdb7189c\": not found" Mar 14 00:17:49.833492 kubelet[2791]: I0314 00:17:49.833392 2791 scope.go:117] "RemoveContainer" containerID="86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621" Mar 14 00:17:49.833978 containerd[1618]: time="2026-03-14T00:17:49.833823033Z" level=error msg="ContainerStatus for \"86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621\": not found" Mar 14 00:17:49.834037 kubelet[2791]: E0314 00:17:49.833925 2791 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621\": not found" containerID="86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621" Mar 14 00:17:49.834037 kubelet[2791]: I0314 00:17:49.833946 2791 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621"} err="failed to get container status \"86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621\": rpc error: code = NotFound desc = an error occurred when try to find container \"86b11ba5bce57ebfbc476d94e70c437a4e97e73b8b9c14ed4ac707af106eb621\": not found" Mar 14 00:17:50.122963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89-rootfs.mount: Deactivated successfully. Mar 14 00:17:50.123242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790-rootfs.mount: Deactivated successfully. Mar 14 00:17:50.123426 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790-shm.mount: Deactivated successfully. Mar 14 00:17:50.125759 systemd[1]: var-lib-kubelet-pods-6f8a2fcc\x2d1c34\x2d4933\x2d9b05\x2d3a82ca6f7633-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2wd6k.mount: Deactivated successfully. Mar 14 00:17:50.126049 systemd[1]: var-lib-kubelet-pods-e547a88e\x2dd9a4\x2d459a\x2da152\x2d320dae3b5b92-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqnfnj.mount: Deactivated successfully. 
Mar 14 00:17:50.126247 systemd[1]: var-lib-kubelet-pods-e547a88e\x2dd9a4\x2d459a\x2da152\x2d320dae3b5b92-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 14 00:17:50.126438 systemd[1]: var-lib-kubelet-pods-e547a88e\x2dd9a4\x2d459a\x2da152\x2d320dae3b5b92-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 14 00:17:51.142018 sshd[4364]: pam_unix(sshd:session): session closed for user core Mar 14 00:17:51.148470 systemd[1]: sshd@20-159.69.119.127:22-68.220.241.50:44968.service: Deactivated successfully. Mar 14 00:17:51.152768 systemd[1]: session-21.scope: Deactivated successfully. Mar 14 00:17:51.155179 systemd-logind[1583]: Session 21 logged out. Waiting for processes to exit. Mar 14 00:17:51.156463 systemd-logind[1583]: Removed session 21. Mar 14 00:17:51.238933 kubelet[2791]: I0314 00:17:51.238881 2791 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8a2fcc-1c34-4933-9b05-3a82ca6f7633" path="/var/lib/kubelet/pods/6f8a2fcc-1c34-4933-9b05-3a82ca6f7633/volumes" Mar 14 00:17:51.239390 kubelet[2791]: I0314 00:17:51.239363 2791 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e547a88e-d9a4-459a-a152-320dae3b5b92" path="/var/lib/kubelet/pods/e547a88e-d9a4-459a-a152-320dae3b5b92/volumes" Mar 14 00:17:51.246979 systemd[1]: Started sshd@21-159.69.119.127:22-68.220.241.50:44974.service - OpenSSH per-connection server daemon (68.220.241.50:44974). 
Mar 14 00:17:51.345675 kubelet[2791]: E0314 00:17:51.345617 2791 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 14 00:17:51.833777 sshd[4536]: Accepted publickey for core from 68.220.241.50 port 44974 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:17:51.836527 sshd[4536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:17:51.842767 systemd-logind[1583]: New session 22 of user core. Mar 14 00:17:51.853023 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 14 00:17:52.974299 sshd[4536]: pam_unix(sshd:session): session closed for user core Mar 14 00:17:52.989753 systemd[1]: sshd@21-159.69.119.127:22-68.220.241.50:44974.service: Deactivated successfully. Mar 14 00:17:52.991677 systemd-logind[1583]: Session 22 logged out. Waiting for processes to exit. Mar 14 00:17:53.001738 systemd[1]: session-22.scope: Deactivated successfully. 
Mar 14 00:17:53.008642 kubelet[2791]: I0314 00:17:53.008602 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-cni-path\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.008642 kubelet[2791]: I0314 00:17:53.008642 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-xtables-lock\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.009050 kubelet[2791]: I0314 00:17:53.008665 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-cilium-config-path\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.009050 kubelet[2791]: I0314 00:17:53.008680 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-host-proc-sys-net\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.009050 kubelet[2791]: I0314 00:17:53.008713 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-cilium-cgroup\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.009050 kubelet[2791]: I0314 00:17:53.008732 2791 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-cilium-run\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.009050 kubelet[2791]: I0314 00:17:53.008749 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-clustermesh-secrets\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.009164 kubelet[2791]: I0314 00:17:53.008766 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-host-proc-sys-kernel\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.009164 kubelet[2791]: I0314 00:17:53.008784 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-hubble-tls\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.009164 kubelet[2791]: I0314 00:17:53.008799 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-etc-cni-netd\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.009164 kubelet[2791]: I0314 00:17:53.008814 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-lib-modules\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.009164 kubelet[2791]: I0314 00:17:53.008828 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-hostproc\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.009164 kubelet[2791]: I0314 00:17:53.008843 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z46c6\" (UniqueName: \"kubernetes.io/projected/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-kube-api-access-z46c6\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.009326 kubelet[2791]: I0314 00:17:53.008861 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-bpf-maps\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.009326 kubelet[2791]: I0314 00:17:53.008875 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5db48a3-9ab7-4c5c-8754-28cbb476d5bf-cilium-ipsec-secrets\") pod \"cilium-62p4t\" (UID: \"b5db48a3-9ab7-4c5c-8754-28cbb476d5bf\") " pod="kube-system/cilium-62p4t" Mar 14 00:17:53.010141 systemd-logind[1583]: Removed session 22. Mar 14 00:17:53.075083 systemd[1]: Started sshd@22-159.69.119.127:22-68.220.241.50:49414.service - OpenSSH per-connection server daemon (68.220.241.50:49414). 
Mar 14 00:17:53.228320 containerd[1618]: time="2026-03-14T00:17:53.228100159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-62p4t,Uid:b5db48a3-9ab7-4c5c-8754-28cbb476d5bf,Namespace:kube-system,Attempt:0,}" Mar 14 00:17:53.258232 containerd[1618]: time="2026-03-14T00:17:53.257858254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:17:53.258232 containerd[1618]: time="2026-03-14T00:17:53.257939654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:17:53.258232 containerd[1618]: time="2026-03-14T00:17:53.257965214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:53.258232 containerd[1618]: time="2026-03-14T00:17:53.258096494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:17:53.299606 containerd[1618]: time="2026-03-14T00:17:53.299430811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-62p4t,Uid:b5db48a3-9ab7-4c5c-8754-28cbb476d5bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8ecdecae4316699f2de5612dd042a9201bc7038b730b80f0058835a7b1260ac\"" Mar 14 00:17:53.309256 containerd[1618]: time="2026-03-14T00:17:53.309209469Z" level=info msg="CreateContainer within sandbox \"e8ecdecae4316699f2de5612dd042a9201bc7038b730b80f0058835a7b1260ac\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:17:53.321808 containerd[1618]: time="2026-03-14T00:17:53.321752332Z" level=info msg="CreateContainer within sandbox \"e8ecdecae4316699f2de5612dd042a9201bc7038b730b80f0058835a7b1260ac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7e5fffd8571dfb3f2dc840f1a75d20e4444750c03bb91a161c0f243ad5cff5e2\"" Mar 14 00:17:53.322532 
containerd[1618]: time="2026-03-14T00:17:53.322481453Z" level=info msg="StartContainer for \"7e5fffd8571dfb3f2dc840f1a75d20e4444750c03bb91a161c0f243ad5cff5e2\"" Mar 14 00:17:53.382162 containerd[1618]: time="2026-03-14T00:17:53.382118044Z" level=info msg="StartContainer for \"7e5fffd8571dfb3f2dc840f1a75d20e4444750c03bb91a161c0f243ad5cff5e2\" returns successfully" Mar 14 00:17:53.437855 containerd[1618]: time="2026-03-14T00:17:53.437662907Z" level=info msg="shim disconnected" id=7e5fffd8571dfb3f2dc840f1a75d20e4444750c03bb91a161c0f243ad5cff5e2 namespace=k8s.io Mar 14 00:17:53.437855 containerd[1618]: time="2026-03-14T00:17:53.437736387Z" level=warning msg="cleaning up after shim disconnected" id=7e5fffd8571dfb3f2dc840f1a75d20e4444750c03bb91a161c0f243ad5cff5e2 namespace=k8s.io Mar 14 00:17:53.437855 containerd[1618]: time="2026-03-14T00:17:53.437747787Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:17:53.665051 sshd[4548]: Accepted publickey for core from 68.220.241.50 port 49414 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:17:53.667429 sshd[4548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:17:53.672700 systemd-logind[1583]: New session 23 of user core. Mar 14 00:17:53.680217 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 14 00:17:53.817609 containerd[1618]: time="2026-03-14T00:17:53.817538210Z" level=info msg="CreateContainer within sandbox \"e8ecdecae4316699f2de5612dd042a9201bc7038b730b80f0058835a7b1260ac\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:17:53.833660 containerd[1618]: time="2026-03-14T00:17:53.833532479Z" level=info msg="CreateContainer within sandbox \"e8ecdecae4316699f2de5612dd042a9201bc7038b730b80f0058835a7b1260ac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a6c3248d6119e7192aa458761415e5f961a3872278039aa9a0045e48b0def47d\"" Mar 14 00:17:53.836959 containerd[1618]: time="2026-03-14T00:17:53.834760722Z" level=info msg="StartContainer for \"a6c3248d6119e7192aa458761415e5f961a3872278039aa9a0045e48b0def47d\"" Mar 14 00:17:53.896803 containerd[1618]: time="2026-03-14T00:17:53.896725876Z" level=info msg="StartContainer for \"a6c3248d6119e7192aa458761415e5f961a3872278039aa9a0045e48b0def47d\" returns successfully" Mar 14 00:17:53.934343 containerd[1618]: time="2026-03-14T00:17:53.934189106Z" level=info msg="shim disconnected" id=a6c3248d6119e7192aa458761415e5f961a3872278039aa9a0045e48b0def47d namespace=k8s.io Mar 14 00:17:53.934650 containerd[1618]: time="2026-03-14T00:17:53.934603827Z" level=warning msg="cleaning up after shim disconnected" id=a6c3248d6119e7192aa458761415e5f961a3872278039aa9a0045e48b0def47d namespace=k8s.io Mar 14 00:17:53.934751 containerd[1618]: time="2026-03-14T00:17:53.934728787Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:17:54.080969 sshd[4548]: pam_unix(sshd:session): session closed for user core Mar 14 00:17:54.088135 systemd[1]: sshd@22-159.69.119.127:22-68.220.241.50:49414.service: Deactivated successfully. Mar 14 00:17:54.092070 systemd[1]: session-23.scope: Deactivated successfully. Mar 14 00:17:54.093715 systemd-logind[1583]: Session 23 logged out. Waiting for processes to exit. 
Mar 14 00:17:54.094920 systemd-logind[1583]: Removed session 23. Mar 14 00:17:54.182209 systemd[1]: Started sshd@23-159.69.119.127:22-68.220.241.50:49430.service - OpenSSH per-connection server daemon (68.220.241.50:49430). Mar 14 00:17:54.768811 sshd[4725]: Accepted publickey for core from 68.220.241.50 port 49430 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:17:54.771699 sshd[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:17:54.777296 systemd-logind[1583]: New session 24 of user core. Mar 14 00:17:54.779886 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 14 00:17:54.825326 containerd[1618]: time="2026-03-14T00:17:54.824188166Z" level=info msg="CreateContainer within sandbox \"e8ecdecae4316699f2de5612dd042a9201bc7038b730b80f0058835a7b1260ac\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 14 00:17:54.851600 containerd[1618]: time="2026-03-14T00:17:54.851524070Z" level=info msg="CreateContainer within sandbox \"e8ecdecae4316699f2de5612dd042a9201bc7038b730b80f0058835a7b1260ac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b26d71393f1f2963f9f9a8b5efc54b84999af6be06c92fc74baa6a11a10b3031\"" Mar 14 00:17:54.854860 containerd[1618]: time="2026-03-14T00:17:54.854561797Z" level=info msg="StartContainer for \"b26d71393f1f2963f9f9a8b5efc54b84999af6be06c92fc74baa6a11a10b3031\"" Mar 14 00:17:54.935600 containerd[1618]: time="2026-03-14T00:17:54.933821583Z" level=info msg="StartContainer for \"b26d71393f1f2963f9f9a8b5efc54b84999af6be06c92fc74baa6a11a10b3031\" returns successfully" Mar 14 00:17:54.963293 containerd[1618]: time="2026-03-14T00:17:54.963116132Z" level=info msg="shim disconnected" id=b26d71393f1f2963f9f9a8b5efc54b84999af6be06c92fc74baa6a11a10b3031 namespace=k8s.io Mar 14 00:17:54.963293 containerd[1618]: time="2026-03-14T00:17:54.963194973Z" level=warning msg="cleaning up after shim disconnected" 
id=b26d71393f1f2963f9f9a8b5efc54b84999af6be06c92fc74baa6a11a10b3031 namespace=k8s.io Mar 14 00:17:54.963293 containerd[1618]: time="2026-03-14T00:17:54.963205373Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:17:55.121290 systemd[1]: run-containerd-runc-k8s.io-b26d71393f1f2963f9f9a8b5efc54b84999af6be06c92fc74baa6a11a10b3031-runc.4dJbHI.mount: Deactivated successfully. Mar 14 00:17:55.121443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b26d71393f1f2963f9f9a8b5efc54b84999af6be06c92fc74baa6a11a10b3031-rootfs.mount: Deactivated successfully. Mar 14 00:17:55.578120 kubelet[2791]: I0314 00:17:55.577187 2791 setters.go:618] "Node became not ready" node="ci-4081-3-6-n-0dd818c04e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T00:17:55Z","lastTransitionTime":"2026-03-14T00:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 14 00:17:55.832688 containerd[1618]: time="2026-03-14T00:17:55.832053946Z" level=info msg="CreateContainer within sandbox \"e8ecdecae4316699f2de5612dd042a9201bc7038b730b80f0058835a7b1260ac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 14 00:17:55.851303 containerd[1618]: time="2026-03-14T00:17:55.851260401Z" level=info msg="CreateContainer within sandbox \"e8ecdecae4316699f2de5612dd042a9201bc7038b730b80f0058835a7b1260ac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8a5af6ce820d4c0f0843422b3e2c11ce9c15ff6bbb897094201441c651a9784c\"" Mar 14 00:17:55.854683 containerd[1618]: time="2026-03-14T00:17:55.854641770Z" level=info msg="StartContainer for \"8a5af6ce820d4c0f0843422b3e2c11ce9c15ff6bbb897094201441c651a9784c\"" Mar 14 00:17:55.935252 containerd[1618]: time="2026-03-14T00:17:55.933398595Z" level=info msg="StartContainer for 
\"8a5af6ce820d4c0f0843422b3e2c11ce9c15ff6bbb897094201441c651a9784c\" returns successfully" Mar 14 00:17:55.957799 containerd[1618]: time="2026-03-14T00:17:55.957562383Z" level=info msg="shim disconnected" id=8a5af6ce820d4c0f0843422b3e2c11ce9c15ff6bbb897094201441c651a9784c namespace=k8s.io Mar 14 00:17:55.957799 containerd[1618]: time="2026-03-14T00:17:55.957639384Z" level=warning msg="cleaning up after shim disconnected" id=8a5af6ce820d4c0f0843422b3e2c11ce9c15ff6bbb897094201441c651a9784c namespace=k8s.io Mar 14 00:17:55.957799 containerd[1618]: time="2026-03-14T00:17:55.957648784Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:17:56.120895 systemd[1]: run-containerd-runc-k8s.io-8a5af6ce820d4c0f0843422b3e2c11ce9c15ff6bbb897094201441c651a9784c-runc.XyWg5X.mount: Deactivated successfully. Mar 14 00:17:56.121858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a5af6ce820d4c0f0843422b3e2c11ce9c15ff6bbb897094201441c651a9784c-rootfs.mount: Deactivated successfully. 
Mar 14 00:17:56.235985 kubelet[2791]: E0314 00:17:56.235748 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-7fvph" podUID="cf371d76-9578-4630-b5a6-64a41afb6007" Mar 14 00:17:56.347609 kubelet[2791]: E0314 00:17:56.347369 2791 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 14 00:17:56.835722 containerd[1618]: time="2026-03-14T00:17:56.834238163Z" level=info msg="CreateContainer within sandbox \"e8ecdecae4316699f2de5612dd042a9201bc7038b730b80f0058835a7b1260ac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 14 00:17:56.857198 containerd[1618]: time="2026-03-14T00:17:56.857036519Z" level=info msg="CreateContainer within sandbox \"e8ecdecae4316699f2de5612dd042a9201bc7038b730b80f0058835a7b1260ac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3ee98c9738d5742633c7852a0ee7dccb6a0d8e2d61e5e1dd4cd758950961cc49\"" Mar 14 00:17:56.859006 containerd[1618]: time="2026-03-14T00:17:56.858020242Z" level=info msg="StartContainer for \"3ee98c9738d5742633c7852a0ee7dccb6a0d8e2d61e5e1dd4cd758950961cc49\"" Mar 14 00:17:56.942702 containerd[1618]: time="2026-03-14T00:17:56.941810081Z" level=info msg="StartContainer for \"3ee98c9738d5742633c7852a0ee7dccb6a0d8e2d61e5e1dd4cd758950961cc49\" returns successfully" Mar 14 00:17:57.273667 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Mar 14 00:17:57.855492 kubelet[2791]: I0314 00:17:57.855387 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-62p4t" podStartSLOduration=5.855370253 podStartE2EDuration="5.855370253s" podCreationTimestamp="2026-03-14 00:17:52 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:17:57.850693075 +0000 UTC m=+196.753859372" watchObservedRunningTime="2026-03-14 00:17:57.855370253 +0000 UTC m=+196.758536510" Mar 14 00:17:58.236111 kubelet[2791]: E0314 00:17:58.234964 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-7fvph" podUID="cf371d76-9578-4630-b5a6-64a41afb6007" Mar 14 00:17:58.832536 update_engine[1590]: I20260314 00:17:58.831641 1590 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 14 00:17:58.832536 update_engine[1590]: I20260314 00:17:58.831939 1590 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 14 00:17:58.832536 update_engine[1590]: I20260314 00:17:58.832211 1590 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 14 00:17:58.834447 update_engine[1590]: E20260314 00:17:58.834354 1590 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 14 00:17:58.834447 update_engine[1590]: I20260314 00:17:58.834421 1590 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 14 00:18:00.223707 systemd-networkd[1242]: lxc_health: Link UP Mar 14 00:18:00.230285 systemd-networkd[1242]: lxc_health: Gained carrier Mar 14 00:18:00.234784 kubelet[2791]: E0314 00:18:00.234411 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-7fvph" podUID="cf371d76-9578-4630-b5a6-64a41afb6007" Mar 14 00:18:01.579251 systemd[1]: run-containerd-runc-k8s.io-3ee98c9738d5742633c7852a0ee7dccb6a0d8e2d61e5e1dd4cd758950961cc49-runc.Lntd0W.mount: Deactivated successfully. Mar 14 00:18:02.228757 systemd-networkd[1242]: lxc_health: Gained IPv6LL Mar 14 00:18:03.806205 kubelet[2791]: E0314 00:18:03.804956 2791 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53766->127.0.0.1:36749: write tcp 127.0.0.1:53766->127.0.0.1:36749: write: broken pipe Mar 14 00:18:06.050706 sshd[4725]: pam_unix(sshd:session): session closed for user core Mar 14 00:18:06.055058 systemd[1]: sshd@23-159.69.119.127:22-68.220.241.50:49430.service: Deactivated successfully. Mar 14 00:18:06.059867 systemd[1]: session-24.scope: Deactivated successfully. Mar 14 00:18:06.062124 systemd-logind[1583]: Session 24 logged out. Waiting for processes to exit. Mar 14 00:18:06.063282 systemd-logind[1583]: Removed session 24. 
Mar 14 00:18:08.832681 update_engine[1590]: I20260314 00:18:08.832120 1590 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 14 00:18:08.832681 update_engine[1590]: I20260314 00:18:08.832525 1590 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 14 00:18:08.833525 update_engine[1590]: I20260314 00:18:08.832923 1590 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 14 00:18:08.833932 update_engine[1590]: E20260314 00:18:08.833868 1590 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 14 00:18:08.834037 update_engine[1590]: I20260314 00:18:08.833945 1590 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 14 00:18:08.834037 update_engine[1590]: I20260314 00:18:08.833961 1590 omaha_request_action.cc:617] Omaha request response: Mar 14 00:18:08.834159 update_engine[1590]: E20260314 00:18:08.834085 1590 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 14 00:18:08.834159 update_engine[1590]: I20260314 00:18:08.834111 1590 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 14 00:18:08.834159 update_engine[1590]: I20260314 00:18:08.834120 1590 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 14 00:18:08.834159 update_engine[1590]: I20260314 00:18:08.834129 1590 update_attempter.cc:306] Processing Done. Mar 14 00:18:08.834159 update_engine[1590]: E20260314 00:18:08.834149 1590 update_attempter.cc:619] Update failed. 
Mar 14 00:18:08.834159 update_engine[1590]: I20260314 00:18:08.834157 1590 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 14 00:18:08.834420 update_engine[1590]: I20260314 00:18:08.834167 1590 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 14 00:18:08.834420 update_engine[1590]: I20260314 00:18:08.834177 1590 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 14 00:18:08.834420 update_engine[1590]: I20260314 00:18:08.834266 1590 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 14 00:18:08.834420 update_engine[1590]: I20260314 00:18:08.834296 1590 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 14 00:18:08.834420 update_engine[1590]: I20260314 00:18:08.834306 1590 omaha_request_action.cc:272] Request: Mar 14 00:18:08.834420 update_engine[1590]: Mar 14 00:18:08.834420 update_engine[1590]: Mar 14 00:18:08.834420 update_engine[1590]: Mar 14 00:18:08.834420 update_engine[1590]: Mar 14 00:18:08.834420 update_engine[1590]: Mar 14 00:18:08.834420 update_engine[1590]: Mar 14 00:18:08.834420 update_engine[1590]: I20260314 00:18:08.834316 1590 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 14 00:18:08.834912 update_engine[1590]: I20260314 00:18:08.834497 1590 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 14 00:18:08.834912 update_engine[1590]: I20260314 00:18:08.834731 1590 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 14 00:18:08.835364 locksmithd[1624]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 14 00:18:08.835916 update_engine[1590]: E20260314 00:18:08.835515 1590 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 14 00:18:08.835916 update_engine[1590]: I20260314 00:18:08.835599 1590 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 14 00:18:08.835916 update_engine[1590]: I20260314 00:18:08.835613 1590 omaha_request_action.cc:617] Omaha request response: Mar 14 00:18:08.835916 update_engine[1590]: I20260314 00:18:08.835623 1590 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 14 00:18:08.835916 update_engine[1590]: I20260314 00:18:08.835630 1590 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 14 00:18:08.835916 update_engine[1590]: I20260314 00:18:08.835639 1590 update_attempter.cc:306] Processing Done. Mar 14 00:18:08.835916 update_engine[1590]: I20260314 00:18:08.835647 1590 update_attempter.cc:310] Error event sent. Mar 14 00:18:08.835916 update_engine[1590]: I20260314 00:18:08.835660 1590 update_check_scheduler.cc:74] Next update check in 40m33s Mar 14 00:18:08.836641 locksmithd[1624]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 14 00:18:38.007610 kubelet[2791]: E0314 00:18:38.007354 2791 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53986->10.0.0.2:2379: read: connection timed out" Mar 14 00:18:38.038424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f6cd66eb7530c02fd1366c969f405d32b3cfad1458bf0f36490b3f1d907ecb8-rootfs.mount: Deactivated successfully. 
Mar 14 00:18:38.045930 containerd[1618]: time="2026-03-14T00:18:38.045821124Z" level=info msg="shim disconnected" id=3f6cd66eb7530c02fd1366c969f405d32b3cfad1458bf0f36490b3f1d907ecb8 namespace=k8s.io Mar 14 00:18:38.045930 containerd[1618]: time="2026-03-14T00:18:38.045892886Z" level=warning msg="cleaning up after shim disconnected" id=3f6cd66eb7530c02fd1366c969f405d32b3cfad1458bf0f36490b3f1d907ecb8 namespace=k8s.io Mar 14 00:18:38.045930 containerd[1618]: time="2026-03-14T00:18:38.045908206Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:38.693929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2965d6b81a8880569b7ff611ab1946dd07b2fb5acd7fe7c663f284f3770ac14b-rootfs.mount: Deactivated successfully. Mar 14 00:18:38.697594 containerd[1618]: time="2026-03-14T00:18:38.697435344Z" level=info msg="shim disconnected" id=2965d6b81a8880569b7ff611ab1946dd07b2fb5acd7fe7c663f284f3770ac14b namespace=k8s.io Mar 14 00:18:38.697594 containerd[1618]: time="2026-03-14T00:18:38.697494945Z" level=warning msg="cleaning up after shim disconnected" id=2965d6b81a8880569b7ff611ab1946dd07b2fb5acd7fe7c663f284f3770ac14b namespace=k8s.io Mar 14 00:18:38.697594 containerd[1618]: time="2026-03-14T00:18:38.697505265Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:18:38.949668 kubelet[2791]: I0314 00:18:38.949457 2791 scope.go:117] "RemoveContainer" containerID="2965d6b81a8880569b7ff611ab1946dd07b2fb5acd7fe7c663f284f3770ac14b" Mar 14 00:18:38.952980 kubelet[2791]: I0314 00:18:38.952755 2791 scope.go:117] "RemoveContainer" containerID="3f6cd66eb7530c02fd1366c969f405d32b3cfad1458bf0f36490b3f1d907ecb8" Mar 14 00:18:38.954196 containerd[1618]: time="2026-03-14T00:18:38.953762993Z" level=info msg="CreateContainer within sandbox \"38557b2dd0fd671a8d2faf99f8f2d8a6d844f55a483520e07ffbd60c980be0e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 14 00:18:38.955730 containerd[1618]: 
time="2026-03-14T00:18:38.955690628Z" level=info msg="CreateContainer within sandbox \"95cbdb21319314e2948b819ae8b054044c043028ac616d7b7743a3885ec44123\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 14 00:18:38.973638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293527812.mount: Deactivated successfully. Mar 14 00:18:38.975767 containerd[1618]: time="2026-03-14T00:18:38.974766694Z" level=info msg="CreateContainer within sandbox \"38557b2dd0fd671a8d2faf99f8f2d8a6d844f55a483520e07ffbd60c980be0e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"aa93bc33538d340794c709153ba10d568c459f82c4195ae92beafc399315b52d\"" Mar 14 00:18:38.976042 containerd[1618]: time="2026-03-14T00:18:38.976018517Z" level=info msg="StartContainer for \"aa93bc33538d340794c709153ba10d568c459f82c4195ae92beafc399315b52d\"" Mar 14 00:18:38.979455 containerd[1618]: time="2026-03-14T00:18:38.979258136Z" level=info msg="CreateContainer within sandbox \"95cbdb21319314e2948b819ae8b054044c043028ac616d7b7743a3885ec44123\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"895d5134dd31b31632540de736cb6a732ad8e3de46dd19ace5af0d8c258eb699\"" Mar 14 00:18:38.980842 containerd[1618]: time="2026-03-14T00:18:38.980748883Z" level=info msg="StartContainer for \"895d5134dd31b31632540de736cb6a732ad8e3de46dd19ace5af0d8c258eb699\"" Mar 14 00:18:39.062947 containerd[1618]: time="2026-03-14T00:18:39.062071853Z" level=info msg="StartContainer for \"895d5134dd31b31632540de736cb6a732ad8e3de46dd19ace5af0d8c258eb699\" returns successfully" Mar 14 00:18:39.070670 containerd[1618]: time="2026-03-14T00:18:39.069227265Z" level=info msg="StartContainer for \"aa93bc33538d340794c709153ba10d568c459f82c4195ae92beafc399315b52d\" returns successfully" Mar 14 00:18:41.228089 containerd[1618]: time="2026-03-14T00:18:41.227929946Z" level=info msg="StopPodSandbox for \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\"" 
Mar 14 00:18:41.228089 containerd[1618]: time="2026-03-14T00:18:41.228018628Z" level=info msg="TearDown network for sandbox \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\" successfully" Mar 14 00:18:41.228089 containerd[1618]: time="2026-03-14T00:18:41.228029708Z" level=info msg="StopPodSandbox for \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\" returns successfully" Mar 14 00:18:41.229590 containerd[1618]: time="2026-03-14T00:18:41.228701561Z" level=info msg="RemovePodSandbox for \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\"" Mar 14 00:18:41.229590 containerd[1618]: time="2026-03-14T00:18:41.228762882Z" level=info msg="Forcibly stopping sandbox \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\"" Mar 14 00:18:41.229590 containerd[1618]: time="2026-03-14T00:18:41.228874244Z" level=info msg="TearDown network for sandbox \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\" successfully" Mar 14 00:18:41.233951 containerd[1618]: time="2026-03-14T00:18:41.233904019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:18:41.234039 containerd[1618]: time="2026-03-14T00:18:41.233973180Z" level=info msg="RemovePodSandbox \"86b06b133935520627dc0092d0f017119aa9ae324c45bd054e0ce2ba4a41eb89\" returns successfully" Mar 14 00:18:41.236020 containerd[1618]: time="2026-03-14T00:18:41.235867016Z" level=info msg="StopPodSandbox for \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\"" Mar 14 00:18:41.236020 containerd[1618]: time="2026-03-14T00:18:41.235959057Z" level=info msg="TearDown network for sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" successfully" Mar 14 00:18:41.236020 containerd[1618]: time="2026-03-14T00:18:41.235969858Z" level=info msg="StopPodSandbox for \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" returns successfully" Mar 14 00:18:41.237814 containerd[1618]: time="2026-03-14T00:18:41.237708290Z" level=info msg="RemovePodSandbox for \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\"" Mar 14 00:18:41.237814 containerd[1618]: time="2026-03-14T00:18:41.237743371Z" level=info msg="Forcibly stopping sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\"" Mar 14 00:18:41.237814 containerd[1618]: time="2026-03-14T00:18:41.237795812Z" level=info msg="TearDown network for sandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" successfully" Mar 14 00:18:41.246609 containerd[1618]: time="2026-03-14T00:18:41.246433375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:18:41.246609 containerd[1618]: time="2026-03-14T00:18:41.246526377Z" level=info msg="RemovePodSandbox \"5e09c8eaf7a5929db0dc1bbca29b9241f5361579062400865dbcc1f0a8448790\" returns successfully" Mar 14 00:18:41.462992 kubelet[2791]: E0314 00:18:41.462459 2791 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53790->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-0dd818c04e.189c8d1dcd37462b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-0dd818c04e,UID:a2353993256d95befefbbc352f9e7ce0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-0dd818c04e,},FirstTimestamp:2026-03-14 00:18:32.413652523 +0000 UTC m=+231.316818780,LastTimestamp:2026-03-14 00:18:32.413652523 +0000 UTC m=+231.316818780,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-0dd818c04e,}"