Mar 21 12:34:43.879377 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 21 12:34:43.879397 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Mar 21 10:53:54 -00 2025
Mar 21 12:34:43.879407 kernel: KASLR enabled
Mar 21 12:34:43.879420 kernel: efi: EFI v2.7 by EDK II
Mar 21 12:34:43.879428 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Mar 21 12:34:43.879434 kernel: random: crng init done
Mar 21 12:34:43.879441 kernel: secureboot: Secure boot disabled
Mar 21 12:34:43.879446 kernel: ACPI: Early table checksum verification disabled
Mar 21 12:34:43.879452 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 21 12:34:43.879460 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 21 12:34:43.879466 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:34:43.879472 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:34:43.879477 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:34:43.879483 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:34:43.879490 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:34:43.879497 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:34:43.879504 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:34:43.879510 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:34:43.879516 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:34:43.879522 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 21 12:34:43.879527 kernel: NUMA: Failed to initialise from firmware
Mar 21 12:34:43.879534 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 21 12:34:43.879540 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Mar 21 12:34:43.879545 kernel: Zone ranges:
Mar 21 12:34:43.879551 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 21 12:34:43.879558 kernel: DMA32 empty
Mar 21 12:34:43.879564 kernel: Normal empty
Mar 21 12:34:43.879570 kernel: Movable zone start for each node
Mar 21 12:34:43.879576 kernel: Early memory node ranges
Mar 21 12:34:43.879582 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Mar 21 12:34:43.879588 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Mar 21 12:34:43.879595 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Mar 21 12:34:43.879600 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 21 12:34:43.879606 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 21 12:34:43.879612 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 21 12:34:43.879618 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 21 12:34:43.879624 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 21 12:34:43.879631 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 21 12:34:43.879637 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 21 12:34:43.879643 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 21 12:34:43.879652 kernel: psci: probing for conduit method from ACPI.
Mar 21 12:34:43.879658 kernel: psci: PSCIv1.1 detected in firmware.
Mar 21 12:34:43.879664 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 21 12:34:43.879672 kernel: psci: Trusted OS migration not required
Mar 21 12:34:43.879678 kernel: psci: SMC Calling Convention v1.1
Mar 21 12:34:43.879685 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 21 12:34:43.879691 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 21 12:34:43.879698 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 21 12:34:43.879704 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 21 12:34:43.879710 kernel: Detected PIPT I-cache on CPU0
Mar 21 12:34:43.879717 kernel: CPU features: detected: GIC system register CPU interface
Mar 21 12:34:43.879723 kernel: CPU features: detected: Hardware dirty bit management
Mar 21 12:34:43.879729 kernel: CPU features: detected: Spectre-v4
Mar 21 12:34:43.879737 kernel: CPU features: detected: Spectre-BHB
Mar 21 12:34:43.879788 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 21 12:34:43.879797 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 21 12:34:43.879804 kernel: CPU features: detected: ARM erratum 1418040
Mar 21 12:34:43.879811 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 21 12:34:43.879818 kernel: alternatives: applying boot alternatives
Mar 21 12:34:43.879825 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=93cb17f03b776356c0810b716fff0c7c2012572bbe395c702f6873d17674684f
Mar 21 12:34:43.879832 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 21 12:34:43.879839 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 21 12:34:43.879846 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 21 12:34:43.879852 kernel: Fallback order for Node 0: 0
Mar 21 12:34:43.879861 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 21 12:34:43.879868 kernel: Policy zone: DMA
Mar 21 12:34:43.879874 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 21 12:34:43.879880 kernel: software IO TLB: area num 4.
Mar 21 12:34:43.879887 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 21 12:34:43.879894 kernel: Memory: 2387408K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 184880K reserved, 0K cma-reserved)
Mar 21 12:34:43.879901 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 21 12:34:43.879907 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 21 12:34:43.879914 kernel: rcu: RCU event tracing is enabled.
Mar 21 12:34:43.879921 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 21 12:34:43.879927 kernel: Trampoline variant of Tasks RCU enabled.
Mar 21 12:34:43.879934 kernel: Tracing variant of Tasks RCU enabled.
Mar 21 12:34:43.879942 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 21 12:34:43.879948 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 21 12:34:43.879955 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 21 12:34:43.879961 kernel: GICv3: 256 SPIs implemented
Mar 21 12:34:43.879967 kernel: GICv3: 0 Extended SPIs implemented
Mar 21 12:34:43.879974 kernel: Root IRQ handler: gic_handle_irq
Mar 21 12:34:43.879980 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 21 12:34:43.879986 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 21 12:34:43.879993 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 21 12:34:43.880000 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 21 12:34:43.880006 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 21 12:34:43.880014 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 21 12:34:43.880020 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 21 12:34:43.880027 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 21 12:34:43.880033 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 21 12:34:43.880039 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 21 12:34:43.880046 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 21 12:34:43.880052 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 21 12:34:43.880059 kernel: arm-pv: using stolen time PV
Mar 21 12:34:43.880066 kernel: Console: colour dummy device 80x25
Mar 21 12:34:43.880072 kernel: ACPI: Core revision 20230628
Mar 21 12:34:43.880079 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 21 12:34:43.880087 kernel: pid_max: default: 32768 minimum: 301
Mar 21 12:34:43.880093 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 21 12:34:43.880100 kernel: landlock: Up and running.
Mar 21 12:34:43.880107 kernel: SELinux: Initializing.
Mar 21 12:34:43.880113 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 21 12:34:43.880120 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 21 12:34:43.880126 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 21 12:34:43.880133 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 21 12:34:43.880139 kernel: rcu: Hierarchical SRCU implementation.
Mar 21 12:34:43.880147 kernel: rcu: Max phase no-delay instances is 400.
Mar 21 12:34:43.880154 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 21 12:34:43.880160 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 21 12:34:43.880167 kernel: Remapping and enabling EFI services.
Mar 21 12:34:43.880173 kernel: smp: Bringing up secondary CPUs ...
Mar 21 12:34:43.880180 kernel: Detected PIPT I-cache on CPU1
Mar 21 12:34:43.880187 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 21 12:34:43.880193 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 21 12:34:43.880200 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 21 12:34:43.880207 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 21 12:34:43.880214 kernel: Detected PIPT I-cache on CPU2
Mar 21 12:34:43.880225 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 21 12:34:43.880234 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 21 12:34:43.880241 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 21 12:34:43.880247 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 21 12:34:43.880254 kernel: Detected PIPT I-cache on CPU3
Mar 21 12:34:43.880261 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 21 12:34:43.880268 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 21 12:34:43.880277 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 21 12:34:43.880283 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 21 12:34:43.880290 kernel: smp: Brought up 1 node, 4 CPUs
Mar 21 12:34:43.880297 kernel: SMP: Total of 4 processors activated.
Mar 21 12:34:43.880304 kernel: CPU features: detected: 32-bit EL0 Support
Mar 21 12:34:43.880311 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 21 12:34:43.880318 kernel: CPU features: detected: Common not Private translations
Mar 21 12:34:43.880325 kernel: CPU features: detected: CRC32 instructions
Mar 21 12:34:43.880333 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 21 12:34:43.880340 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 21 12:34:43.880347 kernel: CPU features: detected: LSE atomic instructions
Mar 21 12:34:43.880354 kernel: CPU features: detected: Privileged Access Never
Mar 21 12:34:43.880361 kernel: CPU features: detected: RAS Extension Support
Mar 21 12:34:43.880367 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 21 12:34:43.880374 kernel: CPU: All CPU(s) started at EL1
Mar 21 12:34:43.880381 kernel: alternatives: applying system-wide alternatives
Mar 21 12:34:43.880388 kernel: devtmpfs: initialized
Mar 21 12:34:43.880395 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 21 12:34:43.880403 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 21 12:34:43.880410 kernel: pinctrl core: initialized pinctrl subsystem
Mar 21 12:34:43.880423 kernel: SMBIOS 3.0.0 present.
Mar 21 12:34:43.880430 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 21 12:34:43.880437 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 21 12:34:43.880444 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 21 12:34:43.880452 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 21 12:34:43.880459 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 21 12:34:43.880467 kernel: audit: initializing netlink subsys (disabled)
Mar 21 12:34:43.880474 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Mar 21 12:34:43.880481 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 21 12:34:43.880488 kernel: cpuidle: using governor menu
Mar 21 12:34:43.880495 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 21 12:34:43.880502 kernel: ASID allocator initialised with 32768 entries
Mar 21 12:34:43.880509 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 21 12:34:43.880516 kernel: Serial: AMBA PL011 UART driver
Mar 21 12:34:43.880523 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 21 12:34:43.880531 kernel: Modules: 0 pages in range for non-PLT usage
Mar 21 12:34:43.880538 kernel: Modules: 509248 pages in range for PLT usage
Mar 21 12:34:43.880545 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 21 12:34:43.880552 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 21 12:34:43.880559 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 21 12:34:43.880566 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 21 12:34:43.880573 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 21 12:34:43.880580 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 21 12:34:43.880586 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 21 12:34:43.880593 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 21 12:34:43.880602 kernel: ACPI: Added _OSI(Module Device)
Mar 21 12:34:43.880608 kernel: ACPI: Added _OSI(Processor Device)
Mar 21 12:34:43.880615 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 21 12:34:43.880622 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 21 12:34:43.880629 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 21 12:34:43.880636 kernel: ACPI: Interpreter enabled
Mar 21 12:34:43.880643 kernel: ACPI: Using GIC for interrupt routing
Mar 21 12:34:43.880650 kernel: ACPI: MCFG table detected, 1 entries
Mar 21 12:34:43.880657 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 21 12:34:43.880664 kernel: printk: console [ttyAMA0] enabled
Mar 21 12:34:43.880672 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 21 12:34:43.880819 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 21 12:34:43.880897 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 21 12:34:43.880964 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 21 12:34:43.881028 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 21 12:34:43.881091 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 21 12:34:43.881102 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 21 12:34:43.881109 kernel: PCI host bridge to bus 0000:00
Mar 21 12:34:43.881177 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 21 12:34:43.881236 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 21 12:34:43.881294 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 21 12:34:43.881352 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 21 12:34:43.881440 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 21 12:34:43.881521 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 21 12:34:43.881589 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 21 12:34:43.881654 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 21 12:34:43.881720 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 21 12:34:43.881829 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 21 12:34:43.881899 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 21 12:34:43.881966 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 21 12:34:43.882030 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 21 12:34:43.882088 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 21 12:34:43.882147 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 21 12:34:43.882156 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 21 12:34:43.882163 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 21 12:34:43.882170 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 21 12:34:43.882177 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 21 12:34:43.882186 kernel: iommu: Default domain type: Translated
Mar 21 12:34:43.882193 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 21 12:34:43.882200 kernel: efivars: Registered efivars operations
Mar 21 12:34:43.882207 kernel: vgaarb: loaded
Mar 21 12:34:43.882214 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 21 12:34:43.882221 kernel: VFS: Disk quotas dquot_6.6.0
Mar 21 12:34:43.882228 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 21 12:34:43.882234 kernel: pnp: PnP ACPI init
Mar 21 12:34:43.882311 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 21 12:34:43.882322 kernel: pnp: PnP ACPI: found 1 devices
Mar 21 12:34:43.882329 kernel: NET: Registered PF_INET protocol family
Mar 21 12:34:43.882336 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 21 12:34:43.882343 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 21 12:34:43.882350 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 21 12:34:43.882357 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 21 12:34:43.882364 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 21 12:34:43.882371 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 21 12:34:43.882378 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 21 12:34:43.882386 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 21 12:34:43.882393 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 21 12:34:43.882400 kernel: PCI: CLS 0 bytes, default 64
Mar 21 12:34:43.882407 kernel: kvm [1]: HYP mode not available
Mar 21 12:34:43.882422 kernel: Initialise system trusted keyrings
Mar 21 12:34:43.882430 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 21 12:34:43.882437 kernel: Key type asymmetric registered
Mar 21 12:34:43.882444 kernel: Asymmetric key parser 'x509' registered
Mar 21 12:34:43.882450 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 21 12:34:43.882459 kernel: io scheduler mq-deadline registered
Mar 21 12:34:43.882466 kernel: io scheduler kyber registered
Mar 21 12:34:43.882473 kernel: io scheduler bfq registered
Mar 21 12:34:43.882480 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 21 12:34:43.882487 kernel: ACPI: button: Power Button [PWRB]
Mar 21 12:34:43.882494 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 21 12:34:43.882563 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 21 12:34:43.882572 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 21 12:34:43.882579 kernel: thunder_xcv, ver 1.0
Mar 21 12:34:43.882587 kernel: thunder_bgx, ver 1.0
Mar 21 12:34:43.882594 kernel: nicpf, ver 1.0
Mar 21 12:34:43.882601 kernel: nicvf, ver 1.0
Mar 21 12:34:43.882674 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 21 12:34:43.882739 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-21T12:34:43 UTC (1742560483)
Mar 21 12:34:43.882761 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 21 12:34:43.882768 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 21 12:34:43.882775 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 21 12:34:43.882785 kernel: watchdog: Hard watchdog permanently disabled
Mar 21 12:34:43.882791 kernel: NET: Registered PF_INET6 protocol family
Mar 21 12:34:43.882798 kernel: Segment Routing with IPv6
Mar 21 12:34:43.882805 kernel: In-situ OAM (IOAM) with IPv6
Mar 21 12:34:43.882812 kernel: NET: Registered PF_PACKET protocol family
Mar 21 12:34:43.882819 kernel: Key type dns_resolver registered
Mar 21 12:34:43.882826 kernel: registered taskstats version 1
Mar 21 12:34:43.882832 kernel: Loading compiled-in X.509 certificates
Mar 21 12:34:43.882839 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 5eb113f0b3321dedaccf2566eff1e4f54032526e'
Mar 21 12:34:43.882848 kernel: Key type .fscrypt registered
Mar 21 12:34:43.882854 kernel: Key type fscrypt-provisioning registered
Mar 21 12:34:43.882861 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 21 12:34:43.882868 kernel: ima: Allocated hash algorithm: sha1
Mar 21 12:34:43.882875 kernel: ima: No architecture policies found
Mar 21 12:34:43.882882 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 21 12:34:43.882889 kernel: clk: Disabling unused clocks
Mar 21 12:34:43.882896 kernel: Freeing unused kernel memory: 38464K
Mar 21 12:34:43.882903 kernel: Run /init as init process
Mar 21 12:34:43.882911 kernel: with arguments:
Mar 21 12:34:43.882918 kernel: /init
Mar 21 12:34:43.882924 kernel: with environment:
Mar 21 12:34:43.882931 kernel: HOME=/
Mar 21 12:34:43.882937 kernel: TERM=linux
Mar 21 12:34:43.882944 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 21 12:34:43.882952 systemd[1]: Successfully made /usr/ read-only.
Mar 21 12:34:43.882961 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 21 12:34:43.882970 systemd[1]: Detected virtualization kvm.
Mar 21 12:34:43.882978 systemd[1]: Detected architecture arm64.
Mar 21 12:34:43.882985 systemd[1]: Running in initrd.
Mar 21 12:34:43.882992 systemd[1]: No hostname configured, using default hostname.
Mar 21 12:34:43.883000 systemd[1]: Hostname set to .
Mar 21 12:34:43.883007 systemd[1]: Initializing machine ID from VM UUID.
Mar 21 12:34:43.883014 systemd[1]: Queued start job for default target initrd.target.
Mar 21 12:34:43.883022 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 12:34:43.883030 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 12:34:43.883038 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 21 12:34:43.883046 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 21 12:34:43.883054 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 21 12:34:43.883062 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 21 12:34:43.883071 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 21 12:34:43.883079 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 21 12:34:43.883087 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 12:34:43.883094 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 21 12:34:43.883102 systemd[1]: Reached target paths.target - Path Units.
Mar 21 12:34:43.883109 systemd[1]: Reached target slices.target - Slice Units.
Mar 21 12:34:43.883116 systemd[1]: Reached target swap.target - Swaps.
Mar 21 12:34:43.883124 systemd[1]: Reached target timers.target - Timer Units.
Mar 21 12:34:43.883131 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 21 12:34:43.883139 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 21 12:34:43.883147 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 21 12:34:43.883155 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 21 12:34:43.883163 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 12:34:43.883170 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 21 12:34:43.883178 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:34:43.883185 systemd[1]: Reached target sockets.target - Socket Units.
Mar 21 12:34:43.883193 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 21 12:34:43.883200 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 21 12:34:43.883208 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 21 12:34:43.883216 systemd[1]: Starting systemd-fsck-usr.service...
Mar 21 12:34:43.883223 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 21 12:34:43.883231 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 21 12:34:43.883238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:34:43.883246 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 12:34:43.883253 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 21 12:34:43.883262 systemd[1]: Finished systemd-fsck-usr.service.
Mar 21 12:34:43.883270 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 21 12:34:43.883278 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 21 12:34:43.883301 systemd-journald[237]: Collecting audit messages is disabled.
Mar 21 12:34:43.883321 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:34:43.883329 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 21 12:34:43.883337 systemd-journald[237]: Journal started
Mar 21 12:34:43.883355 systemd-journald[237]: Runtime Journal (/run/log/journal/9d9b09c407bb4d61b2c42b803e92412c) is 5.9M, max 47.3M, 41.4M free.
Mar 21 12:34:43.875553 systemd-modules-load[238]: Inserted module 'overlay'
Mar 21 12:34:43.887610 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 21 12:34:43.892770 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 21 12:34:43.892800 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 21 12:34:43.892813 kernel: Bridge firewalling registered
Mar 21 12:34:43.892613 systemd-modules-load[238]: Inserted module 'br_netfilter'
Mar 21 12:34:43.894214 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 21 12:34:43.897428 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 21 12:34:43.899875 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 21 12:34:43.902316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 12:34:43.908384 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:34:43.910604 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 12:34:43.912812 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 21 12:34:43.913701 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 12:34:43.915795 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 21 12:34:43.931644 dracut-cmdline[280]: dracut-dracut-053
Mar 21 12:34:43.933981 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=93cb17f03b776356c0810b716fff0c7c2012572bbe395c702f6873d17674684f
Mar 21 12:34:43.949835 systemd-resolved[279]: Positive Trust Anchors:
Mar 21 12:34:43.949850 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 21 12:34:43.949881 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 21 12:34:43.954571 systemd-resolved[279]: Defaulting to hostname 'linux'.
Mar 21 12:34:43.959358 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 21 12:34:43.960200 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 21 12:34:44.003767 kernel: SCSI subsystem initialized
Mar 21 12:34:44.007772 kernel: Loading iSCSI transport class v2.0-870.
Mar 21 12:34:44.016789 kernel: iscsi: registered transport (tcp)
Mar 21 12:34:44.027765 kernel: iscsi: registered transport (qla4xxx)
Mar 21 12:34:44.027781 kernel: QLogic iSCSI HBA Driver
Mar 21 12:34:44.065802 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 21 12:34:44.067616 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 21 12:34:44.097102 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 21 12:34:44.097136 kernel: device-mapper: uevent: version 1.0.3
Mar 21 12:34:44.097867 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 21 12:34:44.145778 kernel: raid6: neonx8 gen() 15754 MB/s
Mar 21 12:34:44.162769 kernel: raid6: neonx4 gen() 15773 MB/s
Mar 21 12:34:44.179756 kernel: raid6: neonx2 gen() 13174 MB/s
Mar 21 12:34:44.196764 kernel: raid6: neonx1 gen() 10483 MB/s
Mar 21 12:34:44.213763 kernel: raid6: int64x8 gen() 6780 MB/s
Mar 21 12:34:44.230763 kernel: raid6: int64x4 gen() 7330 MB/s
Mar 21 12:34:44.247757 kernel: raid6: int64x2 gen() 6099 MB/s
Mar 21 12:34:44.264763 kernel: raid6: int64x1 gen() 5055 MB/s
Mar 21 12:34:44.264788 kernel: raid6: using algorithm neonx4 gen() 15773 MB/s
Mar 21 12:34:44.281771 kernel: raid6: .... xor() 12366 MB/s, rmw enabled
Mar 21 12:34:44.281795 kernel: raid6: using neon recovery algorithm
Mar 21 12:34:44.287001 kernel: xor: measuring software checksum speed
Mar 21 12:34:44.287017 kernel: 8regs : 21624 MB/sec
Mar 21 12:34:44.287029 kernel: 32regs : 21699 MB/sec
Mar 21 12:34:44.287956 kernel: arm64_neon : 27889 MB/sec
Mar 21 12:34:44.287984 kernel: xor: using function: arm64_neon (27889 MB/sec)
Mar 21 12:34:44.338769 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 21 12:34:44.348856 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 21 12:34:44.351240 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 12:34:44.375145 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Mar 21 12:34:44.378790 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 12:34:44.381632 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 21 12:34:44.410708 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Mar 21 12:34:44.436171 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 21 12:34:44.438864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 21 12:34:44.491065 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 21 12:34:44.494878 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 21 12:34:44.515145 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 21 12:34:44.516779 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 21 12:34:44.517628 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 21 12:34:44.520220 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 21 12:34:44.524390 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Mar 21 12:34:44.529640 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 21 12:34:44.529739 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 21 12:34:44.529765 kernel: GPT:9289727 != 19775487 Mar 21 12:34:44.529776 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 21 12:34:44.529785 kernel: GPT:9289727 != 19775487 Mar 21 12:34:44.529793 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 21 12:34:44.529804 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 21 12:34:44.525623 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 21 12:34:44.532188 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 21 12:34:44.532296 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 21 12:34:44.535175 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 21 12:34:44.536162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 21 12:34:44.542817 kernel: BTRFS: device fsid bdcda679-e2cc-43ec-88ed-d0a5c8807e76 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (525) Mar 21 12:34:44.536289 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 21 12:34:44.542857 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 21 12:34:44.547048 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 21 12:34:44.550771 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (521) Mar 21 12:34:44.559089 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 21 12:34:44.567363 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 21 12:34:44.568581 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 21 12:34:44.593181 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 21 12:34:44.599135 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 21 12:34:44.600045 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 21 12:34:44.608277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 21 12:34:44.609869 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 21 12:34:44.611575 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 21 12:34:44.624463 disk-uuid[553]: Primary Header is updated. Mar 21 12:34:44.624463 disk-uuid[553]: Secondary Entries is updated. Mar 21 12:34:44.624463 disk-uuid[553]: Secondary Header is updated. 
Mar 21 12:34:44.627765 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 21 12:34:44.635800 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 21 12:34:45.644513 disk-uuid[554]: The operation has completed successfully. Mar 21 12:34:45.646057 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 21 12:34:45.673638 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 21 12:34:45.673796 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 21 12:34:45.694666 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 21 12:34:45.708469 sh[574]: Success Mar 21 12:34:45.724765 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 21 12:34:45.751820 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 21 12:34:45.754441 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 21 12:34:45.771006 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 21 12:34:45.777422 kernel: BTRFS info (device dm-0): first mount of filesystem bdcda679-e2cc-43ec-88ed-d0a5c8807e76 Mar 21 12:34:45.777472 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 21 12:34:45.777492 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 21 12:34:45.777510 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 21 12:34:45.778016 kernel: BTRFS info (device dm-0): using free space tree Mar 21 12:34:45.782221 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 21 12:34:45.783290 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 21 12:34:45.783953 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Mar 21 12:34:45.787083 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 21 12:34:45.813192 kernel: BTRFS info (device vda6): first mount of filesystem fea78075-4b56-496a-88c9-8f4cfa7493bf Mar 21 12:34:45.813237 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 21 12:34:45.813249 kernel: BTRFS info (device vda6): using free space tree Mar 21 12:34:45.815771 kernel: BTRFS info (device vda6): auto enabling async discard Mar 21 12:34:45.819777 kernel: BTRFS info (device vda6): last unmount of filesystem fea78075-4b56-496a-88c9-8f4cfa7493bf Mar 21 12:34:45.821907 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 21 12:34:45.823720 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 21 12:34:45.893870 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 21 12:34:45.896365 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 21 12:34:45.921605 ignition[660]: Ignition 2.20.0 Mar 21 12:34:45.921617 ignition[660]: Stage: fetch-offline Mar 21 12:34:45.921647 ignition[660]: no configs at "/usr/lib/ignition/base.d" Mar 21 12:34:45.921655 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:34:45.921823 ignition[660]: parsed url from cmdline: "" Mar 21 12:34:45.921826 ignition[660]: no config URL provided Mar 21 12:34:45.921831 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" Mar 21 12:34:45.921839 ignition[660]: no config at "/usr/lib/ignition/user.ign" Mar 21 12:34:45.921861 ignition[660]: op(1): [started] loading QEMU firmware config module Mar 21 12:34:45.921866 ignition[660]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 21 12:34:45.929290 ignition[660]: op(1): [finished] loading QEMU firmware config module Mar 21 12:34:45.940936 systemd-networkd[762]: lo: Link UP Mar 21 12:34:45.940945 systemd-networkd[762]: lo: Gained carrier Mar 21 12:34:45.941734 systemd-networkd[762]: Enumeration completed Mar 21 12:34:45.941962 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 21 12:34:45.942160 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 21 12:34:45.942164 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 21 12:34:45.943015 systemd-networkd[762]: eth0: Link UP Mar 21 12:34:45.943018 systemd-networkd[762]: eth0: Gained carrier Mar 21 12:34:45.943025 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 21 12:34:45.943622 systemd[1]: Reached target network.target - Network. 
Mar 21 12:34:45.966788 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 21 12:34:45.976457 ignition[660]: parsing config with SHA512: 91918d9172d7e58066b25cf098a238c13b35cc0dba018e82d68c1244ab2a2648a4c790f9678ebe815b7f807625f0fd1dfa1d924c721809dd8f93e57c7fa95253 Mar 21 12:34:45.981841 unknown[660]: fetched base config from "system" Mar 21 12:34:45.982074 unknown[660]: fetched user config from "qemu" Mar 21 12:34:45.982648 ignition[660]: fetch-offline: fetch-offline passed Mar 21 12:34:45.983025 ignition[660]: Ignition finished successfully Mar 21 12:34:45.984485 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 21 12:34:45.985550 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 21 12:34:45.986299 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 21 12:34:46.010385 ignition[771]: Ignition 2.20.0 Mar 21 12:34:46.010394 ignition[771]: Stage: kargs Mar 21 12:34:46.010553 ignition[771]: no configs at "/usr/lib/ignition/base.d" Mar 21 12:34:46.010566 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:34:46.011401 ignition[771]: kargs: kargs passed Mar 21 12:34:46.011454 ignition[771]: Ignition finished successfully Mar 21 12:34:46.013429 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 21 12:34:46.015083 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 21 12:34:46.033377 ignition[781]: Ignition 2.20.0 Mar 21 12:34:46.033386 ignition[781]: Stage: disks Mar 21 12:34:46.033531 ignition[781]: no configs at "/usr/lib/ignition/base.d" Mar 21 12:34:46.033541 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:34:46.034379 ignition[781]: disks: disks passed Mar 21 12:34:46.035727 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Mar 21 12:34:46.034432 ignition[781]: Ignition finished successfully Mar 21 12:34:46.036628 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 21 12:34:46.037634 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 21 12:34:46.039041 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 21 12:34:46.040199 systemd[1]: Reached target sysinit.target - System Initialization. Mar 21 12:34:46.041521 systemd[1]: Reached target basic.target - Basic System. Mar 21 12:34:46.043726 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 21 12:34:46.067644 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 21 12:34:46.071054 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 21 12:34:46.073198 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 21 12:34:46.129766 kernel: EXT4-fs (vda9): mounted filesystem 3004295c-1fab-4723-a953-2dc6fc131037 r/w with ordered data mode. Quota mode: none. Mar 21 12:34:46.130443 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 21 12:34:46.131445 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 21 12:34:46.133319 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 21 12:34:46.134780 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 21 12:34:46.135550 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 21 12:34:46.135603 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 21 12:34:46.135626 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 21 12:34:46.148518 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Mar 21 12:34:46.150510 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 21 12:34:46.154951 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (800) Mar 21 12:34:46.154970 kernel: BTRFS info (device vda6): first mount of filesystem fea78075-4b56-496a-88c9-8f4cfa7493bf Mar 21 12:34:46.154980 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 21 12:34:46.154989 kernel: BTRFS info (device vda6): using free space tree Mar 21 12:34:46.156769 kernel: BTRFS info (device vda6): auto enabling async discard Mar 21 12:34:46.157053 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 21 12:34:46.194167 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Mar 21 12:34:46.197767 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Mar 21 12:34:46.201484 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Mar 21 12:34:46.205296 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Mar 21 12:34:46.275139 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 21 12:34:46.277077 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 21 12:34:46.278450 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 21 12:34:46.292766 kernel: BTRFS info (device vda6): last unmount of filesystem fea78075-4b56-496a-88c9-8f4cfa7493bf Mar 21 12:34:46.313922 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 21 12:34:46.323121 ignition[913]: INFO : Ignition 2.20.0 Mar 21 12:34:46.323121 ignition[913]: INFO : Stage: mount Mar 21 12:34:46.324317 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 21 12:34:46.324317 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:34:46.324317 ignition[913]: INFO : mount: mount passed Mar 21 12:34:46.324317 ignition[913]: INFO : Ignition finished successfully Mar 21 12:34:46.325325 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 21 12:34:46.327278 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 21 12:34:46.919705 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 21 12:34:46.921166 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 21 12:34:46.938934 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (926) Mar 21 12:34:46.938964 kernel: BTRFS info (device vda6): first mount of filesystem fea78075-4b56-496a-88c9-8f4cfa7493bf Mar 21 12:34:46.938975 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 21 12:34:46.940191 kernel: BTRFS info (device vda6): using free space tree Mar 21 12:34:46.942780 kernel: BTRFS info (device vda6): auto enabling async discard Mar 21 12:34:46.943224 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 21 12:34:46.966204 ignition[943]: INFO : Ignition 2.20.0 Mar 21 12:34:46.966204 ignition[943]: INFO : Stage: files Mar 21 12:34:46.967396 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 21 12:34:46.967396 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:34:46.967396 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Mar 21 12:34:46.969915 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 21 12:34:46.969915 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 21 12:34:46.971863 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 21 12:34:46.971863 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 21 12:34:46.971863 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 21 12:34:46.971863 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 21 12:34:46.971863 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 21 12:34:46.970507 unknown[943]: wrote ssh authorized keys file for user: core Mar 21 12:34:47.717956 systemd-networkd[762]: eth0: Gained IPv6LL Mar 21 12:34:47.957255 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 21 12:34:50.970282 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 21 12:34:50.970282 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 21 12:34:50.973259 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 21 12:34:51.350144 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 21 12:34:51.395921 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 21 12:34:51.395921 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 21 12:34:51.399434 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 21 12:34:51.399434 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 21 12:34:51.399434 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 21 12:34:51.399434 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 21 12:34:51.399434 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 21 12:34:51.399434 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 21 12:34:51.399434 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 21 12:34:51.399434 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 21 12:34:51.399434 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 21 12:34:51.399434 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 21 12:34:51.399434 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 21 12:34:51.399434 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 21 12:34:51.399434 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Mar 21 12:34:51.696093 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 21 12:34:52.234986 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 21 12:34:52.237346 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 21 12:34:52.237346 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 21 12:34:52.237346 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 21 12:34:52.237346 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 21 12:34:52.237346 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 21 12:34:52.237346 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 21 12:34:52.237346 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Mar 21 12:34:52.237346 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 21 12:34:52.237346 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 21 12:34:52.252377 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 21 12:34:52.254926 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 21 12:34:52.256550 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 21 12:34:52.256550 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 21 12:34:52.256550 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 21 12:34:52.256550 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 21 12:34:52.256550 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 21 12:34:52.256550 ignition[943]: INFO : files: files passed Mar 21 12:34:52.256550 ignition[943]: INFO : Ignition finished successfully Mar 21 12:34:52.257815 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 21 12:34:52.260867 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 21 12:34:52.262902 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 21 12:34:52.273842 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 21 12:34:52.273927 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 21 12:34:52.277222 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Mar 21 12:34:52.278553 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 21 12:34:52.278553 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 21 12:34:52.282459 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 21 12:34:52.279244 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 21 12:34:52.281315 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 21 12:34:52.284099 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 21 12:34:52.326890 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 21 12:34:52.327000 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 21 12:34:52.329297 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 21 12:34:52.331143 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 21 12:34:52.332996 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 21 12:34:52.333796 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 21 12:34:52.356805 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 21 12:34:52.359260 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 21 12:34:52.381476 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 21 12:34:52.382726 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 21 12:34:52.384741 systemd[1]: Stopped target timers.target - Timer Units. 
Mar 21 12:34:52.386523 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 21 12:34:52.386650 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 21 12:34:52.389165 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 21 12:34:52.391108 systemd[1]: Stopped target basic.target - Basic System. Mar 21 12:34:52.392738 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 21 12:34:52.394465 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 21 12:34:52.396475 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 21 12:34:52.398528 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 21 12:34:52.400471 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 21 12:34:52.402511 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 21 12:34:52.404538 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 21 12:34:52.406252 systemd[1]: Stopped target swap.target - Swaps. Mar 21 12:34:52.407732 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 21 12:34:52.407878 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 21 12:34:52.410162 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 21 12:34:52.412069 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 21 12:34:52.413910 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 21 12:34:52.417832 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 21 12:34:52.418823 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 21 12:34:52.418936 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 21 12:34:52.421469 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Mar 21 12:34:52.421595 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 21 12:34:52.423390 systemd[1]: Stopped target paths.target - Path Units. Mar 21 12:34:52.424812 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 21 12:34:52.428806 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 21 12:34:52.429852 systemd[1]: Stopped target slices.target - Slice Units. Mar 21 12:34:52.431800 systemd[1]: Stopped target sockets.target - Socket Units. Mar 21 12:34:52.433232 systemd[1]: iscsid.socket: Deactivated successfully. Mar 21 12:34:52.433320 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 21 12:34:52.434678 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 21 12:34:52.434769 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 21 12:34:52.436141 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 21 12:34:52.436252 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 21 12:34:52.437798 systemd[1]: ignition-files.service: Deactivated successfully. Mar 21 12:34:52.437902 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 21 12:34:52.440113 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 21 12:34:52.442598 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 21 12:34:52.443707 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 21 12:34:52.443843 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 21 12:34:52.445573 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 21 12:34:52.445682 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 21 12:34:52.454971 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Mar 21 12:34:52.455070 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 21 12:34:52.463232 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 21 12:34:52.467048 ignition[1000]: INFO : Ignition 2.20.0 Mar 21 12:34:52.467048 ignition[1000]: INFO : Stage: umount Mar 21 12:34:52.467048 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 21 12:34:52.467048 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:34:52.466177 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 21 12:34:52.472818 ignition[1000]: INFO : umount: umount passed Mar 21 12:34:52.472818 ignition[1000]: INFO : Ignition finished successfully Mar 21 12:34:52.466268 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 21 12:34:52.470522 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 21 12:34:52.470618 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 21 12:34:52.472397 systemd[1]: Stopped target network.target - Network. Mar 21 12:34:52.473691 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 21 12:34:52.473788 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 21 12:34:52.475345 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 21 12:34:52.475419 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 21 12:34:52.476921 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 21 12:34:52.476969 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 21 12:34:52.478480 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 21 12:34:52.478523 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 21 12:34:52.480106 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 21 12:34:52.480156 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Mar 21 12:34:52.481906 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 21 12:34:52.483427 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 21 12:34:52.488424 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 21 12:34:52.488539 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 21 12:34:52.491521 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 21 12:34:52.491718 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 21 12:34:52.491848 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 21 12:34:52.494873 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 21 12:34:52.495451 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 21 12:34:52.495515 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 21 12:34:52.497636 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 21 12:34:52.499173 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 21 12:34:52.499253 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 21 12:34:52.501309 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 21 12:34:52.501361 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 21 12:34:52.503584 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 21 12:34:52.503630 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 21 12:34:52.505641 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 21 12:34:52.505693 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 21 12:34:52.508238 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 21 12:34:52.510735 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 21 12:34:52.510837 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 21 12:34:52.529947 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 21 12:34:52.530081 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 12:34:52.532361 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 21 12:34:52.532417 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 21 12:34:52.533881 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 21 12:34:52.533913 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:34:52.535556 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 21 12:34:52.535609 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 21 12:34:52.538065 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 21 12:34:52.538119 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 21 12:34:52.540455 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 21 12:34:52.540507 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 12:34:52.543726 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 21 12:34:52.545425 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 21 12:34:52.545492 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 12:34:52.548277 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 21 12:34:52.548324 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 21 12:34:52.550241 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 21 12:34:52.550290 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 12:34:52.552200 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 21 12:34:52.552250 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:34:52.555825 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 21 12:34:52.555886 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 21 12:34:52.564922 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 21 12:34:52.565010 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 21 12:34:52.569722 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 21 12:34:52.569867 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 21 12:34:52.571657 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 21 12:34:52.573723 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 21 12:34:52.593305 systemd[1]: Switching root.
Mar 21 12:34:52.619681 systemd-journald[237]: Journal stopped
Mar 21 12:34:53.431152 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Mar 21 12:34:53.431215 kernel: SELinux: policy capability network_peer_controls=1
Mar 21 12:34:53.431231 kernel: SELinux: policy capability open_perms=1
Mar 21 12:34:53.431241 kernel: SELinux: policy capability extended_socket_class=1
Mar 21 12:34:53.431252 kernel: SELinux: policy capability always_check_network=0
Mar 21 12:34:53.431265 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 21 12:34:53.431274 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 21 12:34:53.431296 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 21 12:34:53.431305 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 21 12:34:53.431315 kernel: audit: type=1403 audit(1742560492.813:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 21 12:34:53.431326 systemd[1]: Successfully loaded SELinux policy in 33.194ms.
Mar 21 12:34:53.431343 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.064ms.
Mar 21 12:34:53.431354 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 21 12:34:53.431365 systemd[1]: Detected virtualization kvm.
Mar 21 12:34:53.431376 systemd[1]: Detected architecture arm64.
Mar 21 12:34:53.431397 systemd[1]: Detected first boot.
Mar 21 12:34:53.431409 systemd[1]: Initializing machine ID from VM UUID.
Mar 21 12:34:53.431419 zram_generator::config[1047]: No configuration found.
Mar 21 12:34:53.431431 kernel: NET: Registered PF_VSOCK protocol family
Mar 21 12:34:53.431440 systemd[1]: Populated /etc with preset unit settings.
Mar 21 12:34:53.431454 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 21 12:34:53.431465 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 21 12:34:53.431475 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 21 12:34:53.431485 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 21 12:34:53.431505 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 21 12:34:53.431516 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 21 12:34:53.431526 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 21 12:34:53.431536 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 21 12:34:53.431548 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 21 12:34:53.431558 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 21 12:34:53.431569 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 21 12:34:53.431579 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 21 12:34:53.431589 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 12:34:53.431599 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 12:34:53.431610 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 21 12:34:53.431621 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 21 12:34:53.431631 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 21 12:34:53.431643 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 21 12:34:53.431655 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 21 12:34:53.431666 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 12:34:53.431677 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 21 12:34:53.431687 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 21 12:34:53.431697 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 21 12:34:53.431708 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 21 12:34:53.431720 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 21 12:34:53.431731 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 21 12:34:53.431750 systemd[1]: Reached target slices.target - Slice Units.
Mar 21 12:34:53.431765 systemd[1]: Reached target swap.target - Swaps.
Mar 21 12:34:53.431789 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 21 12:34:53.431800 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 21 12:34:53.431811 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 21 12:34:53.431822 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 12:34:53.431832 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 21 12:34:53.431844 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:34:53.431857 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 21 12:34:53.431869 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 21 12:34:53.431880 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 21 12:34:53.431891 systemd[1]: Mounting media.mount - External Media Directory...
Mar 21 12:34:53.431902 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 21 12:34:53.431914 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 21 12:34:53.431925 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 21 12:34:53.431936 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 21 12:34:53.431948 systemd[1]: Reached target machines.target - Containers.
Mar 21 12:34:53.431959 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 21 12:34:53.431970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:34:53.431980 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 21 12:34:53.431991 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 21 12:34:53.432002 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:34:53.432012 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 21 12:34:53.432023 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:34:53.432033 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 21 12:34:53.432045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:34:53.432056 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 21 12:34:53.432067 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 21 12:34:53.432078 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 21 12:34:53.432088 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 21 12:34:53.432098 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 21 12:34:53.432110 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:34:53.432120 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 21 12:34:53.432132 kernel: fuse: init (API version 7.39)
Mar 21 12:34:53.432142 kernel: loop: module loaded
Mar 21 12:34:53.432151 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 21 12:34:53.432161 kernel: ACPI: bus type drm_connector registered
Mar 21 12:34:53.432171 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 21 12:34:53.432181 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 21 12:34:53.432194 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 21 12:34:53.432399 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 21 12:34:53.432429 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 21 12:34:53.432440 systemd[1]: Stopped verity-setup.service.
Mar 21 12:34:53.432451 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 21 12:34:53.432461 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 21 12:34:53.432472 systemd[1]: Mounted media.mount - External Media Directory.
Mar 21 12:34:53.432482 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 21 12:34:53.432498 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 21 12:34:53.432508 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 21 12:34:53.432518 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 12:34:53.432563 systemd-journald[1113]: Collecting audit messages is disabled.
Mar 21 12:34:53.432585 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 21 12:34:53.432601 systemd-journald[1113]: Journal started
Mar 21 12:34:53.432625 systemd-journald[1113]: Runtime Journal (/run/log/journal/9d9b09c407bb4d61b2c42b803e92412c) is 5.9M, max 47.3M, 41.4M free.
Mar 21 12:34:53.218243 systemd[1]: Queued start job for default target multi-user.target.
Mar 21 12:34:53.237714 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 21 12:34:53.238115 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 21 12:34:53.434057 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 21 12:34:53.436792 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 21 12:34:53.437518 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:34:53.437712 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:34:53.438945 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 21 12:34:53.439112 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 21 12:34:53.440347 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 21 12:34:53.441844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:34:53.442049 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:34:53.443311 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 21 12:34:53.443515 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 21 12:34:53.444667 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:34:53.444862 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:34:53.446179 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 21 12:34:53.447459 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 21 12:34:53.448837 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 21 12:34:53.450085 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 21 12:34:53.463302 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 21 12:34:53.465683 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 21 12:34:53.467608 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 21 12:34:53.468643 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 21 12:34:53.468687 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 21 12:34:53.470399 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 21 12:34:53.487925 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 21 12:34:53.490097 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 21 12:34:53.491279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:34:53.492727 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 21 12:34:53.494867 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 21 12:34:53.495841 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 21 12:34:53.499893 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 21 12:34:53.501037 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 21 12:34:53.503234 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 21 12:34:53.505196 systemd-journald[1113]: Time spent on flushing to /var/log/journal/9d9b09c407bb4d61b2c42b803e92412c is 12.984ms for 872 entries.
Mar 21 12:34:53.505196 systemd-journald[1113]: System Journal (/var/log/journal/9d9b09c407bb4d61b2c42b803e92412c) is 8M, max 195.6M, 187.6M free.
Mar 21 12:34:53.534904 systemd-journald[1113]: Received client request to flush runtime journal.
Mar 21 12:34:53.509861 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 21 12:34:53.512088 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 21 12:34:53.515180 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 21 12:34:53.516456 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 21 12:34:53.517619 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 21 12:34:53.520808 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 21 12:34:53.522051 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 21 12:34:53.530847 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 21 12:34:53.535940 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 21 12:34:53.542771 kernel: loop0: detected capacity change from 0 to 103832
Mar 21 12:34:53.540889 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 21 12:34:53.545857 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 21 12:34:53.547522 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:34:53.552449 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Mar 21 12:34:53.552612 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Mar 21 12:34:53.560823 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 21 12:34:53.561854 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 21 12:34:53.566764 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 21 12:34:53.579623 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 21 12:34:53.583554 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 21 12:34:53.588108 kernel: loop1: detected capacity change from 0 to 194096
Mar 21 12:34:53.605676 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 21 12:34:53.608487 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 21 12:34:53.617784 kernel: loop2: detected capacity change from 0 to 126448
Mar 21 12:34:53.628084 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Mar 21 12:34:53.628102 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Mar 21 12:34:53.632223 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 12:34:53.653766 kernel: loop3: detected capacity change from 0 to 103832
Mar 21 12:34:53.658811 kernel: loop4: detected capacity change from 0 to 194096
Mar 21 12:34:53.664860 kernel: loop5: detected capacity change from 0 to 126448
Mar 21 12:34:53.668591 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 21 12:34:53.669039 (sd-merge)[1196]: Merged extensions into '/usr'.
Mar 21 12:34:53.672223 systemd[1]: Reload requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 21 12:34:53.672243 systemd[1]: Reloading...
Mar 21 12:34:53.746885 zram_generator::config[1230]: No configuration found.
Mar 21 12:34:53.779817 ldconfig[1160]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 21 12:34:53.829587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 12:34:53.879767 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 21 12:34:53.880034 systemd[1]: Reloading finished in 207 ms.
Mar 21 12:34:53.897554 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 21 12:34:53.900854 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 21 12:34:53.912082 systemd[1]: Starting ensure-sysext.service...
Mar 21 12:34:53.913800 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 21 12:34:53.926615 systemd[1]: Reload requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Mar 21 12:34:53.926631 systemd[1]: Reloading...
Mar 21 12:34:53.933246 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 21 12:34:53.933832 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 21 12:34:53.934577 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 21 12:34:53.934897 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Mar 21 12:34:53.935030 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Mar 21 12:34:53.937728 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Mar 21 12:34:53.937855 systemd-tmpfiles[1259]: Skipping /boot
Mar 21 12:34:53.946987 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Mar 21 12:34:53.947090 systemd-tmpfiles[1259]: Skipping /boot
Mar 21 12:34:53.983874 zram_generator::config[1292]: No configuration found.
Mar 21 12:34:54.063194 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 12:34:54.114063 systemd[1]: Reloading finished in 187 ms.
Mar 21 12:34:54.124403 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 21 12:34:54.130254 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 12:34:54.144989 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 21 12:34:54.147295 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 21 12:34:54.160481 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 21 12:34:54.163334 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 21 12:34:54.168827 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 12:34:54.174391 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 21 12:34:54.179542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:34:54.180908 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:34:54.183920 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:34:54.186182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:34:54.189500 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:34:54.189626 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:34:54.195074 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 21 12:34:54.197208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:34:54.198157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:34:54.199639 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:34:54.199947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:34:54.201377 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:34:54.201543 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:34:54.210714 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 21 12:34:54.213001 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:34:54.214887 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:34:54.217224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:34:54.222517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:34:54.223417 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:34:54.223532 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:34:54.228800 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 21 12:34:54.239136 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 21 12:34:54.240108 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 21 12:34:54.244042 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 21 12:34:54.245932 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:34:54.247804 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:34:54.249172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:34:54.249353 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:34:54.250406 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Mar 21 12:34:54.251391 augenrules[1363]: No rules
Mar 21 12:34:54.251226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:34:54.251393 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:34:54.252862 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 21 12:34:54.253042 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 21 12:34:54.254586 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 21 12:34:54.266492 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 21 12:34:54.267449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:34:54.269160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:34:54.272009 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 21 12:34:54.289839 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:34:54.295279 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:34:54.297632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:34:54.297778 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:34:54.297896 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 21 12:34:54.299026 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 12:34:54.301210 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 21 12:34:54.302884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:34:54.305012 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:34:54.307036 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 21 12:34:54.307196 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 21 12:34:54.314301 augenrules[1374]: /sbin/augenrules: No change
Mar 21 12:34:54.317865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:34:54.318039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:34:54.320023 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:34:54.320170 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:34:54.325309 augenrules[1419]: No rules
Mar 21 12:34:54.325289 systemd[1]: Finished ensure-sysext.service.
Mar 21 12:34:54.326961 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 21 12:34:54.328946 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 21 12:34:54.339730 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 21 12:34:54.341826 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 21 12:34:54.342621 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 21 12:34:54.342692 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 21 12:34:54.345958 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 21 12:34:54.380769 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1382)
Mar 21 12:34:54.388260 systemd-resolved[1327]: Positive Trust Anchors:
Mar 21 12:34:54.388280 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 21 12:34:54.388313 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 21 12:34:54.398669 systemd-resolved[1327]: Defaulting to hostname 'linux'.
Mar 21 12:34:54.402889 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 21 12:34:54.406964 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 21 12:34:54.418018 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 21 12:34:54.422036 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 21 12:34:54.445794 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 21 12:34:54.447061 systemd[1]: Reached target time-set.target - System Time Set.
Mar 21 12:34:54.450989 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 21 12:34:54.477387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:34:54.478673 systemd-networkd[1431]: lo: Link UP
Mar 21 12:34:54.478682 systemd-networkd[1431]: lo: Gained carrier
Mar 21 12:34:54.480157 systemd-networkd[1431]: Enumeration completed
Mar 21 12:34:54.481217 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 21 12:34:54.482492 systemd[1]: Reached target network.target - Network.
Mar 21 12:34:54.483392 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 21 12:34:54.483402 systemd-networkd[1431]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 21 12:34:54.484598 systemd-networkd[1431]: eth0: Link UP
Mar 21 12:34:54.484607 systemd-networkd[1431]: eth0: Gained carrier
Mar 21 12:34:54.484622 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 21 12:34:54.484704 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 21 12:34:54.486765 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 21 12:34:54.498509 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 21 12:34:54.498828 systemd-networkd[1431]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 21 12:34:54.499450 systemd-timesyncd[1432]: Network configuration changed, trying to establish connection.
Mar 21 12:34:54.499979 systemd-timesyncd[1432]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 21 12:34:54.500039 systemd-timesyncd[1432]: Initial clock synchronization to Fri 2025-03-21 12:34:54.582007 UTC.
Mar 21 12:34:54.507919 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 21 12:34:54.511636 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 21 12:34:54.516583 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 21 12:34:54.531519 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:34:54.550236 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 21 12:34:54.551393 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 21 12:34:54.552294 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 21 12:34:54.553220 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 21 12:34:54.554264 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 21 12:34:54.555433 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 21 12:34:54.556360 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 21 12:34:54.557330 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 21 12:34:54.558245 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 21 12:34:54.558289 systemd[1]: Reached target paths.target - Path Units.
Mar 21 12:34:54.558961 systemd[1]: Reached target timers.target - Timer Units.
Mar 21 12:34:54.560737 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 21 12:34:54.563329 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 21 12:34:54.566409 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 21 12:34:54.567502 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 21 12:34:54.568450 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 21 12:34:54.579875 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 21 12:34:54.581041 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 21 12:34:54.582982 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 21 12:34:54.584248 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 21 12:34:54.585105 systemd[1]: Reached target sockets.target - Socket Units.
Mar 21 12:34:54.585817 systemd[1]: Reached target basic.target - Basic System.
Mar 21 12:34:54.586525 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 21 12:34:54.586562 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 21 12:34:54.587513 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 21 12:34:54.589267 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 21 12:34:54.590962 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 21 12:34:54.592865 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 21 12:34:54.596935 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 21 12:34:54.597727 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 21 12:34:54.600822 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 21 12:34:54.603974 jq[1463]: false
Mar 21 12:34:54.604407 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 21 12:34:54.606900 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 21 12:34:54.608816 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 21 12:34:54.611790 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 21 12:34:54.613458 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 21 12:34:54.613886 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 21 12:34:54.616990 systemd[1]: Starting update-engine.service - Update Engine...
Mar 21 12:34:54.619211 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 21 12:34:54.621834 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 21 12:34:54.626070 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 21 12:34:54.626260 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 21 12:34:54.629012 extend-filesystems[1464]: Found loop3
Mar 21 12:34:54.629012 extend-filesystems[1464]: Found loop4
Mar 21 12:34:54.632905 extend-filesystems[1464]: Found loop5
Mar 21 12:34:54.632905 extend-filesystems[1464]: Found vda
Mar 21 12:34:54.632905 extend-filesystems[1464]: Found vda1
Mar 21 12:34:54.632905 extend-filesystems[1464]: Found vda2
Mar 21 12:34:54.632905 extend-filesystems[1464]: Found vda3
Mar 21 12:34:54.632905 extend-filesystems[1464]: Found usr
Mar 21 12:34:54.632905 extend-filesystems[1464]: Found vda4
Mar 21 12:34:54.632905 extend-filesystems[1464]: Found vda6
Mar 21 12:34:54.632905 extend-filesystems[1464]: Found vda7
Mar 21 12:34:54.632905 extend-filesystems[1464]: Found vda9
Mar 21 12:34:54.632905 extend-filesystems[1464]: Checking size of /dev/vda9
Mar 21 12:34:54.637907 dbus-daemon[1462]: [system] SELinux support is enabled
Mar 21 12:34:54.672614 jq[1474]: true
Mar 21 12:34:54.635527 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 21 12:34:54.672895 extend-filesystems[1464]: Resized partition /dev/vda9
Mar 21 12:34:54.637772 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 21 12:34:54.681007 tar[1476]: linux-arm64/helm
Mar 21 12:34:54.641322 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 21 12:34:54.652171 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 21 12:34:54.681661 jq[1488]: true
Mar 21 12:34:54.652224 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 21 12:34:54.656123 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 21 12:34:54.656144 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 21 12:34:54.670034 (ntainerd)[1489]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 21 12:34:54.672917 systemd[1]: motdgen.service: Deactivated successfully.
Mar 21 12:34:54.674785 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 21 12:34:54.690151 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1393)
Mar 21 12:34:54.694539 extend-filesystems[1496]: resize2fs 1.47.2 (1-Jan-2025)
Mar 21 12:34:54.701761 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 21 12:34:54.708827 update_engine[1472]: I20250321 12:34:54.708081 1472 main.cc:92] Flatcar Update Engine starting
Mar 21 12:34:54.718372 systemd[1]: Started update-engine.service - Update Engine.
Mar 21 12:34:54.718763 update_engine[1472]: I20250321 12:34:54.718456 1472 update_check_scheduler.cc:74] Next update check in 9m27s
Mar 21 12:34:54.725902 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 21 12:34:54.732759 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 21 12:34:54.742288 systemd-logind[1470]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 21 12:34:54.742693 systemd-logind[1470]: New seat seat0.
Mar 21 12:34:54.746513 extend-filesystems[1496]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 21 12:34:54.746513 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 21 12:34:54.746513 extend-filesystems[1496]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 21 12:34:54.748689 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 21 12:34:54.755586 extend-filesystems[1464]: Resized filesystem in /dev/vda9
Mar 21 12:34:54.750406 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 21 12:34:54.751190 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 21 12:34:54.770405 bash[1516]: Updated "/home/core/.ssh/authorized_keys"
Mar 21 12:34:54.773059 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 21 12:34:54.775486 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 21 12:34:54.800126 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 21 12:34:54.905977 containerd[1489]: time="2025-03-21T12:34:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 21 12:34:54.908334 containerd[1489]: time="2025-03-21T12:34:54.908291320Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
Mar 21 12:34:54.920708 containerd[1489]: time="2025-03-21T12:34:54.920667960Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.24µs"
Mar 21 12:34:54.920708 containerd[1489]: time="2025-03-21T12:34:54.920703320Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 21 12:34:54.920814 containerd[1489]: time="2025-03-21T12:34:54.920725240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 21 12:34:54.920901 containerd[1489]: time="2025-03-21T12:34:54.920875480Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 21 12:34:54.920939 containerd[1489]: time="2025-03-21T12:34:54.920901280Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 21 12:34:54.920939 containerd[1489]: time="2025-03-21T12:34:54.920931880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 21 12:34:54.921005 containerd[1489]: time="2025-03-21T12:34:54.920986840Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 21 12:34:54.921029 containerd[1489]: time="2025-03-21T12:34:54.921007320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 21 12:34:54.921292 containerd[1489]: time="2025-03-21T12:34:54.921270280Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 21 12:34:54.921323 containerd[1489]: time="2025-03-21T12:34:54.921294240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 21 12:34:54.921323 containerd[1489]: time="2025-03-21T12:34:54.921310080Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 21 12:34:54.921366 containerd[1489]: time="2025-03-21T12:34:54.921322040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 21 12:34:54.921422 containerd[1489]: time="2025-03-21T12:34:54.921404840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 21 12:34:54.921614 containerd[1489]: time="2025-03-21T12:34:54.921594040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 21 12:34:54.921647 containerd[1489]: time="2025-03-21T12:34:54.921634080Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 21 12:34:54.921674 containerd[1489]: time="2025-03-21T12:34:54.921645760Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 21 12:34:54.921694 containerd[1489]: time="2025-03-21T12:34:54.921674000Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 21 12:34:54.921999 containerd[1489]: time="2025-03-21T12:34:54.921980040Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 21 12:34:54.922094 containerd[1489]: time="2025-03-21T12:34:54.922048600Z" level=info msg="metadata content store policy set" policy=shared
Mar 21 12:34:54.925618 containerd[1489]: time="2025-03-21T12:34:54.925583040Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 21 12:34:54.925669 containerd[1489]: time="2025-03-21T12:34:54.925631480Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 21 12:34:54.925669 containerd[1489]: time="2025-03-21T12:34:54.925646400Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 21 12:34:54.925669 containerd[1489]: time="2025-03-21T12:34:54.925660720Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 21 12:34:54.925718 containerd[1489]: time="2025-03-21T12:34:54.925680160Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 21 12:34:54.925718 containerd[1489]: time="2025-03-21T12:34:54.925692200Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 21 12:34:54.925718 containerd[1489]: time="2025-03-21T12:34:54.925703120Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 21 12:34:54.925718 containerd[1489]: time="2025-03-21T12:34:54.925714720Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 21 12:34:54.925810 containerd[1489]: time="2025-03-21T12:34:54.925728120Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 21 12:34:54.925810 containerd[1489]: time="2025-03-21T12:34:54.925739000Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 21 12:34:54.925810 containerd[1489]: time="2025-03-21T12:34:54.925759520Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 21 12:34:54.925810 containerd[1489]: time="2025-03-21T12:34:54.925776760Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 21 12:34:54.925960 containerd[1489]: time="2025-03-21T12:34:54.925878360Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 21 12:34:54.925960 containerd[1489]: time="2025-03-21T12:34:54.925907640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 21 12:34:54.925960 containerd[1489]: time="2025-03-21T12:34:54.925919840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 21 12:34:54.925960 containerd[1489]: time="2025-03-21T12:34:54.925929600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 21 12:34:54.925960 containerd[1489]: time="2025-03-21T12:34:54.925939560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 21 12:34:54.925960 containerd[1489]: time="2025-03-21T12:34:54.925950200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 21 12:34:54.925960 containerd[1489]: time="2025-03-21T12:34:54.925962280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 21 12:34:54.926148 containerd[1489]: time="2025-03-21T12:34:54.925972360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 21 12:34:54.926148 containerd[1489]: time="2025-03-21T12:34:54.925991360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 21 12:34:54.926148 containerd[1489]: time="2025-03-21T12:34:54.926001920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 21 12:34:54.926148 containerd[1489]: time="2025-03-21T12:34:54.926012320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 21 12:34:54.926281 containerd[1489]: time="2025-03-21T12:34:54.926264400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 21 12:34:54.926312 containerd[1489]: time="2025-03-21T12:34:54.926282480Z" level=info msg="Start snapshots syncer"
Mar 21 12:34:54.926312 containerd[1489]: time="2025-03-21T12:34:54.926303360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 21 12:34:54.926587 containerd[1489]: time="2025-03-21T12:34:54.926528480Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 21 12:34:54.926587 containerd[1489]: time="2025-03-21T12:34:54.926584280Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 21 12:34:54.926711 containerd[1489]: time="2025-03-21T12:34:54.926647160Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 21 12:34:54.927007 containerd[1489]: time="2025-03-21T12:34:54.926739680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 21 12:34:54.927007 containerd[1489]: time="2025-03-21T12:34:54.926788360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 21 12:34:54.927007 containerd[1489]: time="2025-03-21T12:34:54.926800800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 21 12:34:54.927007 containerd[1489]: time="2025-03-21T12:34:54.926810720Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 21 12:34:54.927007 containerd[1489]: time="2025-03-21T12:34:54.926822680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 21 12:34:54.927007 containerd[1489]: time="2025-03-21T12:34:54.926833640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 21 12:34:54.927007 containerd[1489]: time="2025-03-21T12:34:54.926844320Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 21 12:34:54.927007 containerd[1489]: time="2025-03-21T12:34:54.926868160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 21 12:34:54.927007 containerd[1489]: time="2025-03-21T12:34:54.926879640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 21 12:34:54.927007 containerd[1489]: time="2025-03-21T12:34:54.926889480Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 21 12:34:54.927007 containerd[1489]: time="2025-03-21T12:34:54.926922640Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 21 12:34:54.927007 containerd[1489]: time="2025-03-21T12:34:54.926935200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 21 12:34:54.927007 containerd[1489]: time="2025-03-21T12:34:54.926944360Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 21 12:34:54.927250 containerd[1489]: time="2025-03-21T12:34:54.926953600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 21 12:34:54.927250 containerd[1489]: time="2025-03-21T12:34:54.926961440Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 21 12:34:54.927250 containerd[1489]: time="2025-03-21T12:34:54.926970400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 21 12:34:54.927250 containerd[1489]: time="2025-03-21T12:34:54.926980080Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 21 12:34:54.927250 containerd[1489]: time="2025-03-21T12:34:54.927057360Z" level=info msg="runtime interface created"
Mar 21 12:34:54.927250 containerd[1489]: time="2025-03-21T12:34:54.927063480Z" level=info msg="created NRI interface"
Mar 21 12:34:54.927250 containerd[1489]: time="2025-03-21T12:34:54.927071600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 21 12:34:54.927250 containerd[1489]: time="2025-03-21T12:34:54.927082120Z" level=info msg="Connect containerd service"
Mar 21 12:34:54.927250 containerd[1489]: time="2025-03-21T12:34:54.927107560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 21 12:34:54.927681 containerd[1489]: time="2025-03-21T12:34:54.927644400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 21 12:34:55.032276 containerd[1489]: time="2025-03-21T12:34:55.032228705Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 21 12:34:55.032427 containerd[1489]: time="2025-03-21T12:34:55.032293958Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 21 12:34:55.032427 containerd[1489]: time="2025-03-21T12:34:55.032331550Z" level=info msg="Start subscribing containerd event"
Mar 21 12:34:55.032427 containerd[1489]: time="2025-03-21T12:34:55.032367855Z" level=info msg="Start recovering state"
Mar 21 12:34:55.032480 containerd[1489]: time="2025-03-21T12:34:55.032441833Z" level=info msg="Start event monitor"
Mar 21 12:34:55.032480 containerd[1489]: time="2025-03-21T12:34:55.032454699Z" level=info msg="Start cni network conf syncer for default"
Mar 21 12:34:55.032480 containerd[1489]: time="2025-03-21T12:34:55.032461172Z" level=info msg="Start streaming server"
Mar 21 12:34:55.032480 containerd[1489]: time="2025-03-21T12:34:55.032468610Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 21 12:34:55.032480 containerd[1489]: time="2025-03-21T12:34:55.032475163Z" level=info msg="runtime interface starting up..."
Mar 21 12:34:55.032480 containerd[1489]: time="2025-03-21T12:34:55.032480350Z" level=info msg="starting plugins..."
Mar 21 12:34:55.032591 containerd[1489]: time="2025-03-21T12:34:55.032492572Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 21 12:34:55.033539 containerd[1489]: time="2025-03-21T12:34:55.032608283Z" level=info msg="containerd successfully booted in 0.127035s"
Mar 21 12:34:55.032710 systemd[1]: Started containerd.service - containerd container runtime.
Mar 21 12:34:55.053430 tar[1476]: linux-arm64/LICENSE
Mar 21 12:34:55.054775 tar[1476]: linux-arm64/README.md
Mar 21 12:34:55.069889 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 21 12:34:56.396786 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 21 12:34:56.415143 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 21 12:34:56.420455 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 21 12:34:56.436113 systemd[1]: issuegen.service: Deactivated successfully.
Mar 21 12:34:56.437791 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 21 12:34:56.440200 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 21 12:34:56.463333 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 21 12:34:56.465902 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 21 12:34:56.467677 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 21 12:34:56.468764 systemd[1]: Reached target getty.target - Login Prompts.
Mar 21 12:34:56.550048 systemd-networkd[1431]: eth0: Gained IPv6LL
Mar 21 12:34:56.552394 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 21 12:34:56.554397 systemd[1]: Reached target network-online.target - Network is Online.
Mar 21 12:34:56.557049 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 21 12:34:56.559193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:34:56.570777 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 21 12:34:56.585887 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 21 12:34:56.586074 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 21 12:34:56.588025 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 21 12:34:56.591540 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 21 12:34:56.721463 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 21 12:34:56.723762 systemd[1]: Started sshd@0-10.0.0.98:22-10.0.0.1:52496.service - OpenSSH per-connection server daemon (10.0.0.1:52496).
Mar 21 12:34:56.800211 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 52496 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:34:56.802051 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:34:56.808183 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 21 12:34:56.810316 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 21 12:34:56.817268 systemd-logind[1470]: New session 1 of user core.
Mar 21 12:34:56.839254 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 21 12:34:56.844924 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 21 12:34:56.858500 (systemd)[1591]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 21 12:34:56.860610 systemd-logind[1470]: New session c1 of user core.
Mar 21 12:34:56.967271 systemd[1591]: Queued start job for default target default.target.
Mar 21 12:34:56.981713 systemd[1591]: Created slice app.slice - User Application Slice.
Mar 21 12:34:56.981745 systemd[1591]: Reached target paths.target - Paths.
Mar 21 12:34:56.981804 systemd[1591]: Reached target timers.target - Timers.
Mar 21 12:34:56.983057 systemd[1591]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 21 12:34:56.992318 systemd[1591]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 21 12:34:56.992383 systemd[1591]: Reached target sockets.target - Sockets.
Mar 21 12:34:56.992421 systemd[1591]: Reached target basic.target - Basic System.
Mar 21 12:34:56.992449 systemd[1591]: Reached target default.target - Main User Target. Mar 21 12:34:56.992476 systemd[1591]: Startup finished in 126ms. Mar 21 12:34:56.992713 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 21 12:34:56.994907 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 21 12:34:57.061699 systemd[1]: Started sshd@1-10.0.0.98:22-10.0.0.1:52500.service - OpenSSH per-connection server daemon (10.0.0.1:52500). Mar 21 12:34:57.066800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:34:57.068415 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 21 12:34:57.070304 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 21 12:34:57.074850 systemd[1]: Startup finished in 517ms (kernel) + 9.120s (initrd) + 4.294s (userspace) = 13.932s. Mar 21 12:34:57.110409 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 52500 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:34:57.111704 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:34:57.121776 systemd-logind[1470]: New session 2 of user core. Mar 21 12:34:57.127918 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 21 12:34:57.178910 sshd[1614]: Connection closed by 10.0.0.1 port 52500 Mar 21 12:34:57.179236 sshd-session[1606]: pam_unix(sshd:session): session closed for user core Mar 21 12:34:57.192964 systemd[1]: sshd@1-10.0.0.98:22-10.0.0.1:52500.service: Deactivated successfully. Mar 21 12:34:57.195328 systemd[1]: session-2.scope: Deactivated successfully. Mar 21 12:34:57.198740 systemd-logind[1470]: Session 2 logged out. Waiting for processes to exit. Mar 21 12:34:57.200603 systemd[1]: Started sshd@2-10.0.0.98:22-10.0.0.1:52504.service - OpenSSH per-connection server daemon (10.0.0.1:52504). 
Mar 21 12:34:57.201309 systemd-logind[1470]: Removed session 2. Mar 21 12:34:57.243168 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 52504 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:34:57.244288 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:34:57.248289 systemd-logind[1470]: New session 3 of user core. Mar 21 12:34:57.259096 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 21 12:34:57.306869 sshd[1627]: Connection closed by 10.0.0.1 port 52504 Mar 21 12:34:57.307150 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Mar 21 12:34:57.324837 systemd[1]: sshd@2-10.0.0.98:22-10.0.0.1:52504.service: Deactivated successfully. Mar 21 12:34:57.326277 systemd[1]: session-3.scope: Deactivated successfully. Mar 21 12:34:57.327888 systemd-logind[1470]: Session 3 logged out. Waiting for processes to exit. Mar 21 12:34:57.329819 systemd[1]: Started sshd@3-10.0.0.98:22-10.0.0.1:52514.service - OpenSSH per-connection server daemon (10.0.0.1:52514). Mar 21 12:34:57.330896 systemd-logind[1470]: Removed session 3. Mar 21 12:34:57.383742 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 52514 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:34:57.384842 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:34:57.389078 systemd-logind[1470]: New session 4 of user core. Mar 21 12:34:57.405887 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 21 12:34:57.456449 sshd[1636]: Connection closed by 10.0.0.1 port 52514 Mar 21 12:34:57.456932 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Mar 21 12:34:57.468014 systemd[1]: sshd@3-10.0.0.98:22-10.0.0.1:52514.service: Deactivated successfully. Mar 21 12:34:57.469520 systemd[1]: session-4.scope: Deactivated successfully. Mar 21 12:34:57.470137 systemd-logind[1470]: Session 4 logged out. 
Waiting for processes to exit. Mar 21 12:34:57.472030 systemd[1]: Started sshd@4-10.0.0.98:22-10.0.0.1:52526.service - OpenSSH per-connection server daemon (10.0.0.1:52526). Mar 21 12:34:57.473267 systemd-logind[1470]: Removed session 4. Mar 21 12:34:57.520389 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 52526 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:34:57.521651 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:34:57.525629 systemd-logind[1470]: New session 5 of user core. Mar 21 12:34:57.536890 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 21 12:34:57.572692 kubelet[1608]: E0321 12:34:57.572656 1608 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 21 12:34:57.575185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 21 12:34:57.575328 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 21 12:34:57.575680 systemd[1]: kubelet.service: Consumed 828ms CPU time, 241.3M memory peak. Mar 21 12:34:57.594208 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 21 12:34:57.594464 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:34:57.608717 sudo[1648]: pam_unix(sudo:session): session closed for user root Mar 21 12:34:57.610626 sshd[1646]: Connection closed by 10.0.0.1 port 52526 Mar 21 12:34:57.610521 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Mar 21 12:34:57.624900 systemd[1]: sshd@4-10.0.0.98:22-10.0.0.1:52526.service: Deactivated successfully. Mar 21 12:34:57.626386 systemd[1]: session-5.scope: Deactivated successfully. 
Mar 21 12:34:57.627048 systemd-logind[1470]: Session 5 logged out. Waiting for processes to exit. Mar 21 12:34:57.628818 systemd[1]: Started sshd@5-10.0.0.98:22-10.0.0.1:52542.service - OpenSSH per-connection server daemon (10.0.0.1:52542). Mar 21 12:34:57.629644 systemd-logind[1470]: Removed session 5. Mar 21 12:34:57.673134 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 52542 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:34:57.674192 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:34:57.677680 systemd-logind[1470]: New session 6 of user core. Mar 21 12:34:57.685923 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 21 12:34:57.735464 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 21 12:34:57.735737 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:34:57.738377 sudo[1658]: pam_unix(sudo:session): session closed for user root Mar 21 12:34:57.742580 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 21 12:34:57.742881 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:34:57.749891 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 21 12:34:57.784866 augenrules[1680]: No rules Mar 21 12:34:57.785865 systemd[1]: audit-rules.service: Deactivated successfully. Mar 21 12:34:57.786079 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 21 12:34:57.787159 sudo[1657]: pam_unix(sudo:session): session closed for user root Mar 21 12:34:57.788166 sshd[1656]: Connection closed by 10.0.0.1 port 52542 Mar 21 12:34:57.788459 sshd-session[1653]: pam_unix(sshd:session): session closed for user core Mar 21 12:34:57.807733 systemd[1]: sshd@5-10.0.0.98:22-10.0.0.1:52542.service: Deactivated successfully. 
Mar 21 12:34:57.809187 systemd[1]: session-6.scope: Deactivated successfully. Mar 21 12:34:57.810370 systemd-logind[1470]: Session 6 logged out. Waiting for processes to exit. Mar 21 12:34:57.811470 systemd[1]: Started sshd@6-10.0.0.98:22-10.0.0.1:52554.service - OpenSSH per-connection server daemon (10.0.0.1:52554). Mar 21 12:34:57.812204 systemd-logind[1470]: Removed session 6. Mar 21 12:34:57.857672 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 52554 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:34:57.858664 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:34:57.862145 systemd-logind[1470]: New session 7 of user core. Mar 21 12:34:57.874874 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 21 12:34:57.924150 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 21 12:34:57.924673 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:34:58.259290 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 21 12:34:58.272040 (dockerd)[1712]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 21 12:34:58.528308 dockerd[1712]: time="2025-03-21T12:34:58.528184516Z" level=info msg="Starting up" Mar 21 12:34:58.530080 dockerd[1712]: time="2025-03-21T12:34:58.530054911Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 21 12:34:58.719534 dockerd[1712]: time="2025-03-21T12:34:58.719486372Z" level=info msg="Loading containers: start." Mar 21 12:34:58.854784 kernel: Initializing XFRM netlink socket Mar 21 12:34:58.910069 systemd-networkd[1431]: docker0: Link UP Mar 21 12:34:58.966958 dockerd[1712]: time="2025-03-21T12:34:58.966917969Z" level=info msg="Loading containers: done." 
Mar 21 12:34:58.980602 dockerd[1712]: time="2025-03-21T12:34:58.980203428Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 21 12:34:58.980602 dockerd[1712]: time="2025-03-21T12:34:58.980282401Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 21 12:34:58.980602 dockerd[1712]: time="2025-03-21T12:34:58.980449304Z" level=info msg="Daemon has completed initialization" Mar 21 12:34:59.007963 dockerd[1712]: time="2025-03-21T12:34:59.007879294Z" level=info msg="API listen on /run/docker.sock" Mar 21 12:34:59.008157 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 21 12:34:59.778177 containerd[1489]: time="2025-03-21T12:34:59.777969942Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 21 12:35:00.402190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169290162.mount: Deactivated successfully. 
Mar 21 12:35:01.701572 containerd[1489]: time="2025-03-21T12:35:01.701521685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:01.702481 containerd[1489]: time="2025-03-21T12:35:01.702402744Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793526" Mar 21 12:35:01.703057 containerd[1489]: time="2025-03-21T12:35:01.703018281Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:01.705440 containerd[1489]: time="2025-03-21T12:35:01.705413678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:01.706843 containerd[1489]: time="2025-03-21T12:35:01.706806353Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 1.928794369s" Mar 21 12:35:01.706908 containerd[1489]: time="2025-03-21T12:35:01.706846051Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 21 12:35:01.721842 containerd[1489]: time="2025-03-21T12:35:01.721812575Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 21 12:35:03.426291 containerd[1489]: time="2025-03-21T12:35:03.426207899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:03.427372 containerd[1489]: time="2025-03-21T12:35:03.427208032Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861169" Mar 21 12:35:03.428153 containerd[1489]: time="2025-03-21T12:35:03.428097347Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:03.432089 containerd[1489]: time="2025-03-21T12:35:03.432051338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:03.432933 containerd[1489]: time="2025-03-21T12:35:03.432810977Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 1.710966937s" Mar 21 12:35:03.432933 containerd[1489]: time="2025-03-21T12:35:03.432851581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 21 12:35:03.451153 containerd[1489]: time="2025-03-21T12:35:03.450916932Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 21 12:35:04.890084 containerd[1489]: time="2025-03-21T12:35:04.890036087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:04.891072 containerd[1489]: 
time="2025-03-21T12:35:04.890793857Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264638" Mar 21 12:35:04.891725 containerd[1489]: time="2025-03-21T12:35:04.891694876Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:04.894872 containerd[1489]: time="2025-03-21T12:35:04.894817936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:04.895853 containerd[1489]: time="2025-03-21T12:35:04.895826383Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 1.444874506s" Mar 21 12:35:04.895888 containerd[1489]: time="2025-03-21T12:35:04.895859638Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 21 12:35:04.910911 containerd[1489]: time="2025-03-21T12:35:04.910824556Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 21 12:35:06.017114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524862202.mount: Deactivated successfully. 
Mar 21 12:35:06.205873 containerd[1489]: time="2025-03-21T12:35:06.205819601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:06.206958 containerd[1489]: time="2025-03-21T12:35:06.206911750Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771850" Mar 21 12:35:06.209310 containerd[1489]: time="2025-03-21T12:35:06.207726440Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:06.209960 containerd[1489]: time="2025-03-21T12:35:06.209932473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:06.211250 containerd[1489]: time="2025-03-21T12:35:06.211223522Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.300363587s" Mar 21 12:35:06.213881 containerd[1489]: time="2025-03-21T12:35:06.211332596Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 21 12:35:06.230739 containerd[1489]: time="2025-03-21T12:35:06.230706798Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 21 12:35:06.782418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3546666100.mount: Deactivated successfully. 
Mar 21 12:35:07.594813 containerd[1489]: time="2025-03-21T12:35:07.594763430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:07.595901 containerd[1489]: time="2025-03-21T12:35:07.595854124Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Mar 21 12:35:07.597102 containerd[1489]: time="2025-03-21T12:35:07.596764512Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:07.599682 containerd[1489]: time="2025-03-21T12:35:07.599653009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:07.600918 containerd[1489]: time="2025-03-21T12:35:07.600847147Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.369965873s" Mar 21 12:35:07.600980 containerd[1489]: time="2025-03-21T12:35:07.600916270Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 21 12:35:07.616018 containerd[1489]: time="2025-03-21T12:35:07.615979217Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 21 12:35:07.825834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 21 12:35:07.827541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 21 12:35:07.948023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:35:07.951495 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 21 12:35:07.988617 kubelet[2095]: E0321 12:35:07.988567 2095 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 21 12:35:07.991645 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 21 12:35:07.991816 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 21 12:35:07.992201 systemd[1]: kubelet.service: Consumed 135ms CPU time, 97.4M memory peak. Mar 21 12:35:08.142127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2442033723.mount: Deactivated successfully. 
Mar 21 12:35:08.147378 containerd[1489]: time="2025-03-21T12:35:08.147330818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:08.148040 containerd[1489]: time="2025-03-21T12:35:08.147990198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Mar 21 12:35:08.148679 containerd[1489]: time="2025-03-21T12:35:08.148648013Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:08.150564 containerd[1489]: time="2025-03-21T12:35:08.150516548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:08.151206 containerd[1489]: time="2025-03-21T12:35:08.151118439Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 535.10803ms" Mar 21 12:35:08.151206 containerd[1489]: time="2025-03-21T12:35:08.151153277Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 21 12:35:08.166597 containerd[1489]: time="2025-03-21T12:35:08.166516872Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 21 12:35:08.676495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount778228947.mount: Deactivated successfully. 
Mar 21 12:35:10.901363 containerd[1489]: time="2025-03-21T12:35:10.901309568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:10.901927 containerd[1489]: time="2025-03-21T12:35:10.901872062Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Mar 21 12:35:10.902687 containerd[1489]: time="2025-03-21T12:35:10.902626690Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:10.914901 containerd[1489]: time="2025-03-21T12:35:10.914830149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:10.916110 containerd[1489]: time="2025-03-21T12:35:10.916029322Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.749478818s" Mar 21 12:35:10.916110 containerd[1489]: time="2025-03-21T12:35:10.916068638Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 21 12:35:15.472851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:35:15.472984 systemd[1]: kubelet.service: Consumed 135ms CPU time, 97.4M memory peak. Mar 21 12:35:15.474922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:35:15.491810 systemd[1]: Reload requested from client PID 2258 ('systemctl') (unit session-7.scope)... 
Mar 21 12:35:15.491827 systemd[1]: Reloading... Mar 21 12:35:15.560784 zram_generator::config[2303]: No configuration found. Mar 21 12:35:15.652377 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 21 12:35:15.724331 systemd[1]: Reloading finished in 232 ms. Mar 21 12:35:15.771227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:35:15.772883 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:35:15.774935 systemd[1]: kubelet.service: Deactivated successfully. Mar 21 12:35:15.775138 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:35:15.775176 systemd[1]: kubelet.service: Consumed 85ms CPU time, 82.4M memory peak. Mar 21 12:35:15.776553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:35:15.899650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:35:15.903606 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 21 12:35:16.022124 kubelet[2349]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 21 12:35:16.022124 kubelet[2349]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 21 12:35:16.022124 kubelet[2349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 21 12:35:16.023165 kubelet[2349]: I0321 12:35:16.023112 2349 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 21 12:35:16.817579 kubelet[2349]: I0321 12:35:16.817524 2349 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 21 12:35:16.817579 kubelet[2349]: I0321 12:35:16.817554 2349 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 21 12:35:16.817778 kubelet[2349]: I0321 12:35:16.817762 2349 server.go:927] "Client rotation is on, will bootstrap in background" Mar 21 12:35:16.834352 kubelet[2349]: E0321 12:35:16.834317 2349 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.98:6443: connect: connection refused Mar 21 12:35:16.834467 kubelet[2349]: I0321 12:35:16.834385 2349 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 21 12:35:16.856923 kubelet[2349]: I0321 12:35:16.856888 2349 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Mar 21 12:35:16.858646 kubelet[2349]: I0321 12:35:16.858592 2349 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 21 12:35:16.858844 kubelet[2349]: I0321 12:35:16.858646 2349 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 21 12:35:16.859051 kubelet[2349]: I0321 12:35:16.859030 2349 topology_manager.go:138] "Creating topology manager with none policy"
Mar 21 12:35:16.859051 kubelet[2349]: I0321 12:35:16.859043 2349 container_manager_linux.go:301] "Creating device plugin manager"
Mar 21 12:35:16.859344 kubelet[2349]: I0321 12:35:16.859324 2349 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 12:35:16.860996 kubelet[2349]: I0321 12:35:16.860974 2349 kubelet.go:400] "Attempting to sync node with API server"
Mar 21 12:35:16.861044 kubelet[2349]: I0321 12:35:16.860998 2349 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 21 12:35:16.861774 kubelet[2349]: I0321 12:35:16.861758 2349 kubelet.go:312] "Adding apiserver pod source"
Mar 21 12:35:16.862295 kubelet[2349]: I0321 12:35:16.862275 2349 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 21 12:35:16.866844 kubelet[2349]: W0321 12:35:16.866793 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:16.866898 kubelet[2349]: E0321 12:35:16.866850 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:16.866898 kubelet[2349]: W0321 12:35:16.866793 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:16.866898 kubelet[2349]: E0321 12:35:16.866873 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:16.871820 kubelet[2349]: I0321 12:35:16.871004 2349 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 21 12:35:16.871948 kubelet[2349]: I0321 12:35:16.871930 2349 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 21 12:35:16.872483 kubelet[2349]: W0321 12:35:16.872458 2349 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 21 12:35:16.874024 kubelet[2349]: I0321 12:35:16.873989 2349 server.go:1264] "Started kubelet"
Mar 21 12:35:16.876284 kubelet[2349]: I0321 12:35:16.875495 2349 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 21 12:35:16.876284 kubelet[2349]: I0321 12:35:16.875817 2349 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 21 12:35:16.876284 kubelet[2349]: I0321 12:35:16.875866 2349 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 21 12:35:16.876390 kubelet[2349]: I0321 12:35:16.876325 2349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 21 12:35:16.878940 kubelet[2349]: I0321 12:35:16.878916 2349 server.go:455] "Adding debug handlers to kubelet server"
Mar 21 12:35:16.886636 kubelet[2349]: E0321 12:35:16.880554 2349 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182ed18ebfe4e529 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-21 12:35:16.873970985 +0000 UTC m=+0.967365124,LastTimestamp:2025-03-21 12:35:16.873970985 +0000 UTC m=+0.967365124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 21 12:35:16.886636 kubelet[2349]: I0321 12:35:16.882206 2349 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 21 12:35:16.886636 kubelet[2349]: I0321 12:35:16.883879 2349 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 21 12:35:16.886636 kubelet[2349]: I0321 12:35:16.883959 2349 reconciler.go:26] "Reconciler: start to sync state"
Mar 21 12:35:16.886636 kubelet[2349]: W0321 12:35:16.884302 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:16.886636 kubelet[2349]: E0321 12:35:16.884344 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:16.886636 kubelet[2349]: E0321 12:35:16.884648 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="200ms"
Mar 21 12:35:16.886916 kubelet[2349]: E0321 12:35:16.885138 2349 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 21 12:35:16.887625 kubelet[2349]: I0321 12:35:16.887602 2349 factory.go:221] Registration of the containerd container factory successfully
Mar 21 12:35:16.887625 kubelet[2349]: I0321 12:35:16.887622 2349 factory.go:221] Registration of the systemd container factory successfully
Mar 21 12:35:16.887770 kubelet[2349]: I0321 12:35:16.887725 2349 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 21 12:35:16.898305 kubelet[2349]: I0321 12:35:16.898254 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 21 12:35:16.899435 kubelet[2349]: I0321 12:35:16.899402 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 21 12:35:16.899435 kubelet[2349]: I0321 12:35:16.899427 2349 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 21 12:35:16.900470 kubelet[2349]: I0321 12:35:16.899892 2349 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 21 12:35:16.900470 kubelet[2349]: E0321 12:35:16.899950 2349 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 21 12:35:16.900470 kubelet[2349]: W0321 12:35:16.900460 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:16.900582 kubelet[2349]: E0321 12:35:16.900492 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:16.902562 kubelet[2349]: I0321 12:35:16.901948 2349 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 21 12:35:16.902562 kubelet[2349]: I0321 12:35:16.901964 2349 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 21 12:35:16.902562 kubelet[2349]: I0321 12:35:16.901981 2349 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 12:35:16.905671 kubelet[2349]: I0321 12:35:16.905631 2349 policy_none.go:49] "None policy: Start"
Mar 21 12:35:16.906282 kubelet[2349]: I0321 12:35:16.906249 2349 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 21 12:35:16.906282 kubelet[2349]: I0321 12:35:16.906280 2349 state_mem.go:35] "Initializing new in-memory state store"
Mar 21 12:35:16.911603 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 21 12:35:16.930589 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 21 12:35:16.933297 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 21 12:35:16.944492 kubelet[2349]: I0321 12:35:16.944466 2349 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 21 12:35:16.945328 kubelet[2349]: I0321 12:35:16.944912 2349 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 21 12:35:16.945328 kubelet[2349]: I0321 12:35:16.945034 2349 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 21 12:35:16.946375 kubelet[2349]: E0321 12:35:16.946352 2349 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 21 12:35:16.985839 kubelet[2349]: I0321 12:35:16.985817 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:35:16.986236 kubelet[2349]: E0321 12:35:16.986191 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost"
Mar 21 12:35:17.000464 kubelet[2349]: I0321 12:35:17.000405 2349 topology_manager.go:215] "Topology Admit Handler" podUID="67eb3feb5e4f77626ae6242d7064bc42" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 21 12:35:17.001322 kubelet[2349]: I0321 12:35:17.001296 2349 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 21 12:35:17.002360 kubelet[2349]: I0321 12:35:17.002325 2349 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 21 12:35:17.009046 systemd[1]: Created slice kubepods-burstable-pod67eb3feb5e4f77626ae6242d7064bc42.slice - libcontainer container kubepods-burstable-pod67eb3feb5e4f77626ae6242d7064bc42.slice.
Mar 21 12:35:17.027226 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice.
Mar 21 12:35:17.030067 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice.
Mar 21 12:35:17.085399 kubelet[2349]: E0321 12:35:17.085274 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="400ms"
Mar 21 12:35:17.185847 kubelet[2349]: I0321 12:35:17.185804 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:35:17.185847 kubelet[2349]: I0321 12:35:17.185848 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 21 12:35:17.185988 kubelet[2349]: I0321 12:35:17.185879 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67eb3feb5e4f77626ae6242d7064bc42-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"67eb3feb5e4f77626ae6242d7064bc42\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:35:17.185988 kubelet[2349]: I0321 12:35:17.185898 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67eb3feb5e4f77626ae6242d7064bc42-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"67eb3feb5e4f77626ae6242d7064bc42\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:35:17.185988 kubelet[2349]: I0321 12:35:17.185923 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:35:17.185988 kubelet[2349]: I0321 12:35:17.185938 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:35:17.185988 kubelet[2349]: I0321 12:35:17.185953 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:35:17.186095 kubelet[2349]: I0321 12:35:17.185970 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67eb3feb5e4f77626ae6242d7064bc42-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"67eb3feb5e4f77626ae6242d7064bc42\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:35:17.186095 kubelet[2349]: I0321 12:35:17.185988 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:35:17.187788 kubelet[2349]: I0321 12:35:17.187768 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:35:17.188165 kubelet[2349]: E0321 12:35:17.188108 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost"
Mar 21 12:35:17.324890 kubelet[2349]: E0321 12:35:17.324855 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:17.325565 containerd[1489]: time="2025-03-21T12:35:17.325516176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:67eb3feb5e4f77626ae6242d7064bc42,Namespace:kube-system,Attempt:0,}"
Mar 21 12:35:17.329022 kubelet[2349]: E0321 12:35:17.328991 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:17.329367 containerd[1489]: time="2025-03-21T12:35:17.329336750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}"
Mar 21 12:35:17.332658 kubelet[2349]: E0321 12:35:17.332631 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:17.332983 containerd[1489]: time="2025-03-21T12:35:17.332955033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}"
Mar 21 12:35:17.486643 kubelet[2349]: E0321 12:35:17.486541 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="800ms"
Mar 21 12:35:17.590032 kubelet[2349]: I0321 12:35:17.589980 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:35:17.590345 kubelet[2349]: E0321 12:35:17.590313 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost"
Mar 21 12:35:17.890064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2089389519.mount: Deactivated successfully.
Mar 21 12:35:17.893587 containerd[1489]: time="2025-03-21T12:35:17.893537909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 21 12:35:17.894730 kubelet[2349]: W0321 12:35:17.894685 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:17.894832 kubelet[2349]: E0321 12:35:17.894740 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:17.895354 containerd[1489]: time="2025-03-21T12:35:17.895303497Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Mar 21 12:35:17.897034 containerd[1489]: time="2025-03-21T12:35:17.896990507Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 21 12:35:17.898422 containerd[1489]: time="2025-03-21T12:35:17.898393486Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 21 12:35:17.899610 containerd[1489]: time="2025-03-21T12:35:17.899577072Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 21 12:35:17.900300 containerd[1489]: time="2025-03-21T12:35:17.900277260Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 21 12:35:17.900877 containerd[1489]: time="2025-03-21T12:35:17.900653366Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 21 12:35:17.901341 containerd[1489]: time="2025-03-21T12:35:17.901313143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 21 12:35:17.902765 containerd[1489]: time="2025-03-21T12:35:17.902711997Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 571.843029ms"
Mar 21 12:35:17.903358 containerd[1489]: time="2025-03-21T12:35:17.903274133Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 575.455345ms"
Mar 21 12:35:17.907401 containerd[1489]: time="2025-03-21T12:35:17.907373292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 572.842987ms"
Mar 21 12:35:17.917055 kubelet[2349]: W0321 12:35:17.917002 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:17.917055 kubelet[2349]: E0321 12:35:17.917039 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:17.919621 containerd[1489]: time="2025-03-21T12:35:17.919591551Z" level=info msg="connecting to shim d384398661b593785a9f2d37c1805cfe1806e7417749c06e8611d06fedbe0ef2" address="unix:///run/containerd/s/99f9dea3b5eb0c8fca65088497169431e8deead35dc74aa901f106c74022408d" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:35:17.923500 containerd[1489]: time="2025-03-21T12:35:17.923236027Z" level=info msg="connecting to shim 65c804a4e49f34d8ebefe2b69ab693efc0ad9942e4653181e878a201045d7dc6" address="unix:///run/containerd/s/6af75a09b1d58a727000ccf5ca25a6ab880d004b9e959888cc12c5cbf180db1b" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:35:17.931104 containerd[1489]: time="2025-03-21T12:35:17.931060402Z" level=info msg="connecting to shim 0afa0d8b6a43dd606096a04317cb762bf7441179ce07e91f9f961b3c2dc55f18" address="unix:///run/containerd/s/405e8b107565460be9326ab7e79bfea55ec7c538136359498a86581fcee7d81d" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:35:17.949925 systemd[1]: Started cri-containerd-d384398661b593785a9f2d37c1805cfe1806e7417749c06e8611d06fedbe0ef2.scope - libcontainer container d384398661b593785a9f2d37c1805cfe1806e7417749c06e8611d06fedbe0ef2.
Mar 21 12:35:17.954628 systemd[1]: Started cri-containerd-0afa0d8b6a43dd606096a04317cb762bf7441179ce07e91f9f961b3c2dc55f18.scope - libcontainer container 0afa0d8b6a43dd606096a04317cb762bf7441179ce07e91f9f961b3c2dc55f18.
Mar 21 12:35:17.955764 systemd[1]: Started cri-containerd-65c804a4e49f34d8ebefe2b69ab693efc0ad9942e4653181e878a201045d7dc6.scope - libcontainer container 65c804a4e49f34d8ebefe2b69ab693efc0ad9942e4653181e878a201045d7dc6.
Mar 21 12:35:17.967175 kubelet[2349]: W0321 12:35:17.967117 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:17.967175 kubelet[2349]: E0321 12:35:17.967176 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Mar 21 12:35:17.986054 containerd[1489]: time="2025-03-21T12:35:17.985996111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"d384398661b593785a9f2d37c1805cfe1806e7417749c06e8611d06fedbe0ef2\""
Mar 21 12:35:17.986904 kubelet[2349]: E0321 12:35:17.986876 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:17.994492 containerd[1489]: time="2025-03-21T12:35:17.994458957Z" level=info msg="CreateContainer within sandbox \"d384398661b593785a9f2d37c1805cfe1806e7417749c06e8611d06fedbe0ef2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 21 12:35:17.995772 containerd[1489]: time="2025-03-21T12:35:17.995725886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0afa0d8b6a43dd606096a04317cb762bf7441179ce07e91f9f961b3c2dc55f18\""
Mar 21 12:35:17.996636 kubelet[2349]: E0321 12:35:17.996603 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:17.998828 containerd[1489]: time="2025-03-21T12:35:17.998473731Z" level=info msg="CreateContainer within sandbox \"0afa0d8b6a43dd606096a04317cb762bf7441179ce07e91f9f961b3c2dc55f18\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 21 12:35:18.001441 containerd[1489]: time="2025-03-21T12:35:18.001410519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:67eb3feb5e4f77626ae6242d7064bc42,Namespace:kube-system,Attempt:0,} returns sandbox id \"65c804a4e49f34d8ebefe2b69ab693efc0ad9942e4653181e878a201045d7dc6\""
Mar 21 12:35:18.002327 kubelet[2349]: E0321 12:35:18.002298 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:18.004424 containerd[1489]: time="2025-03-21T12:35:18.004399110Z" level=info msg="CreateContainer within sandbox \"65c804a4e49f34d8ebefe2b69ab693efc0ad9942e4653181e878a201045d7dc6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 21 12:35:18.004905 containerd[1489]: time="2025-03-21T12:35:18.004872340Z" level=info msg="Container b8479d898e1b25bf297ff40195af3d8ac8f256d65a88089c344cfbb6d36268fb: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:35:18.005720 containerd[1489]: time="2025-03-21T12:35:18.005695456Z" level=info msg="Container 911406cb2688b7bdb79314087af928349501e069197ca5e8a2e1be4a21b93675: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:35:18.015545 containerd[1489]: time="2025-03-21T12:35:18.015499766Z" level=info msg="CreateContainer within sandbox \"d384398661b593785a9f2d37c1805cfe1806e7417749c06e8611d06fedbe0ef2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b8479d898e1b25bf297ff40195af3d8ac8f256d65a88089c344cfbb6d36268fb\""
Mar 21 12:35:18.016071 containerd[1489]: time="2025-03-21T12:35:18.016045841Z" level=info msg="StartContainer for \"b8479d898e1b25bf297ff40195af3d8ac8f256d65a88089c344cfbb6d36268fb\""
Mar 21 12:35:18.017099 containerd[1489]: time="2025-03-21T12:35:18.017067908Z" level=info msg="connecting to shim b8479d898e1b25bf297ff40195af3d8ac8f256d65a88089c344cfbb6d36268fb" address="unix:///run/containerd/s/99f9dea3b5eb0c8fca65088497169431e8deead35dc74aa901f106c74022408d" protocol=ttrpc version=3
Mar 21 12:35:18.021353 containerd[1489]: time="2025-03-21T12:35:18.021314521Z" level=info msg="CreateContainer within sandbox \"0afa0d8b6a43dd606096a04317cb762bf7441179ce07e91f9f961b3c2dc55f18\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"911406cb2688b7bdb79314087af928349501e069197ca5e8a2e1be4a21b93675\""
Mar 21 12:35:18.021786 containerd[1489]: time="2025-03-21T12:35:18.021760840Z" level=info msg="StartContainer for \"911406cb2688b7bdb79314087af928349501e069197ca5e8a2e1be4a21b93675\""
Mar 21 12:35:18.023285 containerd[1489]: time="2025-03-21T12:35:18.022365943Z" level=info msg="Container 70b6721e246e154c728ef62245a0d7c4921039c19bd77089a99ba85afdbf386b: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:35:18.023285 containerd[1489]: time="2025-03-21T12:35:18.022970205Z" level=info msg="connecting to shim 911406cb2688b7bdb79314087af928349501e069197ca5e8a2e1be4a21b93675" address="unix:///run/containerd/s/405e8b107565460be9326ab7e79bfea55ec7c538136359498a86581fcee7d81d" protocol=ttrpc version=3
Mar 21 12:35:18.029308 containerd[1489]: time="2025-03-21T12:35:18.029272927Z" level=info msg="CreateContainer within sandbox \"65c804a4e49f34d8ebefe2b69ab693efc0ad9942e4653181e878a201045d7dc6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"70b6721e246e154c728ef62245a0d7c4921039c19bd77089a99ba85afdbf386b\""
Mar 21 12:35:18.031939 containerd[1489]: time="2025-03-21T12:35:18.031810234Z" level=info msg="StartContainer for \"70b6721e246e154c728ef62245a0d7c4921039c19bd77089a99ba85afdbf386b\""
Mar 21 12:35:18.033061 containerd[1489]: time="2025-03-21T12:35:18.033031012Z" level=info msg="connecting to shim 70b6721e246e154c728ef62245a0d7c4921039c19bd77089a99ba85afdbf386b" address="unix:///run/containerd/s/6af75a09b1d58a727000ccf5ca25a6ab880d004b9e959888cc12c5cbf180db1b" protocol=ttrpc version=3
Mar 21 12:35:18.039903 systemd[1]: Started cri-containerd-b8479d898e1b25bf297ff40195af3d8ac8f256d65a88089c344cfbb6d36268fb.scope - libcontainer container b8479d898e1b25bf297ff40195af3d8ac8f256d65a88089c344cfbb6d36268fb.
Mar 21 12:35:18.042566 systemd[1]: Started cri-containerd-911406cb2688b7bdb79314087af928349501e069197ca5e8a2e1be4a21b93675.scope - libcontainer container 911406cb2688b7bdb79314087af928349501e069197ca5e8a2e1be4a21b93675.
Mar 21 12:35:18.048850 systemd[1]: Started cri-containerd-70b6721e246e154c728ef62245a0d7c4921039c19bd77089a99ba85afdbf386b.scope - libcontainer container 70b6721e246e154c728ef62245a0d7c4921039c19bd77089a99ba85afdbf386b.
Mar 21 12:35:18.085550 containerd[1489]: time="2025-03-21T12:35:18.085494840Z" level=info msg="StartContainer for \"b8479d898e1b25bf297ff40195af3d8ac8f256d65a88089c344cfbb6d36268fb\" returns successfully"
Mar 21 12:35:18.092010 containerd[1489]: time="2025-03-21T12:35:18.091961673Z" level=info msg="StartContainer for \"911406cb2688b7bdb79314087af928349501e069197ca5e8a2e1be4a21b93675\" returns successfully"
Mar 21 12:35:18.126539 containerd[1489]: time="2025-03-21T12:35:18.126158640Z" level=info msg="StartContainer for \"70b6721e246e154c728ef62245a0d7c4921039c19bd77089a99ba85afdbf386b\" returns successfully"
Mar 21 12:35:18.393000 kubelet[2349]: I0321 12:35:18.392376 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:35:18.910757 kubelet[2349]: E0321 12:35:18.910062 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:18.911808 kubelet[2349]: E0321 12:35:18.911783 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:18.914174 kubelet[2349]: E0321 12:35:18.914150 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:19.916194 kubelet[2349]: E0321 12:35:19.916164 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:20.029965 kubelet[2349]: E0321 12:35:20.029919 2349 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 21 12:35:20.102892 kubelet[2349]: I0321 12:35:20.102818 2349 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 21 12:35:20.863730 kubelet[2349]: I0321 12:35:20.863678 2349 apiserver.go:52] "Watching apiserver"
Mar 21 12:35:20.884048 kubelet[2349]: I0321 12:35:20.884002 2349 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 21 12:35:21.960166 systemd[1]: Reload requested from client PID 2628 ('systemctl') (unit session-7.scope)...
Mar 21 12:35:21.960183 systemd[1]: Reloading...
Mar 21 12:35:22.028894 zram_generator::config[2672]: No configuration found.
Mar 21 12:35:22.132918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 12:35:22.217488 systemd[1]: Reloading finished in 257 ms.
Mar 21 12:35:22.239267 kubelet[2349]: E0321 12:35:22.238978 2349 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.182ed18ebfe4e529 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-21 12:35:16.873970985 +0000 UTC m=+0.967365124,LastTimestamp:2025-03-21 12:35:16.873970985 +0000 UTC m=+0.967365124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 21 12:35:22.239129 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:35:22.244605 systemd[1]: kubelet.service: Deactivated successfully.
Mar 21 12:35:22.245625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:35:22.245682 systemd[1]: kubelet.service: Consumed 1.258s CPU time, 116.7M memory peak.
Mar 21 12:35:22.247374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:35:22.380616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:35:22.385022 (kubelet)[2714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 21 12:35:22.432681 kubelet[2714]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 21 12:35:22.432681 kubelet[2714]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 21 12:35:22.432681 kubelet[2714]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 21 12:35:22.433021 kubelet[2714]: I0321 12:35:22.432718 2714 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 21 12:35:22.436763 kubelet[2714]: I0321 12:35:22.436544 2714 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 21 12:35:22.436763 kubelet[2714]: I0321 12:35:22.436567 2714 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 21 12:35:22.436763 kubelet[2714]: I0321 12:35:22.436719 2714 server.go:927] "Client rotation is on, will bootstrap in background" Mar 21 12:35:22.438022 kubelet[2714]: I0321 12:35:22.438000 2714 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 21 12:35:22.439393 kubelet[2714]: I0321 12:35:22.439280 2714 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 21 12:35:22.444036 kubelet[2714]: I0321 12:35:22.444017 2714 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 21 12:35:22.444248 kubelet[2714]: I0321 12:35:22.444219 2714 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 21 12:35:22.444393 kubelet[2714]: I0321 12:35:22.444249 2714 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManager
ReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 21 12:35:22.444476 kubelet[2714]: I0321 12:35:22.444399 2714 topology_manager.go:138] "Creating topology manager with none policy" Mar 21 12:35:22.444476 kubelet[2714]: I0321 12:35:22.444408 2714 container_manager_linux.go:301] "Creating device plugin manager" Mar 21 12:35:22.444476 kubelet[2714]: I0321 12:35:22.444438 2714 state_mem.go:36] "Initialized new in-memory state store" Mar 21 12:35:22.444541 kubelet[2714]: I0321 12:35:22.444532 2714 kubelet.go:400] "Attempting to sync node with API server" Mar 21 12:35:22.444562 kubelet[2714]: I0321 12:35:22.444543 2714 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 21 12:35:22.444581 kubelet[2714]: I0321 12:35:22.444571 2714 kubelet.go:312] "Adding apiserver pod source" Mar 21 12:35:22.444604 kubelet[2714]: I0321 12:35:22.444583 2714 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 21 12:35:22.445209 kubelet[2714]: I0321 12:35:22.445111 2714 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 21 12:35:22.445344 kubelet[2714]: I0321 12:35:22.445328 2714 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 21 12:35:22.445987 kubelet[2714]: I0321 12:35:22.445967 2714 server.go:1264] "Started kubelet" Mar 21 12:35:22.446360 kubelet[2714]: I0321 12:35:22.446314 2714 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 21 12:35:22.446849 kubelet[2714]: I0321 12:35:22.446409 2714 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 21 12:35:22.446849 kubelet[2714]: I0321 12:35:22.446618 2714 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 21 
12:35:22.448680 kubelet[2714]: I0321 12:35:22.448644 2714 server.go:455] "Adding debug handlers to kubelet server" Mar 21 12:35:22.452920 kubelet[2714]: I0321 12:35:22.448779 2714 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 21 12:35:22.456845 kubelet[2714]: I0321 12:35:22.456828 2714 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 21 12:35:22.464489 kubelet[2714]: I0321 12:35:22.463307 2714 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 21 12:35:22.464489 kubelet[2714]: I0321 12:35:22.463683 2714 reconciler.go:26] "Reconciler: start to sync state" Mar 21 12:35:22.465800 kubelet[2714]: I0321 12:35:22.465521 2714 factory.go:221] Registration of the systemd container factory successfully Mar 21 12:35:22.465800 kubelet[2714]: I0321 12:35:22.465600 2714 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 21 12:35:22.467903 kubelet[2714]: E0321 12:35:22.466763 2714 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 21 12:35:22.468094 kubelet[2714]: I0321 12:35:22.468046 2714 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 21 12:35:22.469263 kubelet[2714]: I0321 12:35:22.468647 2714 factory.go:221] Registration of the containerd container factory successfully Mar 21 12:35:22.470285 kubelet[2714]: I0321 12:35:22.470244 2714 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 21 12:35:22.470285 kubelet[2714]: I0321 12:35:22.470287 2714 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 21 12:35:22.470285 kubelet[2714]: I0321 12:35:22.470302 2714 kubelet.go:2337] "Starting kubelet main sync loop" Mar 21 12:35:22.470429 kubelet[2714]: E0321 12:35:22.470337 2714 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 21 12:35:22.495249 kubelet[2714]: I0321 12:35:22.495223 2714 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 21 12:35:22.495726 kubelet[2714]: I0321 12:35:22.495437 2714 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 21 12:35:22.495726 kubelet[2714]: I0321 12:35:22.495465 2714 state_mem.go:36] "Initialized new in-memory state store" Mar 21 12:35:22.495726 kubelet[2714]: I0321 12:35:22.495616 2714 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 21 12:35:22.495726 kubelet[2714]: I0321 12:35:22.495626 2714 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 21 12:35:22.495726 kubelet[2714]: I0321 12:35:22.495644 2714 policy_none.go:49] "None policy: Start" Mar 21 12:35:22.496578 kubelet[2714]: I0321 12:35:22.496342 2714 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 21 12:35:22.496578 kubelet[2714]: I0321 12:35:22.496372 2714 state_mem.go:35] "Initializing new in-memory state store" Mar 21 12:35:22.496578 kubelet[2714]: I0321 12:35:22.496491 2714 state_mem.go:75] "Updated machine memory state" Mar 21 12:35:22.502388 kubelet[2714]: I0321 12:35:22.502290 2714 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 21 12:35:22.502487 kubelet[2714]: I0321 12:35:22.502452 2714 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 21 12:35:22.502562 kubelet[2714]: I0321 12:35:22.502549 2714 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 21 12:35:22.562794 kubelet[2714]: I0321 12:35:22.561205 2714 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 21 12:35:22.566332 kubelet[2714]: I0321 12:35:22.566163 2714 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 21 12:35:22.566332 kubelet[2714]: I0321 12:35:22.566228 2714 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 21 12:35:22.570468 kubelet[2714]: I0321 12:35:22.570435 2714 topology_manager.go:215] "Topology Admit Handler" podUID="67eb3feb5e4f77626ae6242d7064bc42" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 21 12:35:22.570550 kubelet[2714]: I0321 12:35:22.570523 2714 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 21 12:35:22.570576 kubelet[2714]: I0321 12:35:22.570560 2714 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 21 12:35:22.664928 kubelet[2714]: I0321 12:35:22.664879 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67eb3feb5e4f77626ae6242d7064bc42-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"67eb3feb5e4f77626ae6242d7064bc42\") " pod="kube-system/kube-apiserver-localhost" Mar 21 12:35:22.665036 kubelet[2714]: I0321 12:35:22.664934 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:35:22.665036 kubelet[2714]: I0321 12:35:22.664971 2714 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:35:22.665036 kubelet[2714]: I0321 12:35:22.665011 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 21 12:35:22.665036 kubelet[2714]: I0321 12:35:22.665026 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67eb3feb5e4f77626ae6242d7064bc42-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"67eb3feb5e4f77626ae6242d7064bc42\") " pod="kube-system/kube-apiserver-localhost" Mar 21 12:35:22.665147 kubelet[2714]: I0321 12:35:22.665042 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67eb3feb5e4f77626ae6242d7064bc42-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"67eb3feb5e4f77626ae6242d7064bc42\") " pod="kube-system/kube-apiserver-localhost" Mar 21 12:35:22.665147 kubelet[2714]: I0321 12:35:22.665057 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:35:22.665147 kubelet[2714]: I0321 12:35:22.665073 
2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:35:22.665147 kubelet[2714]: I0321 12:35:22.665088 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:35:22.889592 kubelet[2714]: E0321 12:35:22.889498 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:35:22.891881 kubelet[2714]: E0321 12:35:22.891853 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:35:22.892023 kubelet[2714]: E0321 12:35:22.892005 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:35:22.969575 sudo[2750]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 21 12:35:22.969887 sudo[2750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 21 12:35:23.408984 sudo[2750]: pam_unix(sudo:session): session closed for user root Mar 21 12:35:23.445918 kubelet[2714]: I0321 12:35:23.445886 2714 apiserver.go:52] "Watching apiserver" Mar 21 12:35:23.463827 kubelet[2714]: I0321 12:35:23.463789 2714 desired_state_of_world_populator.go:157] 
"Finished populating initial desired state of world" Mar 21 12:35:23.483658 kubelet[2714]: E0321 12:35:23.483611 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:35:23.485156 kubelet[2714]: E0321 12:35:23.484263 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:35:23.489167 kubelet[2714]: E0321 12:35:23.489117 2714 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 21 12:35:23.489608 kubelet[2714]: E0321 12:35:23.489576 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:35:23.514560 kubelet[2714]: I0321 12:35:23.514487 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.511577606 podStartE2EDuration="1.511577606s" podCreationTimestamp="2025-03-21 12:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:35:23.511428721 +0000 UTC m=+1.123544422" watchObservedRunningTime="2025-03-21 12:35:23.511577606 +0000 UTC m=+1.123693307" Mar 21 12:35:23.514925 kubelet[2714]: I0321 12:35:23.514788 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.51477806 podStartE2EDuration="1.51477806s" podCreationTimestamp="2025-03-21 12:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:35:23.503111882 +0000 UTC 
m=+1.115227583" watchObservedRunningTime="2025-03-21 12:35:23.51477806 +0000 UTC m=+1.126893761" Mar 21 12:35:23.529180 kubelet[2714]: I0321 12:35:23.528996 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.528984095 podStartE2EDuration="1.528984095s" podCreationTimestamp="2025-03-21 12:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:35:23.518555319 +0000 UTC m=+1.130671020" watchObservedRunningTime="2025-03-21 12:35:23.528984095 +0000 UTC m=+1.141099796" Mar 21 12:35:24.485882 kubelet[2714]: E0321 12:35:24.485853 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:35:25.331848 sudo[1692]: pam_unix(sudo:session): session closed for user root Mar 21 12:35:25.334248 sshd[1691]: Connection closed by 10.0.0.1 port 52554 Mar 21 12:35:25.334650 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Mar 21 12:35:25.338715 systemd[1]: sshd@6-10.0.0.98:22-10.0.0.1:52554.service: Deactivated successfully. Mar 21 12:35:25.342092 systemd[1]: session-7.scope: Deactivated successfully. Mar 21 12:35:25.342991 systemd[1]: session-7.scope: Consumed 7.100s CPU time, 277.7M memory peak. Mar 21 12:35:25.344330 systemd-logind[1470]: Session 7 logged out. Waiting for processes to exit. Mar 21 12:35:25.345259 systemd-logind[1470]: Removed session 7. 
Mar 21 12:35:25.825016 kubelet[2714]: E0321 12:35:25.824879 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:35:27.941497 kubelet[2714]: E0321 12:35:27.941457 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:35:30.282587 kubelet[2714]: E0321 12:35:30.282544 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:35:30.493336 kubelet[2714]: E0321 12:35:30.493284 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:35:35.832256 kubelet[2714]: E0321 12:35:35.832119 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:35:36.695358 kubelet[2714]: I0321 12:35:36.695329 2714 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 21 12:35:36.698212 containerd[1489]: time="2025-03-21T12:35:36.698171077Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 21 12:35:36.698566 kubelet[2714]: I0321 12:35:36.698378 2714 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 21 12:35:37.349923 kubelet[2714]: I0321 12:35:37.349327 2714 topology_manager.go:215] "Topology Admit Handler" podUID="7e42ff63-4048-4b69-b6c8-ca5420e2e6b5" podNamespace="kube-system" podName="kube-proxy-w92fs" Mar 21 12:35:37.360723 systemd[1]: Created slice kubepods-besteffort-pod7e42ff63_4048_4b69_b6c8_ca5420e2e6b5.slice - libcontainer container kubepods-besteffort-pod7e42ff63_4048_4b69_b6c8_ca5420e2e6b5.slice. Mar 21 12:35:37.365064 kubelet[2714]: I0321 12:35:37.364869 2714 topology_manager.go:215] "Topology Admit Handler" podUID="4ee24c3c-a6df-43ce-89d8-011d5512230d" podNamespace="kube-system" podName="cilium-x9l6h" Mar 21 12:35:37.365304 kubelet[2714]: I0321 12:35:37.365271 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e42ff63-4048-4b69-b6c8-ca5420e2e6b5-kube-proxy\") pod \"kube-proxy-w92fs\" (UID: \"7e42ff63-4048-4b69-b6c8-ca5420e2e6b5\") " pod="kube-system/kube-proxy-w92fs" Mar 21 12:35:37.365364 kubelet[2714]: I0321 12:35:37.365303 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e42ff63-4048-4b69-b6c8-ca5420e2e6b5-lib-modules\") pod \"kube-proxy-w92fs\" (UID: \"7e42ff63-4048-4b69-b6c8-ca5420e2e6b5\") " pod="kube-system/kube-proxy-w92fs" Mar 21 12:35:37.365364 kubelet[2714]: I0321 12:35:37.365357 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e42ff63-4048-4b69-b6c8-ca5420e2e6b5-xtables-lock\") pod \"kube-proxy-w92fs\" (UID: \"7e42ff63-4048-4b69-b6c8-ca5420e2e6b5\") " pod="kube-system/kube-proxy-w92fs" Mar 21 12:35:37.365409 kubelet[2714]: I0321 12:35:37.365379 2714 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxrdr\" (UniqueName: \"kubernetes.io/projected/7e42ff63-4048-4b69-b6c8-ca5420e2e6b5-kube-api-access-cxrdr\") pod \"kube-proxy-w92fs\" (UID: \"7e42ff63-4048-4b69-b6c8-ca5420e2e6b5\") " pod="kube-system/kube-proxy-w92fs" Mar 21 12:35:37.376571 systemd[1]: Created slice kubepods-burstable-pod4ee24c3c_a6df_43ce_89d8_011d5512230d.slice - libcontainer container kubepods-burstable-pod4ee24c3c_a6df_43ce_89d8_011d5512230d.slice. Mar 21 12:35:37.465637 kubelet[2714]: I0321 12:35:37.465580 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-lib-modules\") pod \"cilium-x9l6h\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.465637 kubelet[2714]: I0321 12:35:37.465624 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ee24c3c-a6df-43ce-89d8-011d5512230d-clustermesh-secrets\") pod \"cilium-x9l6h\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.465637 kubelet[2714]: I0321 12:35:37.465646 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-host-proc-sys-net\") pod \"cilium-x9l6h\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.465863 kubelet[2714]: I0321 12:35:37.465665 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-host-proc-sys-kernel\") pod \"cilium-x9l6h\" (UID: 
\"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.465863 kubelet[2714]: I0321 12:35:37.465726 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-hostproc\") pod \"cilium-x9l6h\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.465863 kubelet[2714]: I0321 12:35:37.465758 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ee24c3c-a6df-43ce-89d8-011d5512230d-hubble-tls\") pod \"cilium-x9l6h\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.465863 kubelet[2714]: I0321 12:35:37.465777 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-bpf-maps\") pod \"cilium-x9l6h\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.465863 kubelet[2714]: I0321 12:35:37.465792 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ee24c3c-a6df-43ce-89d8-011d5512230d-cilium-config-path\") pod \"cilium-x9l6h\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.465863 kubelet[2714]: I0321 12:35:37.465817 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-cilium-cgroup\") pod \"cilium-x9l6h\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.465996 kubelet[2714]: I0321 12:35:37.465875 2714 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-etc-cni-netd\") pod \"cilium-x9l6h\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.465996 kubelet[2714]: I0321 12:35:37.465893 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-xtables-lock\") pod \"cilium-x9l6h\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.465996 kubelet[2714]: I0321 12:35:37.465911 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsltj\" (UniqueName: \"kubernetes.io/projected/4ee24c3c-a6df-43ce-89d8-011d5512230d-kube-api-access-hsltj\") pod \"cilium-x9l6h\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.465996 kubelet[2714]: I0321 12:35:37.465946 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-cilium-run\") pod \"cilium-x9l6h\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.465996 kubelet[2714]: I0321 12:35:37.465966 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-cni-path\") pod \"cilium-x9l6h\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " pod="kube-system/cilium-x9l6h" Mar 21 12:35:37.476636 kubelet[2714]: E0321 12:35:37.476560 2714 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 21 12:35:37.476636 
kubelet[2714]: E0321 12:35:37.476590 2714 projected.go:200] Error preparing data for projected volume kube-api-access-cxrdr for pod kube-system/kube-proxy-w92fs: configmap "kube-root-ca.crt" not found
Mar 21 12:35:37.476789 kubelet[2714]: E0321 12:35:37.476644 2714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e42ff63-4048-4b69-b6c8-ca5420e2e6b5-kube-api-access-cxrdr podName:7e42ff63-4048-4b69-b6c8-ca5420e2e6b5 nodeName:}" failed. No retries permitted until 2025-03-21 12:35:37.976626424 +0000 UTC m=+15.588742125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cxrdr" (UniqueName: "kubernetes.io/projected/7e42ff63-4048-4b69-b6c8-ca5420e2e6b5-kube-api-access-cxrdr") pod "kube-proxy-w92fs" (UID: "7e42ff63-4048-4b69-b6c8-ca5420e2e6b5") : configmap "kube-root-ca.crt" not found
Mar 21 12:35:37.577513 kubelet[2714]: E0321 12:35:37.577481 2714 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 21 12:35:37.577513 kubelet[2714]: E0321 12:35:37.577514 2714 projected.go:200] Error preparing data for projected volume kube-api-access-hsltj for pod kube-system/cilium-x9l6h: configmap "kube-root-ca.crt" not found
Mar 21 12:35:37.577731 kubelet[2714]: E0321 12:35:37.577576 2714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4ee24c3c-a6df-43ce-89d8-011d5512230d-kube-api-access-hsltj podName:4ee24c3c-a6df-43ce-89d8-011d5512230d nodeName:}" failed. No retries permitted until 2025-03-21 12:35:38.077549131 +0000 UTC m=+15.689664832 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hsltj" (UniqueName: "kubernetes.io/projected/4ee24c3c-a6df-43ce-89d8-011d5512230d-kube-api-access-hsltj") pod "cilium-x9l6h" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d") : configmap "kube-root-ca.crt" not found
Mar 21 12:35:37.795516 kubelet[2714]: I0321 12:35:37.795467 2714 topology_manager.go:215] "Topology Admit Handler" podUID="dbfc05fd-6caf-4011-b1b5-69d435f3baeb" podNamespace="kube-system" podName="cilium-operator-599987898-g8s4b"
Mar 21 12:35:37.812693 systemd[1]: Created slice kubepods-besteffort-poddbfc05fd_6caf_4011_b1b5_69d435f3baeb.slice - libcontainer container kubepods-besteffort-poddbfc05fd_6caf_4011_b1b5_69d435f3baeb.slice.
Mar 21 12:35:37.868216 kubelet[2714]: I0321 12:35:37.868172 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trcvd\" (UniqueName: \"kubernetes.io/projected/dbfc05fd-6caf-4011-b1b5-69d435f3baeb-kube-api-access-trcvd\") pod \"cilium-operator-599987898-g8s4b\" (UID: \"dbfc05fd-6caf-4011-b1b5-69d435f3baeb\") " pod="kube-system/cilium-operator-599987898-g8s4b"
Mar 21 12:35:37.868216 kubelet[2714]: I0321 12:35:37.868215 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbfc05fd-6caf-4011-b1b5-69d435f3baeb-cilium-config-path\") pod \"cilium-operator-599987898-g8s4b\" (UID: \"dbfc05fd-6caf-4011-b1b5-69d435f3baeb\") " pod="kube-system/cilium-operator-599987898-g8s4b"
Mar 21 12:35:37.948998 kubelet[2714]: E0321 12:35:37.948910 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:38.116205 kubelet[2714]: E0321 12:35:38.115853 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:38.118467 containerd[1489]: time="2025-03-21T12:35:38.118416454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-g8s4b,Uid:dbfc05fd-6caf-4011-b1b5-69d435f3baeb,Namespace:kube-system,Attempt:0,}"
Mar 21 12:35:38.160527 containerd[1489]: time="2025-03-21T12:35:38.160485948Z" level=info msg="connecting to shim 307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a" address="unix:///run/containerd/s/0e4bf208e10e12022b5c56accc7a6b1bdad53708bab4a82bd15e8baed068aee5" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:35:38.188004 systemd[1]: Started cri-containerd-307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a.scope - libcontainer container 307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a.
Mar 21 12:35:38.224337 containerd[1489]: time="2025-03-21T12:35:38.224193602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-g8s4b,Uid:dbfc05fd-6caf-4011-b1b5-69d435f3baeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a\""
Mar 21 12:35:38.226720 kubelet[2714]: E0321 12:35:38.226691 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:38.229210 containerd[1489]: time="2025-03-21T12:35:38.229166272Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 21 12:35:38.270524 kubelet[2714]: E0321 12:35:38.270482 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:38.271017 containerd[1489]: time="2025-03-21T12:35:38.270971482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w92fs,Uid:7e42ff63-4048-4b69-b6c8-ca5420e2e6b5,Namespace:kube-system,Attempt:0,}"
Mar 21 12:35:38.280462 kubelet[2714]: E0321 12:35:38.280434 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:38.280996 containerd[1489]: time="2025-03-21T12:35:38.280815430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x9l6h,Uid:4ee24c3c-a6df-43ce-89d8-011d5512230d,Namespace:kube-system,Attempt:0,}"
Mar 21 12:35:38.286618 containerd[1489]: time="2025-03-21T12:35:38.286569510Z" level=info msg="connecting to shim 54acdec7d4098e48175e69ce03b385e89772be7c3a8b335a1b3eaad20a0aaf84" address="unix:///run/containerd/s/ccd5c98757701a56b7d60ec0c93f52825de390a409103dadf52ce9570206744e" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:35:38.294227 containerd[1489]: time="2025-03-21T12:35:38.294194189Z" level=info msg="connecting to shim 4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8" address="unix:///run/containerd/s/8fedf54fa838df0c90b9bfdd39baacffe8a78c4dafa9e04de79bb6d47be54ffe" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:35:38.309893 systemd[1]: Started cri-containerd-54acdec7d4098e48175e69ce03b385e89772be7c3a8b335a1b3eaad20a0aaf84.scope - libcontainer container 54acdec7d4098e48175e69ce03b385e89772be7c3a8b335a1b3eaad20a0aaf84.
Mar 21 12:35:38.313100 systemd[1]: Started cri-containerd-4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8.scope - libcontainer container 4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8.
Mar 21 12:35:38.335659 containerd[1489]: time="2025-03-21T12:35:38.335621598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w92fs,Uid:7e42ff63-4048-4b69-b6c8-ca5420e2e6b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"54acdec7d4098e48175e69ce03b385e89772be7c3a8b335a1b3eaad20a0aaf84\""
Mar 21 12:35:38.336182 kubelet[2714]: E0321 12:35:38.336162 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:38.339001 containerd[1489]: time="2025-03-21T12:35:38.338964267Z" level=info msg="CreateContainer within sandbox \"54acdec7d4098e48175e69ce03b385e89772be7c3a8b335a1b3eaad20a0aaf84\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 21 12:35:38.339611 containerd[1489]: time="2025-03-21T12:35:38.339581864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x9l6h,Uid:4ee24c3c-a6df-43ce-89d8-011d5512230d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\""
Mar 21 12:35:38.340579 kubelet[2714]: E0321 12:35:38.340388 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:38.386815 containerd[1489]: time="2025-03-21T12:35:38.386703374Z" level=info msg="Container bdadcdfbcafcf66779f05bed302407de257b097634f3b3564e465c597a1ecd23: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:35:38.393183 containerd[1489]: time="2025-03-21T12:35:38.393132630Z" level=info msg="CreateContainer within sandbox \"54acdec7d4098e48175e69ce03b385e89772be7c3a8b335a1b3eaad20a0aaf84\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bdadcdfbcafcf66779f05bed302407de257b097634f3b3564e465c597a1ecd23\""
Mar 21 12:35:38.395797 containerd[1489]: time="2025-03-21T12:35:38.395764792Z" level=info msg="StartContainer for \"bdadcdfbcafcf66779f05bed302407de257b097634f3b3564e465c597a1ecd23\""
Mar 21 12:35:38.397123 containerd[1489]: time="2025-03-21T12:35:38.397092456Z" level=info msg="connecting to shim bdadcdfbcafcf66779f05bed302407de257b097634f3b3564e465c597a1ecd23" address="unix:///run/containerd/s/ccd5c98757701a56b7d60ec0c93f52825de390a409103dadf52ce9570206744e" protocol=ttrpc version=3
Mar 21 12:35:38.415917 systemd[1]: Started cri-containerd-bdadcdfbcafcf66779f05bed302407de257b097634f3b3564e465c597a1ecd23.scope - libcontainer container bdadcdfbcafcf66779f05bed302407de257b097634f3b3564e465c597a1ecd23.
Mar 21 12:35:38.449913 containerd[1489]: time="2025-03-21T12:35:38.449869615Z" level=info msg="StartContainer for \"bdadcdfbcafcf66779f05bed302407de257b097634f3b3564e465c597a1ecd23\" returns successfully"
Mar 21 12:35:38.516082 kubelet[2714]: E0321 12:35:38.516051 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:39.577685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246332497.mount: Deactivated successfully.
Mar 21 12:35:40.144582 update_engine[1472]: I20250321 12:35:40.144498 1472 update_attempter.cc:509] Updating boot flags...
Mar 21 12:35:40.165830 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3038)
Mar 21 12:35:40.202776 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3102)
Mar 21 12:35:42.500852 kubelet[2714]: I0321 12:35:42.500787 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w92fs" podStartSLOduration=5.500771827 podStartE2EDuration="5.500771827s" podCreationTimestamp="2025-03-21 12:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:35:38.533317862 +0000 UTC m=+16.145433603" watchObservedRunningTime="2025-03-21 12:35:42.500771827 +0000 UTC m=+20.112887568"
Mar 21 12:35:43.424642 containerd[1489]: time="2025-03-21T12:35:43.424558120Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:35:43.425194 containerd[1489]: time="2025-03-21T12:35:43.425127092Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 21 12:35:43.425914 containerd[1489]: time="2025-03-21T12:35:43.425884867Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:35:43.428268 containerd[1489]: time="2025-03-21T12:35:43.428234811Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.199021444s"
Mar 21 12:35:43.428344 containerd[1489]: time="2025-03-21T12:35:43.428274741Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 21 12:35:43.435532 containerd[1489]: time="2025-03-21T12:35:43.433905885Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 21 12:35:43.435532 containerd[1489]: time="2025-03-21T12:35:43.435097041Z" level=info msg="CreateContainer within sandbox \"307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 21 12:35:43.514014 containerd[1489]: time="2025-03-21T12:35:43.513976631Z" level=info msg="Container b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:35:43.519640 containerd[1489]: time="2025-03-21T12:35:43.519511833Z" level=info msg="CreateContainer within sandbox \"307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\""
Mar 21 12:35:43.520139 containerd[1489]: time="2025-03-21T12:35:43.520037635Z" level=info msg="StartContainer for \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\""
Mar 21 12:35:43.522150 containerd[1489]: time="2025-03-21T12:35:43.522123638Z" level=info msg="connecting to shim b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027" address="unix:///run/containerd/s/0e4bf208e10e12022b5c56accc7a6b1bdad53708bab4a82bd15e8baed068aee5" protocol=ttrpc version=3
Mar 21 12:35:43.559891 systemd[1]: Started cri-containerd-b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027.scope - libcontainer container b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027.
Mar 21 12:35:43.590882 containerd[1489]: time="2025-03-21T12:35:43.589350890Z" level=info msg="StartContainer for \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\" returns successfully"
Mar 21 12:35:44.545142 kubelet[2714]: E0321 12:35:44.544939 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:44.553424 kubelet[2714]: I0321 12:35:44.553292 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-g8s4b" podStartSLOduration=2.350542466 podStartE2EDuration="7.553274801s" podCreationTimestamp="2025-03-21 12:35:37 +0000 UTC" firstStartedPulling="2025-03-21 12:35:38.227497619 +0000 UTC m=+15.839613320" lastFinishedPulling="2025-03-21 12:35:43.430229954 +0000 UTC m=+21.042345655" observedRunningTime="2025-03-21 12:35:44.553141572 +0000 UTC m=+22.165257273" watchObservedRunningTime="2025-03-21 12:35:44.553274801 +0000 UTC m=+22.165390502"
Mar 21 12:35:45.548689 kubelet[2714]: E0321 12:35:45.548650 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:49.059281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63761652.mount: Deactivated successfully.
Mar 21 12:35:49.488969 systemd[1]: Started sshd@7-10.0.0.98:22-10.0.0.1:60566.service - OpenSSH per-connection server daemon (10.0.0.1:60566).
Mar 21 12:35:49.556621 sshd[3159]: Accepted publickey for core from 10.0.0.1 port 60566 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:35:49.557857 sshd-session[3159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:35:49.563234 systemd-logind[1470]: New session 8 of user core.
Mar 21 12:35:49.566914 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 21 12:35:49.727385 sshd[3161]: Connection closed by 10.0.0.1 port 60566
Mar 21 12:35:49.729261 sshd-session[3159]: pam_unix(sshd:session): session closed for user core
Mar 21 12:35:49.733138 systemd[1]: sshd@7-10.0.0.98:22-10.0.0.1:60566.service: Deactivated successfully.
Mar 21 12:35:49.735300 systemd[1]: session-8.scope: Deactivated successfully.
Mar 21 12:35:49.737139 systemd-logind[1470]: Session 8 logged out. Waiting for processes to exit.
Mar 21 12:35:49.738648 systemd-logind[1470]: Removed session 8.
Mar 21 12:35:50.655226 containerd[1489]: time="2025-03-21T12:35:50.654834218Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:35:50.655607 containerd[1489]: time="2025-03-21T12:35:50.655558685Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 21 12:35:50.656223 containerd[1489]: time="2025-03-21T12:35:50.656166375Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:35:50.657564 containerd[1489]: time="2025-03-21T12:35:50.657473088Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.223520992s"
Mar 21 12:35:50.657564 containerd[1489]: time="2025-03-21T12:35:50.657507533Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 21 12:35:50.662517 containerd[1489]: time="2025-03-21T12:35:50.659629686Z" level=info msg="CreateContainer within sandbox \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 21 12:35:50.676891 containerd[1489]: time="2025-03-21T12:35:50.676199409Z" level=info msg="Container d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:35:50.677417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount662179570.mount: Deactivated successfully.
Mar 21 12:35:50.679495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount533746375.mount: Deactivated successfully.
Mar 21 12:35:50.682053 containerd[1489]: time="2025-03-21T12:35:50.681965379Z" level=info msg="CreateContainer within sandbox \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\""
Mar 21 12:35:50.683325 containerd[1489]: time="2025-03-21T12:35:50.683092025Z" level=info msg="StartContainer for \"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\""
Mar 21 12:35:50.684154 containerd[1489]: time="2025-03-21T12:35:50.684121417Z" level=info msg="connecting to shim d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b" address="unix:///run/containerd/s/8fedf54fa838df0c90b9bfdd39baacffe8a78c4dafa9e04de79bb6d47be54ffe" protocol=ttrpc version=3
Mar 21 12:35:50.708894 systemd[1]: Started cri-containerd-d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b.scope - libcontainer container d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b.
Mar 21 12:35:50.731560 containerd[1489]: time="2025-03-21T12:35:50.731526486Z" level=info msg="StartContainer for \"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\" returns successfully"
Mar 21 12:35:50.781195 systemd[1]: cri-containerd-d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b.scope: Deactivated successfully.
Mar 21 12:35:50.781462 systemd[1]: cri-containerd-d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b.scope: Consumed 59ms CPU time, 6.7M memory peak, 44K read from disk, 3.1M written to disk.
Mar 21 12:35:50.806499 containerd[1489]: time="2025-03-21T12:35:50.806445252Z" level=info msg="received exit event container_id:\"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\" id:\"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\" pid:3204 exited_at:{seconds:1742560550 nanos:797062469}"
Mar 21 12:35:50.806625 containerd[1489]: time="2025-03-21T12:35:50.806525704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\" id:\"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\" pid:3204 exited_at:{seconds:1742560550 nanos:797062469}"
Mar 21 12:35:51.557987 kubelet[2714]: E0321 12:35:51.556740 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:51.562697 containerd[1489]: time="2025-03-21T12:35:51.562067490Z" level=info msg="CreateContainer within sandbox \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 21 12:35:51.572306 containerd[1489]: time="2025-03-21T12:35:51.572262699Z" level=info msg="Container ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:35:51.576791 containerd[1489]: time="2025-03-21T12:35:51.576741999Z" level=info msg="CreateContainer within sandbox \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\""
Mar 21 12:35:51.577253 containerd[1489]: time="2025-03-21T12:35:51.577132933Z" level=info msg="StartContainer for \"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\""
Mar 21 12:35:51.577980 containerd[1489]: time="2025-03-21T12:35:51.577950486Z" level=info msg="connecting to shim ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70" address="unix:///run/containerd/s/8fedf54fa838df0c90b9bfdd39baacffe8a78c4dafa9e04de79bb6d47be54ffe" protocol=ttrpc version=3
Mar 21 12:35:51.599887 systemd[1]: Started cri-containerd-ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70.scope - libcontainer container ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70.
Mar 21 12:35:51.629534 containerd[1489]: time="2025-03-21T12:35:51.629499371Z" level=info msg="StartContainer for \"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\" returns successfully"
Mar 21 12:35:51.648609 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 21 12:35:51.648855 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:35:51.649016 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 21 12:35:51.650441 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 21 12:35:51.650939 systemd[1]: cri-containerd-ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70.scope: Deactivated successfully.
Mar 21 12:35:51.651502 containerd[1489]: time="2025-03-21T12:35:51.651473129Z" level=info msg="received exit event container_id:\"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\" id:\"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\" pid:3249 exited_at:{seconds:1742560551 nanos:651162206}"
Mar 21 12:35:51.652932 containerd[1489]: time="2025-03-21T12:35:51.652899566Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\" id:\"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\" pid:3249 exited_at:{seconds:1742560551 nanos:651162206}"
Mar 21 12:35:51.668273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b-rootfs.mount: Deactivated successfully.
Mar 21 12:35:51.675881 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:35:52.559584 kubelet[2714]: E0321 12:35:52.559514 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:52.563218 containerd[1489]: time="2025-03-21T12:35:52.562689310Z" level=info msg="CreateContainer within sandbox \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 21 12:35:52.571545 containerd[1489]: time="2025-03-21T12:35:52.570406751Z" level=info msg="Container d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:35:52.580634 containerd[1489]: time="2025-03-21T12:35:52.580589670Z" level=info msg="CreateContainer within sandbox \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\""
Mar 21 12:35:52.585217 containerd[1489]: time="2025-03-21T12:35:52.585178865Z" level=info msg="StartContainer for \"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\""
Mar 21 12:35:52.586526 containerd[1489]: time="2025-03-21T12:35:52.586488715Z" level=info msg="connecting to shim d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc" address="unix:///run/containerd/s/8fedf54fa838df0c90b9bfdd39baacffe8a78c4dafa9e04de79bb6d47be54ffe" protocol=ttrpc version=3
Mar 21 12:35:52.606912 systemd[1]: Started cri-containerd-d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc.scope - libcontainer container d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc.
Mar 21 12:35:52.641199 containerd[1489]: time="2025-03-21T12:35:52.641154959Z" level=info msg="StartContainer for \"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\" returns successfully"
Mar 21 12:35:52.652675 systemd[1]: cri-containerd-d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc.scope: Deactivated successfully.
Mar 21 12:35:52.653914 containerd[1489]: time="2025-03-21T12:35:52.653742110Z" level=info msg="received exit event container_id:\"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\" id:\"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\" pid:3298 exited_at:{seconds:1742560552 nanos:653553726}"
Mar 21 12:35:52.653993 containerd[1489]: time="2025-03-21T12:35:52.653937815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\" id:\"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\" pid:3298 exited_at:{seconds:1742560552 nanos:653553726}"
Mar 21 12:35:52.672559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc-rootfs.mount: Deactivated successfully.
Mar 21 12:35:53.564444 kubelet[2714]: E0321 12:35:53.564379 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:53.568784 containerd[1489]: time="2025-03-21T12:35:53.568217345Z" level=info msg="CreateContainer within sandbox \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 21 12:35:53.577407 containerd[1489]: time="2025-03-21T12:35:53.577357256Z" level=info msg="Container 4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:35:53.590202 containerd[1489]: time="2025-03-21T12:35:53.590072520Z" level=info msg="CreateContainer within sandbox \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\""
Mar 21 12:35:53.591559 containerd[1489]: time="2025-03-21T12:35:53.591527977Z" level=info msg="StartContainer for \"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\""
Mar 21 12:35:53.592484 containerd[1489]: time="2025-03-21T12:35:53.592460171Z" level=info msg="connecting to shim 4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a" address="unix:///run/containerd/s/8fedf54fa838df0c90b9bfdd39baacffe8a78c4dafa9e04de79bb6d47be54ffe" protocol=ttrpc version=3
Mar 21 12:35:53.613932 systemd[1]: Started cri-containerd-4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a.scope - libcontainer container 4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a.
Mar 21 12:35:53.640393 systemd[1]: cri-containerd-4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a.scope: Deactivated successfully.
Mar 21 12:35:53.649607 containerd[1489]: time="2025-03-21T12:35:53.649559148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\" id:\"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\" pid:3336 exited_at:{seconds:1742560553 nanos:641085638}"
Mar 21 12:35:53.649920 containerd[1489]: time="2025-03-21T12:35:53.649682443Z" level=info msg="StartContainer for \"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\" returns successfully"
Mar 21 12:35:53.649920 containerd[1489]: time="2025-03-21T12:35:53.649814579Z" level=info msg="received exit event container_id:\"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\" id:\"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\" pid:3336 exited_at:{seconds:1742560553 nanos:641085638}"
Mar 21 12:35:53.667269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a-rootfs.mount: Deactivated successfully.
Mar 21 12:35:54.570465 kubelet[2714]: E0321 12:35:54.569572 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:54.572933 containerd[1489]: time="2025-03-21T12:35:54.572890500Z" level=info msg="CreateContainer within sandbox \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 21 12:35:54.582757 containerd[1489]: time="2025-03-21T12:35:54.582382421Z" level=info msg="Container d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:35:54.590008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount381504085.mount: Deactivated successfully.
Mar 21 12:35:54.593799 containerd[1489]: time="2025-03-21T12:35:54.593736914Z" level=info msg="CreateContainer within sandbox \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\""
Mar 21 12:35:54.594697 containerd[1489]: time="2025-03-21T12:35:54.594477359Z" level=info msg="StartContainer for \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\""
Mar 21 12:35:54.595587 containerd[1489]: time="2025-03-21T12:35:54.595532599Z" level=info msg="connecting to shim d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961" address="unix:///run/containerd/s/8fedf54fa838df0c90b9bfdd39baacffe8a78c4dafa9e04de79bb6d47be54ffe" protocol=ttrpc version=3
Mar 21 12:35:54.617933 systemd[1]: Started cri-containerd-d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961.scope - libcontainer container d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961.
Mar 21 12:35:54.652180 containerd[1489]: time="2025-03-21T12:35:54.652142247Z" level=info msg="StartContainer for \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\" returns successfully"
Mar 21 12:35:54.743168 systemd[1]: Started sshd@8-10.0.0.98:22-10.0.0.1:33560.service - OpenSSH per-connection server daemon (10.0.0.1:33560).
Mar 21 12:35:54.769442 containerd[1489]: time="2025-03-21T12:35:54.768065370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\" id:\"63a8c23657b78a494fe828fe010379f8445b767f26e1df337d8eabd0ec54062f\" pid:3404 exited_at:{seconds:1742560554 nanos:766455707}"
Mar 21 12:35:54.823993 sshd[3422]: Accepted publickey for core from 10.0.0.1 port 33560 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:35:54.825362 sshd-session[3422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:35:54.831154 systemd-logind[1470]: New session 9 of user core.
Mar 21 12:35:54.842900 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 21 12:35:54.843623 kubelet[2714]: I0321 12:35:54.843587 2714 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 21 12:35:54.870085 kubelet[2714]: I0321 12:35:54.870040 2714 topology_manager.go:215] "Topology Admit Handler" podUID="33eb328f-62bc-4eb7-b619-1793d03ad0b6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6v6z9"
Mar 21 12:35:54.870480 kubelet[2714]: I0321 12:35:54.870206 2714 topology_manager.go:215] "Topology Admit Handler" podUID="4949f24b-9bc0-412b-b126-f660f272abaa" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9xz5w"
Mar 21 12:35:54.879142 systemd[1]: Created slice kubepods-burstable-pod33eb328f_62bc_4eb7_b619_1793d03ad0b6.slice - libcontainer container kubepods-burstable-pod33eb328f_62bc_4eb7_b619_1793d03ad0b6.slice.
Mar 21 12:35:54.884754 systemd[1]: Created slice kubepods-burstable-pod4949f24b_9bc0_412b_b126_f660f272abaa.slice - libcontainer container kubepods-burstable-pod4949f24b_9bc0_412b_b126_f660f272abaa.slice.
Mar 21 12:35:54.888215 kubelet[2714]: I0321 12:35:54.888183 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4th7h\" (UniqueName: \"kubernetes.io/projected/4949f24b-9bc0-412b-b126-f660f272abaa-kube-api-access-4th7h\") pod \"coredns-7db6d8ff4d-9xz5w\" (UID: \"4949f24b-9bc0-412b-b126-f660f272abaa\") " pod="kube-system/coredns-7db6d8ff4d-9xz5w"
Mar 21 12:35:54.888330 kubelet[2714]: I0321 12:35:54.888225 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4949f24b-9bc0-412b-b126-f660f272abaa-config-volume\") pod \"coredns-7db6d8ff4d-9xz5w\" (UID: \"4949f24b-9bc0-412b-b126-f660f272abaa\") " pod="kube-system/coredns-7db6d8ff4d-9xz5w"
Mar 21 12:35:54.888330 kubelet[2714]: I0321 12:35:54.888247 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33eb328f-62bc-4eb7-b619-1793d03ad0b6-config-volume\") pod \"coredns-7db6d8ff4d-6v6z9\" (UID: \"33eb328f-62bc-4eb7-b619-1793d03ad0b6\") " pod="kube-system/coredns-7db6d8ff4d-6v6z9"
Mar 21 12:35:54.888330 kubelet[2714]: I0321 12:35:54.888263 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjsb8\" (UniqueName: \"kubernetes.io/projected/33eb328f-62bc-4eb7-b619-1793d03ad0b6-kube-api-access-pjsb8\") pod \"coredns-7db6d8ff4d-6v6z9\" (UID: \"33eb328f-62bc-4eb7-b619-1793d03ad0b6\") " pod="kube-system/coredns-7db6d8ff4d-6v6z9"
Mar 21 12:35:54.974363 sshd[3433]: Connection closed by 10.0.0.1 port 33560
Mar 21 12:35:54.974955 sshd-session[3422]: pam_unix(sshd:session): session closed for user core
Mar 21 12:35:54.979450 systemd[1]: sshd@8-10.0.0.98:22-10.0.0.1:33560.service: Deactivated successfully.
Mar 21 12:35:54.981789 systemd[1]: session-9.scope: Deactivated successfully.
Mar 21 12:35:54.982648 systemd-logind[1470]: Session 9 logged out. Waiting for processes to exit.
Mar 21 12:35:54.983527 systemd-logind[1470]: Removed session 9.
Mar 21 12:35:55.183141 kubelet[2714]: E0321 12:35:55.183085 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:55.184278 containerd[1489]: time="2025-03-21T12:35:55.184167338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6v6z9,Uid:33eb328f-62bc-4eb7-b619-1793d03ad0b6,Namespace:kube-system,Attempt:0,}"
Mar 21 12:35:55.187626 kubelet[2714]: E0321 12:35:55.187602 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:55.191709 containerd[1489]: time="2025-03-21T12:35:55.191529484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9xz5w,Uid:4949f24b-9bc0-412b-b126-f660f272abaa,Namespace:kube-system,Attempt:0,}"
Mar 21 12:35:55.576824 kubelet[2714]: E0321 12:35:55.576363 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:55.593185 kubelet[2714]: I0321 12:35:55.592563 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x9l6h" podStartSLOduration=6.275089703 podStartE2EDuration="18.592547185s" podCreationTimestamp="2025-03-21 12:35:37 +0000 UTC" firstStartedPulling="2025-03-21 12:35:38.340961985 +0000 UTC m=+15.953077686" lastFinishedPulling="2025-03-21 12:35:50.658419467 +0000 UTC m=+28.270535168" observedRunningTime="2025-03-21 12:35:55.59090185 +0000 UTC m=+33.203017591" watchObservedRunningTime="2025-03-21 12:35:55.592547185 +0000 UTC m=+33.204662886"
Mar 21 12:35:56.578130 kubelet[2714]: E0321 12:35:56.578089 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:56.951655 systemd-networkd[1431]: cilium_host: Link UP
Mar 21 12:35:56.951805 systemd-networkd[1431]: cilium_net: Link UP
Mar 21 12:35:56.951809 systemd-networkd[1431]: cilium_net: Gained carrier
Mar 21 12:35:56.951936 systemd-networkd[1431]: cilium_host: Gained carrier
Mar 21 12:35:56.952680 systemd-networkd[1431]: cilium_host: Gained IPv6LL
Mar 21 12:35:57.042627 systemd-networkd[1431]: cilium_vxlan: Link UP
Mar 21 12:35:57.042637 systemd-networkd[1431]: cilium_vxlan: Gained carrier
Mar 21 12:35:57.350867 kernel: NET: Registered PF_ALG protocol family
Mar 21 12:35:57.541924 systemd-networkd[1431]: cilium_net: Gained IPv6LL
Mar 21 12:35:57.580515 kubelet[2714]: E0321 12:35:57.580454 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:57.928246 systemd-networkd[1431]: lxc_health: Link UP
Mar 21 12:35:57.934346 systemd-networkd[1431]: lxc_health: Gained carrier
Mar 21 12:35:58.295623 kernel: eth0: renamed from tmp9dd1c
Mar 21 12:35:58.297942 systemd-networkd[1431]: lxc32faf9b36d04: Link UP
Mar 21 12:35:58.303670 systemd-networkd[1431]: lxc145f4c7ff0ac: Link UP
Mar 21 12:35:58.312338 systemd-networkd[1431]: lxc32faf9b36d04: Gained carrier
Mar 21 12:35:58.318798 kernel: eth0: renamed from tmp79a3a
Mar 21 12:35:58.325229 systemd-networkd[1431]: lxc145f4c7ff0ac: Gained carrier
Mar 21 12:35:58.581843 kubelet[2714]: E0321 12:35:58.581720 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:58.885890 systemd-networkd[1431]: cilium_vxlan: Gained IPv6LL
Mar 21 12:35:59.398873 systemd-networkd[1431]: lxc32faf9b36d04: Gained IPv6LL
Mar 21 12:35:59.584903 kubelet[2714]: E0321 12:35:59.584707 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:59.909886 systemd-networkd[1431]: lxc_health: Gained IPv6LL
Mar 21 12:35:59.989353 systemd[1]: Started sshd@9-10.0.0.98:22-10.0.0.1:33566.service - OpenSSH per-connection server daemon (10.0.0.1:33566).
Mar 21 12:36:00.057855 sshd[3900]: Accepted publickey for core from 10.0.0.1 port 33566 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:00.059922 sshd-session[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:00.065126 systemd-logind[1470]: New session 10 of user core.
Mar 21 12:36:00.073982 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 21 12:36:00.166829 systemd-networkd[1431]: lxc145f4c7ff0ac: Gained IPv6LL
Mar 21 12:36:00.234667 sshd[3902]: Connection closed by 10.0.0.1 port 33566
Mar 21 12:36:00.235250 sshd-session[3900]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:00.240252 systemd[1]: sshd@9-10.0.0.98:22-10.0.0.1:33566.service: Deactivated successfully.
Mar 21 12:36:00.244019 systemd[1]: session-10.scope: Deactivated successfully.
Mar 21 12:36:00.245342 systemd-logind[1470]: Session 10 logged out. Waiting for processes to exit.
Mar 21 12:36:00.246279 systemd-logind[1470]: Removed session 10.
Mar 21 12:36:00.586208 kubelet[2714]: E0321 12:36:00.586084 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:01.992686 containerd[1489]: time="2025-03-21T12:36:01.992631291Z" level=info msg="connecting to shim 9dd1cf5d5d9adfc8984c81f26205b24393b7472dda47a3469ba59e413a54f99c" address="unix:///run/containerd/s/1f2cbafcb3bc4495f5f5a8e6e7f6df228f99e4d03562d00f8c141c37d7451987" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:36:01.994928 containerd[1489]: time="2025-03-21T12:36:01.994896667Z" level=info msg="connecting to shim 79a3a54f330cb320e7e6a5ba140e6509f88e7bbf30f8f53223dc8954ea09f92d" address="unix:///run/containerd/s/776bbb143e22085b9de711b9d24fdb1498e05af5dd185a85851774d525ae5de0" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:36:02.024907 systemd[1]: Started cri-containerd-9dd1cf5d5d9adfc8984c81f26205b24393b7472dda47a3469ba59e413a54f99c.scope - libcontainer container 9dd1cf5d5d9adfc8984c81f26205b24393b7472dda47a3469ba59e413a54f99c.
Mar 21 12:36:02.027368 systemd[1]: Started cri-containerd-79a3a54f330cb320e7e6a5ba140e6509f88e7bbf30f8f53223dc8954ea09f92d.scope - libcontainer container 79a3a54f330cb320e7e6a5ba140e6509f88e7bbf30f8f53223dc8954ea09f92d.
Mar 21 12:36:02.040704 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 21 12:36:02.041431 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 21 12:36:02.065259 containerd[1489]: time="2025-03-21T12:36:02.064321125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9xz5w,Uid:4949f24b-9bc0-412b-b126-f660f272abaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"79a3a54f330cb320e7e6a5ba140e6509f88e7bbf30f8f53223dc8954ea09f92d\""
Mar 21 12:36:02.065390 kubelet[2714]: E0321 12:36:02.065200 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:02.067820 containerd[1489]: time="2025-03-21T12:36:02.067770647Z" level=info msg="CreateContainer within sandbox \"79a3a54f330cb320e7e6a5ba140e6509f88e7bbf30f8f53223dc8954ea09f92d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 21 12:36:02.080465 containerd[1489]: time="2025-03-21T12:36:02.079668420Z" level=info msg="Container c155fdebcaec152d90dddc6fdb9e412d776b0b021571db8591dda0e99077aac7: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:36:02.084978 containerd[1489]: time="2025-03-21T12:36:02.084933247Z" level=info msg="CreateContainer within sandbox \"79a3a54f330cb320e7e6a5ba140e6509f88e7bbf30f8f53223dc8954ea09f92d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c155fdebcaec152d90dddc6fdb9e412d776b0b021571db8591dda0e99077aac7\""
Mar 21 12:36:02.085461 containerd[1489]: time="2025-03-21T12:36:02.085440077Z" level=info msg="StartContainer for \"c155fdebcaec152d90dddc6fdb9e412d776b0b021571db8591dda0e99077aac7\""
Mar 21 12:36:02.087796 containerd[1489]: time="2025-03-21T12:36:02.087763412Z" level=info msg="connecting to shim c155fdebcaec152d90dddc6fdb9e412d776b0b021571db8591dda0e99077aac7" address="unix:///run/containerd/s/776bbb143e22085b9de711b9d24fdb1498e05af5dd185a85851774d525ae5de0" protocol=ttrpc version=3
Mar 21 12:36:02.106956 systemd[1]: Started cri-containerd-c155fdebcaec152d90dddc6fdb9e412d776b0b021571db8591dda0e99077aac7.scope - libcontainer container c155fdebcaec152d90dddc6fdb9e412d776b0b021571db8591dda0e99077aac7.
Mar 21 12:36:02.119099 containerd[1489]: time="2025-03-21T12:36:02.118959831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6v6z9,Uid:33eb328f-62bc-4eb7-b619-1793d03ad0b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dd1cf5d5d9adfc8984c81f26205b24393b7472dda47a3469ba59e413a54f99c\""
Mar 21 12:36:02.119856 kubelet[2714]: E0321 12:36:02.119832 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:02.125179 containerd[1489]: time="2025-03-21T12:36:02.125136431Z" level=info msg="CreateContainer within sandbox \"9dd1cf5d5d9adfc8984c81f26205b24393b7472dda47a3469ba59e413a54f99c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 21 12:36:02.134656 containerd[1489]: time="2025-03-21T12:36:02.134593142Z" level=info msg="Container 5d738bef4651ba736353dff70da3d89762a379593eaefc19e314c8eb58e44ef7: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:36:02.140018 containerd[1489]: time="2025-03-21T12:36:02.139891931Z" level=info msg="StartContainer for \"c155fdebcaec152d90dddc6fdb9e412d776b0b021571db8591dda0e99077aac7\" returns successfully"
Mar 21 12:36:02.140952 containerd[1489]: time="2025-03-21T12:36:02.140820945Z" level=info msg="CreateContainer within sandbox \"9dd1cf5d5d9adfc8984c81f26205b24393b7472dda47a3469ba59e413a54f99c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d738bef4651ba736353dff70da3d89762a379593eaefc19e314c8eb58e44ef7\""
Mar 21 12:36:02.141605 containerd[1489]: time="2025-03-21T12:36:02.141579950Z" level=info msg="StartContainer for \"5d738bef4651ba736353dff70da3d89762a379593eaefc19e314c8eb58e44ef7\""
Mar 21 12:36:02.142733 containerd[1489]: time="2025-03-21T12:36:02.142682814Z" level=info msg="connecting to shim 5d738bef4651ba736353dff70da3d89762a379593eaefc19e314c8eb58e44ef7" address="unix:///run/containerd/s/1f2cbafcb3bc4495f5f5a8e6e7f6df228f99e4d03562d00f8c141c37d7451987" protocol=ttrpc version=3
Mar 21 12:36:02.163937 systemd[1]: Started cri-containerd-5d738bef4651ba736353dff70da3d89762a379593eaefc19e314c8eb58e44ef7.scope - libcontainer container 5d738bef4651ba736353dff70da3d89762a379593eaefc19e314c8eb58e44ef7.
Mar 21 12:36:02.204618 containerd[1489]: time="2025-03-21T12:36:02.204582303Z" level=info msg="StartContainer for \"5d738bef4651ba736353dff70da3d89762a379593eaefc19e314c8eb58e44ef7\" returns successfully"
Mar 21 12:36:02.594547 kubelet[2714]: E0321 12:36:02.594473 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:02.603042 kubelet[2714]: I0321 12:36:02.602897 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6v6z9" podStartSLOduration=25.602782358 podStartE2EDuration="25.602782358s" podCreationTimestamp="2025-03-21 12:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:36:02.602717354 +0000 UTC m=+40.214833055" watchObservedRunningTime="2025-03-21 12:36:02.602782358 +0000 UTC m=+40.214898059"
Mar 21 12:36:02.612239 kubelet[2714]: E0321 12:36:02.612201 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:02.629372 kubelet[2714]: I0321 12:36:02.628938 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9xz5w" podStartSLOduration=25.628919722 podStartE2EDuration="25.628919722s" podCreationTimestamp="2025-03-21 12:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:36:02.628859158 +0000 UTC m=+40.240974859" watchObservedRunningTime="2025-03-21 12:36:02.628919722 +0000 UTC m=+40.241035423"
Mar 21 12:36:02.971842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573433362.mount: Deactivated successfully.
Mar 21 12:36:03.613126 kubelet[2714]: E0321 12:36:03.613089 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:03.613894 kubelet[2714]: E0321 12:36:03.613152 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:04.619913 kubelet[2714]: E0321 12:36:04.619887 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:05.249109 systemd[1]: Started sshd@10-10.0.0.98:22-10.0.0.1:37758.service - OpenSSH per-connection server daemon (10.0.0.1:37758).
Mar 21 12:36:05.299755 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 37758 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:05.301198 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:05.304934 systemd-logind[1470]: New session 11 of user core.
Mar 21 12:36:05.323908 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 21 12:36:05.434679 sshd[4092]: Connection closed by 10.0.0.1 port 37758
Mar 21 12:36:05.435032 sshd-session[4090]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:05.437535 systemd[1]: sshd@10-10.0.0.98:22-10.0.0.1:37758.service: Deactivated successfully.
Mar 21 12:36:05.439406 systemd[1]: session-11.scope: Deactivated successfully.
Mar 21 12:36:05.440879 systemd-logind[1470]: Session 11 logged out. Waiting for processes to exit.
Mar 21 12:36:05.441846 systemd-logind[1470]: Removed session 11.
Mar 21 12:36:10.447694 systemd[1]: Started sshd@11-10.0.0.98:22-10.0.0.1:37772.service - OpenSSH per-connection server daemon (10.0.0.1:37772).
Mar 21 12:36:10.497272 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 37772 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:10.498594 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:10.503119 systemd-logind[1470]: New session 12 of user core.
Mar 21 12:36:10.515903 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 21 12:36:10.625815 sshd[4111]: Connection closed by 10.0.0.1 port 37772
Mar 21 12:36:10.626593 sshd-session[4109]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:10.638143 systemd[1]: sshd@11-10.0.0.98:22-10.0.0.1:37772.service: Deactivated successfully.
Mar 21 12:36:10.639683 systemd[1]: session-12.scope: Deactivated successfully.
Mar 21 12:36:10.640441 systemd-logind[1470]: Session 12 logged out. Waiting for processes to exit.
Mar 21 12:36:10.642344 systemd[1]: Started sshd@12-10.0.0.98:22-10.0.0.1:37774.service - OpenSSH per-connection server daemon (10.0.0.1:37774).
Mar 21 12:36:10.643152 systemd-logind[1470]: Removed session 12.
Mar 21 12:36:10.691554 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 37774 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:10.692715 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:10.696673 systemd-logind[1470]: New session 13 of user core.
Mar 21 12:36:10.708881 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 21 12:36:10.856571 sshd[4127]: Connection closed by 10.0.0.1 port 37774
Mar 21 12:36:10.857173 sshd-session[4124]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:10.868836 systemd[1]: sshd@12-10.0.0.98:22-10.0.0.1:37774.service: Deactivated successfully.
Mar 21 12:36:10.871497 systemd[1]: session-13.scope: Deactivated successfully.
Mar 21 12:36:10.873656 systemd-logind[1470]: Session 13 logged out. Waiting for processes to exit.
Mar 21 12:36:10.875431 systemd[1]: Started sshd@13-10.0.0.98:22-10.0.0.1:37786.service - OpenSSH per-connection server daemon (10.0.0.1:37786).
Mar 21 12:36:10.877849 systemd-logind[1470]: Removed session 13.
Mar 21 12:36:10.931297 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 37786 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:10.932528 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:10.937383 systemd-logind[1470]: New session 14 of user core.
Mar 21 12:36:10.949915 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 21 12:36:11.060543 sshd[4141]: Connection closed by 10.0.0.1 port 37786
Mar 21 12:36:11.061001 sshd-session[4138]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:11.064125 systemd[1]: sshd@13-10.0.0.98:22-10.0.0.1:37786.service: Deactivated successfully.
Mar 21 12:36:11.066407 systemd[1]: session-14.scope: Deactivated successfully.
Mar 21 12:36:11.067572 systemd-logind[1470]: Session 14 logged out. Waiting for processes to exit.
Mar 21 12:36:11.068390 systemd-logind[1470]: Removed session 14.
Mar 21 12:36:16.075629 systemd[1]: Started sshd@14-10.0.0.98:22-10.0.0.1:46976.service - OpenSSH per-connection server daemon (10.0.0.1:46976).
Mar 21 12:36:16.115785 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 46976 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:16.117106 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:16.121069 systemd-logind[1470]: New session 15 of user core.
Mar 21 12:36:16.127910 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 21 12:36:16.238610 sshd[4158]: Connection closed by 10.0.0.1 port 46976
Mar 21 12:36:16.238601 sshd-session[4156]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:16.241710 systemd[1]: sshd@14-10.0.0.98:22-10.0.0.1:46976.service: Deactivated successfully.
Mar 21 12:36:16.243640 systemd[1]: session-15.scope: Deactivated successfully.
Mar 21 12:36:16.244388 systemd-logind[1470]: Session 15 logged out. Waiting for processes to exit.
Mar 21 12:36:16.245207 systemd-logind[1470]: Removed session 15.
Mar 21 12:36:21.250114 systemd[1]: Started sshd@15-10.0.0.98:22-10.0.0.1:46984.service - OpenSSH per-connection server daemon (10.0.0.1:46984).
Mar 21 12:36:21.299118 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 46984 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:21.300255 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:21.307092 systemd-logind[1470]: New session 16 of user core.
Mar 21 12:36:21.321486 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 21 12:36:21.435144 sshd[4174]: Connection closed by 10.0.0.1 port 46984
Mar 21 12:36:21.435883 sshd-session[4172]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:21.449878 systemd[1]: sshd@15-10.0.0.98:22-10.0.0.1:46984.service: Deactivated successfully.
Mar 21 12:36:21.451316 systemd[1]: session-16.scope: Deactivated successfully.
Mar 21 12:36:21.452172 systemd-logind[1470]: Session 16 logged out. Waiting for processes to exit.
Mar 21 12:36:21.453886 systemd[1]: Started sshd@16-10.0.0.98:22-10.0.0.1:46986.service - OpenSSH per-connection server daemon (10.0.0.1:46986).
Mar 21 12:36:21.454693 systemd-logind[1470]: Removed session 16.
Mar 21 12:36:21.499358 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 46986 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:21.500494 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:21.504908 systemd-logind[1470]: New session 17 of user core.
Mar 21 12:36:21.508900 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 21 12:36:21.715967 sshd[4190]: Connection closed by 10.0.0.1 port 46986
Mar 21 12:36:21.716657 sshd-session[4187]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:21.724780 systemd[1]: sshd@16-10.0.0.98:22-10.0.0.1:46986.service: Deactivated successfully.
Mar 21 12:36:21.726294 systemd[1]: session-17.scope: Deactivated successfully.
Mar 21 12:36:21.727617 systemd-logind[1470]: Session 17 logged out. Waiting for processes to exit.
Mar 21 12:36:21.728759 systemd[1]: Started sshd@17-10.0.0.98:22-10.0.0.1:46988.service - OpenSSH per-connection server daemon (10.0.0.1:46988).
Mar 21 12:36:21.729685 systemd-logind[1470]: Removed session 17.
Mar 21 12:36:21.781632 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 46988 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:21.782831 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:21.786796 systemd-logind[1470]: New session 18 of user core.
Mar 21 12:36:21.794020 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 21 12:36:23.055450 sshd[4203]: Connection closed by 10.0.0.1 port 46988
Mar 21 12:36:23.056050 sshd-session[4200]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:23.069421 systemd[1]: sshd@17-10.0.0.98:22-10.0.0.1:46988.service: Deactivated successfully.
Mar 21 12:36:23.071374 systemd[1]: session-18.scope: Deactivated successfully.
Mar 21 12:36:23.073153 systemd-logind[1470]: Session 18 logged out. Waiting for processes to exit.
Mar 21 12:36:23.074510 systemd[1]: Started sshd@18-10.0.0.98:22-10.0.0.1:45476.service - OpenSSH per-connection server daemon (10.0.0.1:45476).
Mar 21 12:36:23.079018 systemd-logind[1470]: Removed session 18.
Mar 21 12:36:23.129776 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 45476 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:23.131054 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:23.134806 systemd-logind[1470]: New session 19 of user core.
Mar 21 12:36:23.140886 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 21 12:36:23.347845 sshd[4229]: Connection closed by 10.0.0.1 port 45476
Mar 21 12:36:23.347345 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:23.357977 systemd[1]: sshd@18-10.0.0.98:22-10.0.0.1:45476.service: Deactivated successfully.
Mar 21 12:36:23.359950 systemd[1]: session-19.scope: Deactivated successfully.
Mar 21 12:36:23.360764 systemd-logind[1470]: Session 19 logged out. Waiting for processes to exit.
Mar 21 12:36:23.363617 systemd[1]: Started sshd@19-10.0.0.98:22-10.0.0.1:45478.service - OpenSSH per-connection server daemon (10.0.0.1:45478).
Mar 21 12:36:23.364137 systemd-logind[1470]: Removed session 19.
Mar 21 12:36:23.419260 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 45478 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:23.420968 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:23.426981 systemd-logind[1470]: New session 20 of user core.
Mar 21 12:36:23.436004 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 21 12:36:23.542697 sshd[4243]: Connection closed by 10.0.0.1 port 45478
Mar 21 12:36:23.542185 sshd-session[4240]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:23.545672 systemd[1]: sshd@19-10.0.0.98:22-10.0.0.1:45478.service: Deactivated successfully.
Mar 21 12:36:23.548492 systemd[1]: session-20.scope: Deactivated successfully.
Mar 21 12:36:23.549299 systemd-logind[1470]: Session 20 logged out. Waiting for processes to exit.
Mar 21 12:36:23.550109 systemd-logind[1470]: Removed session 20.
Mar 21 12:36:28.554256 systemd[1]: Started sshd@20-10.0.0.98:22-10.0.0.1:45492.service - OpenSSH per-connection server daemon (10.0.0.1:45492).
Mar 21 12:36:28.605638 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 45492 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:28.606843 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:28.610419 systemd-logind[1470]: New session 21 of user core.
Mar 21 12:36:28.617891 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 21 12:36:28.722596 sshd[4261]: Connection closed by 10.0.0.1 port 45492
Mar 21 12:36:28.722982 sshd-session[4259]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:28.726410 systemd[1]: sshd@20-10.0.0.98:22-10.0.0.1:45492.service: Deactivated successfully.
Mar 21 12:36:28.728225 systemd[1]: session-21.scope: Deactivated successfully.
Mar 21 12:36:28.728895 systemd-logind[1470]: Session 21 logged out. Waiting for processes to exit.
Mar 21 12:36:28.729572 systemd-logind[1470]: Removed session 21.
Mar 21 12:36:33.739246 systemd[1]: Started sshd@21-10.0.0.98:22-10.0.0.1:52340.service - OpenSSH per-connection server daemon (10.0.0.1:52340).
Mar 21 12:36:33.823202 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 52340 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:33.824825 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:33.831021 systemd-logind[1470]: New session 22 of user core.
Mar 21 12:36:33.843953 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 21 12:36:33.966714 sshd[4277]: Connection closed by 10.0.0.1 port 52340
Mar 21 12:36:33.968058 sshd-session[4275]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:33.971526 systemd[1]: sshd@21-10.0.0.98:22-10.0.0.1:52340.service: Deactivated successfully.
Mar 21 12:36:33.974014 systemd[1]: session-22.scope: Deactivated successfully.
Mar 21 12:36:33.975091 systemd-logind[1470]: Session 22 logged out. Waiting for processes to exit.
Mar 21 12:36:33.977414 systemd-logind[1470]: Removed session 22.
Mar 21 12:36:38.980973 systemd[1]: Started sshd@22-10.0.0.98:22-10.0.0.1:52348.service - OpenSSH per-connection server daemon (10.0.0.1:52348).
Mar 21 12:36:39.020587 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 52348 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:39.021726 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:39.025536 systemd-logind[1470]: New session 23 of user core.
Mar 21 12:36:39.036962 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 21 12:36:39.151562 sshd[4295]: Connection closed by 10.0.0.1 port 52348
Mar 21 12:36:39.151897 sshd-session[4293]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:39.154991 systemd[1]: sshd@22-10.0.0.98:22-10.0.0.1:52348.service: Deactivated successfully.
Mar 21 12:36:39.156652 systemd[1]: session-23.scope: Deactivated successfully.
Mar 21 12:36:39.158148 systemd-logind[1470]: Session 23 logged out. Waiting for processes to exit.
Mar 21 12:36:39.158991 systemd-logind[1470]: Removed session 23.
Mar 21 12:36:40.471581 kubelet[2714]: E0321 12:36:40.471548 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:41.472166 kubelet[2714]: E0321 12:36:41.472072 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:44.167431 systemd[1]: Started sshd@23-10.0.0.98:22-10.0.0.1:35368.service - OpenSSH per-connection server daemon (10.0.0.1:35368).
Mar 21 12:36:44.208165 sshd[4308]: Accepted publickey for core from 10.0.0.1 port 35368 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:44.209291 sshd-session[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:44.214378 systemd-logind[1470]: New session 24 of user core.
Mar 21 12:36:44.219895 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 21 12:36:44.327240 sshd[4310]: Connection closed by 10.0.0.1 port 35368
Mar 21 12:36:44.327856 sshd-session[4308]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:44.340085 systemd[1]: sshd@23-10.0.0.98:22-10.0.0.1:35368.service: Deactivated successfully.
Mar 21 12:36:44.341876 systemd[1]: session-24.scope: Deactivated successfully.
Mar 21 12:36:44.343714 systemd-logind[1470]: Session 24 logged out. Waiting for processes to exit.
Mar 21 12:36:44.344905 systemd[1]: Started sshd@24-10.0.0.98:22-10.0.0.1:35372.service - OpenSSH per-connection server daemon (10.0.0.1:35372).
Mar 21 12:36:44.346542 systemd-logind[1470]: Removed session 24.
Mar 21 12:36:44.393204 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 35372 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:36:44.394357 sshd-session[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:44.398279 systemd-logind[1470]: New session 25 of user core.
Mar 21 12:36:44.409900 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 21 12:36:46.238901 containerd[1489]: time="2025-03-21T12:36:46.236315722Z" level=info msg="StopContainer for \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\" with timeout 30 (s)"
Mar 21 12:36:46.238901 containerd[1489]: time="2025-03-21T12:36:46.237296861Z" level=info msg="Stop container \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\" with signal terminated"
Mar 21 12:36:46.268222 systemd[1]: cri-containerd-b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027.scope: Deactivated successfully.
Mar 21 12:36:46.270459 containerd[1489]: time="2025-03-21T12:36:46.270421034Z" level=info msg="received exit event container_id:\"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\" id:\"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\" pid:3123 exited_at:{seconds:1742560606 nanos:269788382}"
Mar 21 12:36:46.270698 containerd[1489]: time="2025-03-21T12:36:46.270544917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\" id:\"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\" pid:3123 exited_at:{seconds:1742560606 nanos:269788382}"
Mar 21 12:36:46.288702 containerd[1489]: time="2025-03-21T12:36:46.288667834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\" id:\"7e660c74eec130868c7cd9f316ff0eb9b3ada6abaac9aada2e8404a5031d330e\" pid:4353 exited_at:{seconds:1742560606 nanos:288429190}"
Mar 21 12:36:46.290717 containerd[1489]: time="2025-03-21T12:36:46.290666554Z" level=info msg="StopContainer for \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\" with timeout 2 (s)"
Mar 21 12:36:46.290999 containerd[1489]: time="2025-03-21T12:36:46.290975680Z" level=info msg="Stop container \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\" with signal terminated"
Mar 21 12:36:46.293655 containerd[1489]: time="2025-03-21T12:36:46.293598011Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 21 12:36:46.293988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027-rootfs.mount: Deactivated successfully.
Mar 21 12:36:46.300915 systemd-networkd[1431]: lxc_health: Link DOWN Mar 21 12:36:46.300924 systemd-networkd[1431]: lxc_health: Lost carrier Mar 21 12:36:46.306873 containerd[1489]: time="2025-03-21T12:36:46.306725950Z" level=info msg="StopContainer for \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\" returns successfully" Mar 21 12:36:46.307664 containerd[1489]: time="2025-03-21T12:36:46.307639528Z" level=info msg="StopPodSandbox for \"307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a\"" Mar 21 12:36:46.307994 containerd[1489]: time="2025-03-21T12:36:46.307818612Z" level=info msg="Container to stop \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 21 12:36:46.313703 systemd[1]: cri-containerd-307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a.scope: Deactivated successfully. Mar 21 12:36:46.315092 systemd[1]: cri-containerd-d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961.scope: Deactivated successfully. Mar 21 12:36:46.315369 systemd[1]: cri-containerd-d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961.scope: Consumed 6.731s CPU time, 126.4M memory peak, 196K read from disk, 12.9M written to disk. 
Mar 21 12:36:46.316352 containerd[1489]: time="2025-03-21T12:36:46.316069375Z" level=info msg="received exit event container_id:\"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\" id:\"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\" pid:3374 exited_at:{seconds:1742560606 nanos:315489763}" Mar 21 12:36:46.316352 containerd[1489]: time="2025-03-21T12:36:46.316317140Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\" id:\"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\" pid:3374 exited_at:{seconds:1742560606 nanos:315489763}" Mar 21 12:36:46.321457 containerd[1489]: time="2025-03-21T12:36:46.321429840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a\" id:\"307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a\" pid:2828 exit_status:137 exited_at:{seconds:1742560606 nanos:320955831}" Mar 21 12:36:46.336710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961-rootfs.mount: Deactivated successfully. 
Mar 21 12:36:46.341551 containerd[1489]: time="2025-03-21T12:36:46.341431275Z" level=info msg="StopContainer for \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\" returns successfully" Mar 21 12:36:46.342100 containerd[1489]: time="2025-03-21T12:36:46.342079968Z" level=info msg="StopPodSandbox for \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\"" Mar 21 12:36:46.342477 containerd[1489]: time="2025-03-21T12:36:46.342238411Z" level=info msg="Container to stop \"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 21 12:36:46.342477 containerd[1489]: time="2025-03-21T12:36:46.342267651Z" level=info msg="Container to stop \"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 21 12:36:46.342477 containerd[1489]: time="2025-03-21T12:36:46.342277652Z" level=info msg="Container to stop \"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 21 12:36:46.342477 containerd[1489]: time="2025-03-21T12:36:46.342288092Z" level=info msg="Container to stop \"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 21 12:36:46.342477 containerd[1489]: time="2025-03-21T12:36:46.342296932Z" level=info msg="Container to stop \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 21 12:36:46.345286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a-rootfs.mount: Deactivated successfully. 
Mar 21 12:36:46.348436 containerd[1489]: time="2025-03-21T12:36:46.348409413Z" level=info msg="shim disconnected" id=307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a namespace=k8s.io Mar 21 12:36:46.349217 containerd[1489]: time="2025-03-21T12:36:46.348434533Z" level=warning msg="cleaning up after shim disconnected" id=307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a namespace=k8s.io Mar 21 12:36:46.349217 containerd[1489]: time="2025-03-21T12:36:46.348463574Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 21 12:36:46.349862 systemd[1]: cri-containerd-4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8.scope: Deactivated successfully. Mar 21 12:36:46.376510 containerd[1489]: time="2025-03-21T12:36:46.376411925Z" level=info msg="TearDown network for sandbox \"307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a\" successfully" Mar 21 12:36:46.376510 containerd[1489]: time="2025-03-21T12:36:46.376447046Z" level=info msg="StopPodSandbox for \"307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a\" returns successfully" Mar 21 12:36:46.377917 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a-shm.mount: Deactivated successfully. Mar 21 12:36:46.380461 containerd[1489]: time="2025-03-21T12:36:46.380426924Z" level=info msg="received exit event sandbox_id:\"307bb08165793f82e74dbbe1b12895ca2a86670a0e65673a2de0918f43985e0a\" exit_status:137 exited_at:{seconds:1742560606 nanos:320955831}" Mar 21 12:36:46.380508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8-rootfs.mount: Deactivated successfully. 
Mar 21 12:36:46.382589 containerd[1489]: time="2025-03-21T12:36:46.382491085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" id:\"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" pid:2912 exit_status:137 exited_at:{seconds:1742560606 nanos:355054984}" Mar 21 12:36:46.383785 containerd[1489]: time="2025-03-21T12:36:46.383729349Z" level=info msg="received exit event sandbox_id:\"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" exit_status:137 exited_at:{seconds:1742560606 nanos:355054984}" Mar 21 12:36:46.384115 containerd[1489]: time="2025-03-21T12:36:46.384077756Z" level=info msg="TearDown network for sandbox \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" successfully" Mar 21 12:36:46.384115 containerd[1489]: time="2025-03-21T12:36:46.384104877Z" level=info msg="StopPodSandbox for \"4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8\" returns successfully" Mar 21 12:36:46.384817 containerd[1489]: time="2025-03-21T12:36:46.384706209Z" level=info msg="shim disconnected" id=4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8 namespace=k8s.io Mar 21 12:36:46.384991 containerd[1489]: time="2025-03-21T12:36:46.384780370Z" level=warning msg="cleaning up after shim disconnected" id=4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8 namespace=k8s.io Mar 21 12:36:46.384991 containerd[1489]: time="2025-03-21T12:36:46.384928293Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 21 12:36:46.582946 kubelet[2714]: I0321 12:36:46.582814 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-cilium-run\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.582946 kubelet[2714]: I0321 12:36:46.582854 2714 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-hostproc\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.582946 kubelet[2714]: I0321 12:36:46.582879 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ee24c3c-a6df-43ce-89d8-011d5512230d-cilium-config-path\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.582946 kubelet[2714]: I0321 12:36:46.582896 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-lib-modules\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.587856 kubelet[2714]: I0321 12:36:46.587680 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:36:46.587856 kubelet[2714]: I0321 12:36:46.587821 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:36:46.588013 kubelet[2714]: I0321 12:36:46.587882 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-hostproc" (OuterVolumeSpecName: "hostproc") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:36:46.588948 kubelet[2714]: I0321 12:36:46.582915 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ee24c3c-a6df-43ce-89d8-011d5512230d-clustermesh-secrets\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.588989 kubelet[2714]: I0321 12:36:46.588972 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-cilium-cgroup\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.589021 kubelet[2714]: I0321 12:36:46.588996 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trcvd\" (UniqueName: \"kubernetes.io/projected/dbfc05fd-6caf-4011-b1b5-69d435f3baeb-kube-api-access-trcvd\") pod \"dbfc05fd-6caf-4011-b1b5-69d435f3baeb\" (UID: \"dbfc05fd-6caf-4011-b1b5-69d435f3baeb\") " Mar 21 12:36:46.589021 kubelet[2714]: I0321 12:36:46.589016 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-host-proc-sys-kernel\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.589078 kubelet[2714]: I0321 12:36:46.589032 2714 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ee24c3c-a6df-43ce-89d8-011d5512230d-hubble-tls\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.589078 kubelet[2714]: I0321 12:36:46.589047 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-cni-path\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.589078 kubelet[2714]: I0321 12:36:46.589063 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-host-proc-sys-net\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.589162 kubelet[2714]: I0321 12:36:46.589078 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-xtables-lock\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.589162 kubelet[2714]: I0321 12:36:46.589095 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbfc05fd-6caf-4011-b1b5-69d435f3baeb-cilium-config-path\") pod \"dbfc05fd-6caf-4011-b1b5-69d435f3baeb\" (UID: \"dbfc05fd-6caf-4011-b1b5-69d435f3baeb\") " Mar 21 12:36:46.589162 kubelet[2714]: I0321 12:36:46.589115 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsltj\" (UniqueName: \"kubernetes.io/projected/4ee24c3c-a6df-43ce-89d8-011d5512230d-kube-api-access-hsltj\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" 
(UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.589162 kubelet[2714]: I0321 12:36:46.589129 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-etc-cni-netd\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.589162 kubelet[2714]: I0321 12:36:46.589145 2714 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-bpf-maps\") pod \"4ee24c3c-a6df-43ce-89d8-011d5512230d\" (UID: \"4ee24c3c-a6df-43ce-89d8-011d5512230d\") " Mar 21 12:36:46.589267 kubelet[2714]: I0321 12:36:46.589196 2714 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.589267 kubelet[2714]: I0321 12:36:46.589205 2714 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.589267 kubelet[2714]: I0321 12:36:46.589216 2714 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.589267 kubelet[2714]: I0321 12:36:46.589246 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:36:46.589267 kubelet[2714]: I0321 12:36:46.589267 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:36:46.601715 kubelet[2714]: I0321 12:36:46.601435 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ee24c3c-a6df-43ce-89d8-011d5512230d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 12:36:46.601715 kubelet[2714]: I0321 12:36:46.601496 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:36:46.601715 kubelet[2714]: I0321 12:36:46.601516 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-cni-path" (OuterVolumeSpecName: "cni-path") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:36:46.601715 kubelet[2714]: I0321 12:36:46.601533 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:36:46.601715 kubelet[2714]: I0321 12:36:46.601550 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:36:46.602287 kubelet[2714]: I0321 12:36:46.602188 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbfc05fd-6caf-4011-b1b5-69d435f3baeb-kube-api-access-trcvd" (OuterVolumeSpecName: "kube-api-access-trcvd") pod "dbfc05fd-6caf-4011-b1b5-69d435f3baeb" (UID: "dbfc05fd-6caf-4011-b1b5-69d435f3baeb"). InnerVolumeSpecName "kube-api-access-trcvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 12:36:46.602287 kubelet[2714]: I0321 12:36:46.602237 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee24c3c-a6df-43ce-89d8-011d5512230d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 12:36:46.602287 kubelet[2714]: I0321 12:36:46.602279 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:36:46.602386 kubelet[2714]: I0321 12:36:46.602372 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ee24c3c-a6df-43ce-89d8-011d5512230d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 12:36:46.603454 kubelet[2714]: I0321 12:36:46.603427 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee24c3c-a6df-43ce-89d8-011d5512230d-kube-api-access-hsltj" (OuterVolumeSpecName: "kube-api-access-hsltj") pod "4ee24c3c-a6df-43ce-89d8-011d5512230d" (UID: "4ee24c3c-a6df-43ce-89d8-011d5512230d"). InnerVolumeSpecName "kube-api-access-hsltj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 12:36:46.604064 kubelet[2714]: I0321 12:36:46.604027 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbfc05fd-6caf-4011-b1b5-69d435f3baeb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dbfc05fd-6caf-4011-b1b5-69d435f3baeb" (UID: "dbfc05fd-6caf-4011-b1b5-69d435f3baeb"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 12:36:46.689682 kubelet[2714]: I0321 12:36:46.689621 2714 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbfc05fd-6caf-4011-b1b5-69d435f3baeb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.689682 kubelet[2714]: I0321 12:36:46.689658 2714 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.689682 kubelet[2714]: I0321 12:36:46.689669 2714 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.689682 kubelet[2714]: I0321 12:36:46.689677 2714 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.689864 kubelet[2714]: I0321 12:36:46.689704 2714 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hsltj\" (UniqueName: \"kubernetes.io/projected/4ee24c3c-a6df-43ce-89d8-011d5512230d-kube-api-access-hsltj\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.689864 kubelet[2714]: I0321 12:36:46.689724 2714 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.689864 kubelet[2714]: I0321 12:36:46.689733 2714 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ee24c3c-a6df-43ce-89d8-011d5512230d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.689864 
kubelet[2714]: I0321 12:36:46.689742 2714 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.689864 kubelet[2714]: I0321 12:36:46.689769 2714 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-trcvd\" (UniqueName: \"kubernetes.io/projected/dbfc05fd-6caf-4011-b1b5-69d435f3baeb-kube-api-access-trcvd\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.689864 kubelet[2714]: I0321 12:36:46.689778 2714 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ee24c3c-a6df-43ce-89d8-011d5512230d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.689864 kubelet[2714]: I0321 12:36:46.689785 2714 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.689864 kubelet[2714]: I0321 12:36:46.689792 2714 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ee24c3c-a6df-43ce-89d8-011d5512230d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.690019 kubelet[2714]: I0321 12:36:46.689800 2714 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ee24c3c-a6df-43ce-89d8-011d5512230d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 21 12:36:46.691590 kubelet[2714]: I0321 12:36:46.691571 2714 scope.go:117] "RemoveContainer" containerID="b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027" Mar 21 12:36:46.694223 containerd[1489]: time="2025-03-21T12:36:46.693630742Z" level=info msg="RemoveContainer for \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\"" Mar 21 12:36:46.700922 
containerd[1489]: time="2025-03-21T12:36:46.700881245Z" level=info msg="RemoveContainer for \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\" returns successfully" Mar 21 12:36:46.701518 kubelet[2714]: I0321 12:36:46.701448 2714 scope.go:117] "RemoveContainer" containerID="b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027" Mar 21 12:36:46.702757 containerd[1489]: time="2025-03-21T12:36:46.702550838Z" level=error msg="ContainerStatus for \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\": not found" Mar 21 12:36:46.702954 systemd[1]: Removed slice kubepods-besteffort-poddbfc05fd_6caf_4011_b1b5_69d435f3baeb.slice - libcontainer container kubepods-besteffort-poddbfc05fd_6caf_4011_b1b5_69d435f3baeb.slice. Mar 21 12:36:46.704622 systemd[1]: Removed slice kubepods-burstable-pod4ee24c3c_a6df_43ce_89d8_011d5512230d.slice - libcontainer container kubepods-burstable-pod4ee24c3c_a6df_43ce_89d8_011d5512230d.slice. Mar 21 12:36:46.704835 systemd[1]: kubepods-burstable-pod4ee24c3c_a6df_43ce_89d8_011d5512230d.slice: Consumed 6.878s CPU time, 126.7M memory peak, 296K read from disk, 16.1M written to disk. 
Mar 21 12:36:46.711404 kubelet[2714]: E0321 12:36:46.711368 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\": not found" containerID="b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027" Mar 21 12:36:46.711487 kubelet[2714]: I0321 12:36:46.711407 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027"} err="failed to get container status \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\": rpc error: code = NotFound desc = an error occurred when try to find container \"b83c9dfa8c38bcdb4aa03eb16da2862043294a42ca65a013622676e2399de027\": not found" Mar 21 12:36:46.711536 kubelet[2714]: I0321 12:36:46.711487 2714 scope.go:117] "RemoveContainer" containerID="d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961" Mar 21 12:36:46.714466 containerd[1489]: time="2025-03-21T12:36:46.714417872Z" level=info msg="RemoveContainer for \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\"" Mar 21 12:36:46.724072 containerd[1489]: time="2025-03-21T12:36:46.723935060Z" level=info msg="RemoveContainer for \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\" returns successfully" Mar 21 12:36:46.724459 kubelet[2714]: I0321 12:36:46.724344 2714 scope.go:117] "RemoveContainer" containerID="4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a" Mar 21 12:36:46.726931 containerd[1489]: time="2025-03-21T12:36:46.726903479Z" level=info msg="RemoveContainer for \"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\"" Mar 21 12:36:46.730452 containerd[1489]: time="2025-03-21T12:36:46.730419708Z" level=info msg="RemoveContainer for \"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\" returns 
successfully" Mar 21 12:36:46.730617 kubelet[2714]: I0321 12:36:46.730594 2714 scope.go:117] "RemoveContainer" containerID="d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc" Mar 21 12:36:46.732737 containerd[1489]: time="2025-03-21T12:36:46.732684073Z" level=info msg="RemoveContainer for \"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\"" Mar 21 12:36:46.735896 containerd[1489]: time="2025-03-21T12:36:46.735866215Z" level=info msg="RemoveContainer for \"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\" returns successfully" Mar 21 12:36:46.736060 kubelet[2714]: I0321 12:36:46.736016 2714 scope.go:117] "RemoveContainer" containerID="ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70" Mar 21 12:36:46.737391 containerd[1489]: time="2025-03-21T12:36:46.737367125Z" level=info msg="RemoveContainer for \"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\"" Mar 21 12:36:46.740037 containerd[1489]: time="2025-03-21T12:36:46.740012857Z" level=info msg="RemoveContainer for \"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\" returns successfully" Mar 21 12:36:46.740182 kubelet[2714]: I0321 12:36:46.740133 2714 scope.go:117] "RemoveContainer" containerID="d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b" Mar 21 12:36:46.741531 containerd[1489]: time="2025-03-21T12:36:46.741503207Z" level=info msg="RemoveContainer for \"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\"" Mar 21 12:36:46.743994 containerd[1489]: time="2025-03-21T12:36:46.743970135Z" level=info msg="RemoveContainer for \"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\" returns successfully" Mar 21 12:36:46.744170 kubelet[2714]: I0321 12:36:46.744114 2714 scope.go:117] "RemoveContainer" containerID="d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961" Mar 21 12:36:46.744428 containerd[1489]: time="2025-03-21T12:36:46.744342903Z" level=error 
msg="ContainerStatus for \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\": not found" Mar 21 12:36:46.744515 kubelet[2714]: E0321 12:36:46.744492 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\": not found" containerID="d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961" Mar 21 12:36:46.744553 kubelet[2714]: I0321 12:36:46.744519 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961"} err="failed to get container status \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\": rpc error: code = NotFound desc = an error occurred when try to find container \"d80a5b2cfa3cc92ca57020eb0fd256475be345f1a49886ed8079c749c1447961\": not found" Mar 21 12:36:46.744553 kubelet[2714]: I0321 12:36:46.744540 2714 scope.go:117] "RemoveContainer" containerID="4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a" Mar 21 12:36:46.744784 containerd[1489]: time="2025-03-21T12:36:46.744715550Z" level=error msg="ContainerStatus for \"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\": not found" Mar 21 12:36:46.744895 kubelet[2714]: E0321 12:36:46.744875 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\": not found" 
containerID="4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a" Mar 21 12:36:46.744933 kubelet[2714]: I0321 12:36:46.744900 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a"} err="failed to get container status \"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a5331abf2039236fd02db5af943b4d02b0c1a89278e60b49ad291e9c962b29a\": not found" Mar 21 12:36:46.744933 kubelet[2714]: I0321 12:36:46.744919 2714 scope.go:117] "RemoveContainer" containerID="d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc" Mar 21 12:36:46.745196 containerd[1489]: time="2025-03-21T12:36:46.745106078Z" level=error msg="ContainerStatus for \"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\": not found" Mar 21 12:36:46.745254 kubelet[2714]: E0321 12:36:46.745240 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\": not found" containerID="d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc" Mar 21 12:36:46.745309 kubelet[2714]: I0321 12:36:46.745257 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc"} err="failed to get container status \"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"d6c149984a11442c06057b930a8cbdadf974efffcf158607e74dcd59c36cb1dc\": not found" Mar 21 
12:36:46.745309 kubelet[2714]: I0321 12:36:46.745272 2714 scope.go:117] "RemoveContainer" containerID="ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70" Mar 21 12:36:46.745444 containerd[1489]: time="2025-03-21T12:36:46.745416684Z" level=error msg="ContainerStatus for \"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\": not found" Mar 21 12:36:46.745568 kubelet[2714]: E0321 12:36:46.745543 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\": not found" containerID="ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70" Mar 21 12:36:46.745604 kubelet[2714]: I0321 12:36:46.745567 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70"} err="failed to get container status \"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac519f07a98dc90a02fb85e4548f3f3ae8eaeec1de39a16e85b6223bd6617a70\": not found" Mar 21 12:36:46.745604 kubelet[2714]: I0321 12:36:46.745583 2714 scope.go:117] "RemoveContainer" containerID="d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b" Mar 21 12:36:46.745795 containerd[1489]: time="2025-03-21T12:36:46.745763611Z" level=error msg="ContainerStatus for \"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\": not found" Mar 21 12:36:46.745922 kubelet[2714]: E0321 
12:36:46.745898 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\": not found" containerID="d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b" Mar 21 12:36:46.745961 kubelet[2714]: I0321 12:36:46.745931 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b"} err="failed to get container status \"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9661b7b1b0fc8231bd98c1fd938cc30a824a2f0b989ba8fb62c59eab843857b\": not found" Mar 21 12:36:47.293347 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4702a734e5d670d37a627de93663965b32276b8b6e203ab2d6c19e8c062667d8-shm.mount: Deactivated successfully. Mar 21 12:36:47.293456 systemd[1]: var-lib-kubelet-pods-4ee24c3c\x2da6df\x2d43ce\x2d89d8\x2d011d5512230d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhsltj.mount: Deactivated successfully. Mar 21 12:36:47.293517 systemd[1]: var-lib-kubelet-pods-dbfc05fd\x2d6caf\x2d4011\x2db1b5\x2d69d435f3baeb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtrcvd.mount: Deactivated successfully. Mar 21 12:36:47.293568 systemd[1]: var-lib-kubelet-pods-4ee24c3c\x2da6df\x2d43ce\x2d89d8\x2d011d5512230d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 21 12:36:47.293621 systemd[1]: var-lib-kubelet-pods-4ee24c3c\x2da6df\x2d43ce\x2d89d8\x2d011d5512230d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 21 12:36:47.518716 kubelet[2714]: E0321 12:36:47.518668 2714 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 21 12:36:48.192262 sshd[4325]: Connection closed by 10.0.0.1 port 35372 Mar 21 12:36:48.192882 sshd-session[4322]: pam_unix(sshd:session): session closed for user core Mar 21 12:36:48.199995 systemd[1]: sshd@24-10.0.0.98:22-10.0.0.1:35372.service: Deactivated successfully. Mar 21 12:36:48.201487 systemd[1]: session-25.scope: Deactivated successfully. Mar 21 12:36:48.201672 systemd[1]: session-25.scope: Consumed 1.163s CPU time, 25.8M memory peak. Mar 21 12:36:48.202144 systemd-logind[1470]: Session 25 logged out. Waiting for processes to exit. Mar 21 12:36:48.203899 systemd[1]: Started sshd@25-10.0.0.98:22-10.0.0.1:35376.service - OpenSSH per-connection server daemon (10.0.0.1:35376). Mar 21 12:36:48.208015 systemd-logind[1470]: Removed session 25. Mar 21 12:36:48.277234 sshd[4477]: Accepted publickey for core from 10.0.0.1 port 35376 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:36:48.278257 sshd-session[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:36:48.281961 systemd-logind[1470]: New session 26 of user core. Mar 21 12:36:48.291865 systemd[1]: Started session-26.scope - Session 26 of User core. 
Mar 21 12:36:48.473278 kubelet[2714]: I0321 12:36:48.473090 2714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ee24c3c-a6df-43ce-89d8-011d5512230d" path="/var/lib/kubelet/pods/4ee24c3c-a6df-43ce-89d8-011d5512230d/volumes" Mar 21 12:36:48.474508 kubelet[2714]: I0321 12:36:48.474193 2714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbfc05fd-6caf-4011-b1b5-69d435f3baeb" path="/var/lib/kubelet/pods/dbfc05fd-6caf-4011-b1b5-69d435f3baeb/volumes" Mar 21 12:36:49.152074 sshd[4480]: Connection closed by 10.0.0.1 port 35376 Mar 21 12:36:49.154889 sshd-session[4477]: pam_unix(sshd:session): session closed for user core Mar 21 12:36:49.162041 systemd[1]: sshd@25-10.0.0.98:22-10.0.0.1:35376.service: Deactivated successfully. Mar 21 12:36:49.170770 kubelet[2714]: I0321 12:36:49.167553 2714 topology_manager.go:215] "Topology Admit Handler" podUID="405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1" podNamespace="kube-system" podName="cilium-lp6xc" Mar 21 12:36:49.170770 kubelet[2714]: E0321 12:36:49.167678 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ee24c3c-a6df-43ce-89d8-011d5512230d" containerName="mount-bpf-fs" Mar 21 12:36:49.170770 kubelet[2714]: E0321 12:36:49.167687 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ee24c3c-a6df-43ce-89d8-011d5512230d" containerName="clean-cilium-state" Mar 21 12:36:49.170770 kubelet[2714]: E0321 12:36:49.167693 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ee24c3c-a6df-43ce-89d8-011d5512230d" containerName="cilium-agent" Mar 21 12:36:49.170770 kubelet[2714]: E0321 12:36:49.167700 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dbfc05fd-6caf-4011-b1b5-69d435f3baeb" containerName="cilium-operator" Mar 21 12:36:49.170770 kubelet[2714]: E0321 12:36:49.167706 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ee24c3c-a6df-43ce-89d8-011d5512230d" containerName="mount-cgroup" Mar 21 
12:36:49.170770 kubelet[2714]: E0321 12:36:49.167711 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ee24c3c-a6df-43ce-89d8-011d5512230d" containerName="apply-sysctl-overwrites" Mar 21 12:36:49.170770 kubelet[2714]: I0321 12:36:49.167733 2714 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbfc05fd-6caf-4011-b1b5-69d435f3baeb" containerName="cilium-operator" Mar 21 12:36:49.170770 kubelet[2714]: I0321 12:36:49.167740 2714 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee24c3c-a6df-43ce-89d8-011d5512230d" containerName="cilium-agent" Mar 21 12:36:49.170723 systemd[1]: session-26.scope: Deactivated successfully. Mar 21 12:36:49.176855 systemd-logind[1470]: Session 26 logged out. Waiting for processes to exit. Mar 21 12:36:49.181200 systemd[1]: Started sshd@26-10.0.0.98:22-10.0.0.1:35384.service - OpenSSH per-connection server daemon (10.0.0.1:35384). Mar 21 12:36:49.184655 systemd-logind[1470]: Removed session 26. Mar 21 12:36:49.196575 systemd[1]: Created slice kubepods-burstable-pod405cb6bb_1a9e_47fb_8c9f_e4c507f52ed1.slice - libcontainer container kubepods-burstable-pod405cb6bb_1a9e_47fb_8c9f_e4c507f52ed1.slice. 
Mar 21 12:36:49.206205 kubelet[2714]: I0321 12:36:49.205856 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-cilium-cgroup\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206205 kubelet[2714]: I0321 12:36:49.205891 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-host-proc-sys-kernel\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206205 kubelet[2714]: I0321 12:36:49.205908 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljsmc\" (UniqueName: \"kubernetes.io/projected/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-kube-api-access-ljsmc\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206205 kubelet[2714]: I0321 12:36:49.205924 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-cni-path\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206205 kubelet[2714]: I0321 12:36:49.205939 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-xtables-lock\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206205 kubelet[2714]: I0321 12:36:49.205954 2714 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-bpf-maps\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206411 kubelet[2714]: I0321 12:36:49.205968 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-hostproc\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206411 kubelet[2714]: I0321 12:36:49.205983 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-clustermesh-secrets\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206411 kubelet[2714]: I0321 12:36:49.205997 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-cilium-config-path\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206411 kubelet[2714]: I0321 12:36:49.206012 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-cilium-ipsec-secrets\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206411 kubelet[2714]: I0321 12:36:49.206032 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-hubble-tls\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206411 kubelet[2714]: I0321 12:36:49.206051 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-cilium-run\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206526 kubelet[2714]: I0321 12:36:49.206069 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-lib-modules\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206526 kubelet[2714]: I0321 12:36:49.206086 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-etc-cni-netd\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.206526 kubelet[2714]: I0321 12:36:49.206100 2714 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1-host-proc-sys-net\") pod \"cilium-lp6xc\" (UID: \"405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1\") " pod="kube-system/cilium-lp6xc" Mar 21 12:36:49.233017 sshd[4491]: Accepted publickey for core from 10.0.0.1 port 35384 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:36:49.234154 sshd-session[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:36:49.238334 systemd-logind[1470]: New session 27 of 
user core. Mar 21 12:36:49.247903 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 21 12:36:49.296324 sshd[4494]: Connection closed by 10.0.0.1 port 35384 Mar 21 12:36:49.296803 sshd-session[4491]: pam_unix(sshd:session): session closed for user core Mar 21 12:36:49.320444 systemd[1]: sshd@26-10.0.0.98:22-10.0.0.1:35384.service: Deactivated successfully. Mar 21 12:36:49.322019 systemd[1]: session-27.scope: Deactivated successfully. Mar 21 12:36:49.323301 systemd-logind[1470]: Session 27 logged out. Waiting for processes to exit. Mar 21 12:36:49.324686 systemd[1]: Started sshd@27-10.0.0.98:22-10.0.0.1:35398.service - OpenSSH per-connection server daemon (10.0.0.1:35398). Mar 21 12:36:49.327812 systemd-logind[1470]: Removed session 27. Mar 21 12:36:49.370871 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 35398 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:36:49.372099 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:36:49.376827 systemd-logind[1470]: New session 28 of user core. Mar 21 12:36:49.401418 systemd[1]: Started session-28.scope - Session 28 of User core. 
Mar 21 12:36:49.503513 kubelet[2714]: E0321 12:36:49.503364 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:49.503993 containerd[1489]: time="2025-03-21T12:36:49.503847619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lp6xc,Uid:405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1,Namespace:kube-system,Attempt:0,}" Mar 21 12:36:49.520930 containerd[1489]: time="2025-03-21T12:36:49.520887536Z" level=info msg="connecting to shim 007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce" address="unix:///run/containerd/s/b8a0df3972e46a91d1db2448d2defe7dfc39789236b807b1b118842592741ff6" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:36:49.544926 systemd[1]: Started cri-containerd-007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce.scope - libcontainer container 007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce. 
Mar 21 12:36:49.570054 containerd[1489]: time="2025-03-21T12:36:49.569913526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lp6xc,Uid:405cb6bb-1a9e-47fb-8c9f-e4c507f52ed1,Namespace:kube-system,Attempt:0,} returns sandbox id \"007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce\"" Mar 21 12:36:49.571190 kubelet[2714]: E0321 12:36:49.571108 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:49.573965 containerd[1489]: time="2025-03-21T12:36:49.573843719Z" level=info msg="CreateContainer within sandbox \"007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 21 12:36:49.581092 containerd[1489]: time="2025-03-21T12:36:49.581047933Z" level=info msg="Container 06789627eb788035254257f60581f9816e3372f0a9916c5ed0a1ef299125aefb: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:36:49.587067 containerd[1489]: time="2025-03-21T12:36:49.587007284Z" level=info msg="CreateContainer within sandbox \"007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"06789627eb788035254257f60581f9816e3372f0a9916c5ed0a1ef299125aefb\"" Mar 21 12:36:49.587717 containerd[1489]: time="2025-03-21T12:36:49.587675136Z" level=info msg="StartContainer for \"06789627eb788035254257f60581f9816e3372f0a9916c5ed0a1ef299125aefb\"" Mar 21 12:36:49.588547 containerd[1489]: time="2025-03-21T12:36:49.588521792Z" level=info msg="connecting to shim 06789627eb788035254257f60581f9816e3372f0a9916c5ed0a1ef299125aefb" address="unix:///run/containerd/s/b8a0df3972e46a91d1db2448d2defe7dfc39789236b807b1b118842592741ff6" protocol=ttrpc version=3 Mar 21 12:36:49.605902 systemd[1]: Started cri-containerd-06789627eb788035254257f60581f9816e3372f0a9916c5ed0a1ef299125aefb.scope - libcontainer 
container 06789627eb788035254257f60581f9816e3372f0a9916c5ed0a1ef299125aefb. Mar 21 12:36:49.629615 containerd[1489]: time="2025-03-21T12:36:49.629506793Z" level=info msg="StartContainer for \"06789627eb788035254257f60581f9816e3372f0a9916c5ed0a1ef299125aefb\" returns successfully" Mar 21 12:36:49.644345 systemd[1]: cri-containerd-06789627eb788035254257f60581f9816e3372f0a9916c5ed0a1ef299125aefb.scope: Deactivated successfully. Mar 21 12:36:49.645937 containerd[1489]: time="2025-03-21T12:36:49.645819416Z" level=info msg="received exit event container_id:\"06789627eb788035254257f60581f9816e3372f0a9916c5ed0a1ef299125aefb\" id:\"06789627eb788035254257f60581f9816e3372f0a9916c5ed0a1ef299125aefb\" pid:4575 exited_at:{seconds:1742560609 nanos:645500250}" Mar 21 12:36:49.645937 containerd[1489]: time="2025-03-21T12:36:49.645827096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06789627eb788035254257f60581f9816e3372f0a9916c5ed0a1ef299125aefb\" id:\"06789627eb788035254257f60581f9816e3372f0a9916c5ed0a1ef299125aefb\" pid:4575 exited_at:{seconds:1742560609 nanos:645500250}" Mar 21 12:36:49.708053 kubelet[2714]: E0321 12:36:49.707821 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:49.710876 containerd[1489]: time="2025-03-21T12:36:49.710752582Z" level=info msg="CreateContainer within sandbox \"007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 21 12:36:49.724163 containerd[1489]: time="2025-03-21T12:36:49.724098910Z" level=info msg="Container 0da7eb0428f045cce5b4a9aa47d624c53103910062293132fdcdf01ab57f820c: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:36:49.731171 containerd[1489]: time="2025-03-21T12:36:49.731127240Z" level=info msg="CreateContainer within sandbox 
\"007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0da7eb0428f045cce5b4a9aa47d624c53103910062293132fdcdf01ab57f820c\"" Mar 21 12:36:49.731877 containerd[1489]: time="2025-03-21T12:36:49.731826813Z" level=info msg="StartContainer for \"0da7eb0428f045cce5b4a9aa47d624c53103910062293132fdcdf01ab57f820c\"" Mar 21 12:36:49.748227 containerd[1489]: time="2025-03-21T12:36:49.748181317Z" level=info msg="connecting to shim 0da7eb0428f045cce5b4a9aa47d624c53103910062293132fdcdf01ab57f820c" address="unix:///run/containerd/s/b8a0df3972e46a91d1db2448d2defe7dfc39789236b807b1b118842592741ff6" protocol=ttrpc version=3 Mar 21 12:36:49.765895 systemd[1]: Started cri-containerd-0da7eb0428f045cce5b4a9aa47d624c53103910062293132fdcdf01ab57f820c.scope - libcontainer container 0da7eb0428f045cce5b4a9aa47d624c53103910062293132fdcdf01ab57f820c. Mar 21 12:36:49.788510 containerd[1489]: time="2025-03-21T12:36:49.788474586Z" level=info msg="StartContainer for \"0da7eb0428f045cce5b4a9aa47d624c53103910062293132fdcdf01ab57f820c\" returns successfully" Mar 21 12:36:49.800341 systemd[1]: cri-containerd-0da7eb0428f045cce5b4a9aa47d624c53103910062293132fdcdf01ab57f820c.scope: Deactivated successfully. 
Mar 21 12:36:49.806039 containerd[1489]: time="2025-03-21T12:36:49.806009151Z" level=info msg="received exit event container_id:\"0da7eb0428f045cce5b4a9aa47d624c53103910062293132fdcdf01ab57f820c\" id:\"0da7eb0428f045cce5b4a9aa47d624c53103910062293132fdcdf01ab57f820c\" pid:4620 exited_at:{seconds:1742560609 nanos:800818775}" Mar 21 12:36:49.806246 containerd[1489]: time="2025-03-21T12:36:49.806100273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0da7eb0428f045cce5b4a9aa47d624c53103910062293132fdcdf01ab57f820c\" id:\"0da7eb0428f045cce5b4a9aa47d624c53103910062293132fdcdf01ab57f820c\" pid:4620 exited_at:{seconds:1742560609 nanos:800818775}" Mar 21 12:36:50.314703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492512554.mount: Deactivated successfully. Mar 21 12:36:50.711574 kubelet[2714]: E0321 12:36:50.711522 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:50.715359 containerd[1489]: time="2025-03-21T12:36:50.715308983Z" level=info msg="CreateContainer within sandbox \"007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 21 12:36:50.724591 containerd[1489]: time="2025-03-21T12:36:50.723954700Z" level=info msg="Container 884c1d9d9dbf1888f42a065cb528bdb330561de2c681ce3857c745447a336840: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:36:50.728559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2878176795.mount: Deactivated successfully. 
Mar 21 12:36:50.733822 containerd[1489]: time="2025-03-21T12:36:50.733785479Z" level=info msg="CreateContainer within sandbox \"007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"884c1d9d9dbf1888f42a065cb528bdb330561de2c681ce3857c745447a336840\"" Mar 21 12:36:50.734311 containerd[1489]: time="2025-03-21T12:36:50.734287568Z" level=info msg="StartContainer for \"884c1d9d9dbf1888f42a065cb528bdb330561de2c681ce3857c745447a336840\"" Mar 21 12:36:50.735612 containerd[1489]: time="2025-03-21T12:36:50.735560072Z" level=info msg="connecting to shim 884c1d9d9dbf1888f42a065cb528bdb330561de2c681ce3857c745447a336840" address="unix:///run/containerd/s/b8a0df3972e46a91d1db2448d2defe7dfc39789236b807b1b118842592741ff6" protocol=ttrpc version=3 Mar 21 12:36:50.754883 systemd[1]: Started cri-containerd-884c1d9d9dbf1888f42a065cb528bdb330561de2c681ce3857c745447a336840.scope - libcontainer container 884c1d9d9dbf1888f42a065cb528bdb330561de2c681ce3857c745447a336840. Mar 21 12:36:50.783015 systemd[1]: cri-containerd-884c1d9d9dbf1888f42a065cb528bdb330561de2c681ce3857c745447a336840.scope: Deactivated successfully. 
Mar 21 12:36:50.783832 containerd[1489]: time="2025-03-21T12:36:50.782891374Z" level=info msg="StartContainer for \"884c1d9d9dbf1888f42a065cb528bdb330561de2c681ce3857c745447a336840\" returns successfully" Mar 21 12:36:50.783876 containerd[1489]: time="2025-03-21T12:36:50.783836231Z" level=info msg="received exit event container_id:\"884c1d9d9dbf1888f42a065cb528bdb330561de2c681ce3857c745447a336840\" id:\"884c1d9d9dbf1888f42a065cb528bdb330561de2c681ce3857c745447a336840\" pid:4664 exited_at:{seconds:1742560610 nanos:783575626}" Mar 21 12:36:50.784056 containerd[1489]: time="2025-03-21T12:36:50.784028554Z" level=info msg="TaskExit event in podsandbox handler container_id:\"884c1d9d9dbf1888f42a065cb528bdb330561de2c681ce3857c745447a336840\" id:\"884c1d9d9dbf1888f42a065cb528bdb330561de2c681ce3857c745447a336840\" pid:4664 exited_at:{seconds:1742560610 nanos:783575626}" Mar 21 12:36:50.801272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-884c1d9d9dbf1888f42a065cb528bdb330561de2c681ce3857c745447a336840-rootfs.mount: Deactivated successfully. 
Mar 21 12:36:51.717796 kubelet[2714]: E0321 12:36:51.717488 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:51.721367 containerd[1489]: time="2025-03-21T12:36:51.721326775Z" level=info msg="CreateContainer within sandbox \"007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 21 12:36:51.728980 containerd[1489]: time="2025-03-21T12:36:51.728941031Z" level=info msg="Container 52a0c7ee46b773909da07ebb86831a1e99275f5dcbc13a17535ff0223d0bada3: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:36:51.736231 containerd[1489]: time="2025-03-21T12:36:51.736182440Z" level=info msg="CreateContainer within sandbox \"007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52a0c7ee46b773909da07ebb86831a1e99275f5dcbc13a17535ff0223d0bada3\"" Mar 21 12:36:51.736857 containerd[1489]: time="2025-03-21T12:36:51.736816131Z" level=info msg="StartContainer for \"52a0c7ee46b773909da07ebb86831a1e99275f5dcbc13a17535ff0223d0bada3\"" Mar 21 12:36:51.737921 containerd[1489]: time="2025-03-21T12:36:51.737835429Z" level=info msg="connecting to shim 52a0c7ee46b773909da07ebb86831a1e99275f5dcbc13a17535ff0223d0bada3" address="unix:///run/containerd/s/b8a0df3972e46a91d1db2448d2defe7dfc39789236b807b1b118842592741ff6" protocol=ttrpc version=3 Mar 21 12:36:51.754928 systemd[1]: Started cri-containerd-52a0c7ee46b773909da07ebb86831a1e99275f5dcbc13a17535ff0223d0bada3.scope - libcontainer container 52a0c7ee46b773909da07ebb86831a1e99275f5dcbc13a17535ff0223d0bada3. Mar 21 12:36:51.776674 systemd[1]: cri-containerd-52a0c7ee46b773909da07ebb86831a1e99275f5dcbc13a17535ff0223d0bada3.scope: Deactivated successfully. 
Mar 21 12:36:51.777050 containerd[1489]: time="2025-03-21T12:36:51.777015569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52a0c7ee46b773909da07ebb86831a1e99275f5dcbc13a17535ff0223d0bada3\" id:\"52a0c7ee46b773909da07ebb86831a1e99275f5dcbc13a17535ff0223d0bada3\" pid:4703 exited_at:{seconds:1742560611 nanos:776736364}" Mar 21 12:36:51.778509 containerd[1489]: time="2025-03-21T12:36:51.778471835Z" level=info msg="received exit event container_id:\"52a0c7ee46b773909da07ebb86831a1e99275f5dcbc13a17535ff0223d0bada3\" id:\"52a0c7ee46b773909da07ebb86831a1e99275f5dcbc13a17535ff0223d0bada3\" pid:4703 exited_at:{seconds:1742560611 nanos:776736364}" Mar 21 12:36:51.780702 containerd[1489]: time="2025-03-21T12:36:51.780613274Z" level=info msg="StartContainer for \"52a0c7ee46b773909da07ebb86831a1e99275f5dcbc13a17535ff0223d0bada3\" returns successfully" Mar 21 12:36:51.799795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52a0c7ee46b773909da07ebb86831a1e99275f5dcbc13a17535ff0223d0bada3-rootfs.mount: Deactivated successfully. 
Mar 21 12:36:52.519969 kubelet[2714]: E0321 12:36:52.519929 2714 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 21 12:36:52.722493 kubelet[2714]: E0321 12:36:52.722457 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:52.725435 containerd[1489]: time="2025-03-21T12:36:52.725389787Z" level=info msg="CreateContainer within sandbox \"007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 21 12:36:52.735613 containerd[1489]: time="2025-03-21T12:36:52.735573605Z" level=info msg="Container ec51e2671f4b0cec2d1193a09406d91ca9af847caffe40020201123b88450297: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:36:52.744536 containerd[1489]: time="2025-03-21T12:36:52.744490121Z" level=info msg="CreateContainer within sandbox \"007fda6fdd077ef2c0d88acedbcd8bd78e2c8f4ea2ea8ab333d5efc8c33667ce\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ec51e2671f4b0cec2d1193a09406d91ca9af847caffe40020201123b88450297\""
Mar 21 12:36:52.744948 containerd[1489]: time="2025-03-21T12:36:52.744912609Z" level=info msg="StartContainer for \"ec51e2671f4b0cec2d1193a09406d91ca9af847caffe40020201123b88450297\""
Mar 21 12:36:52.745948 containerd[1489]: time="2025-03-21T12:36:52.745915306Z" level=info msg="connecting to shim ec51e2671f4b0cec2d1193a09406d91ca9af847caffe40020201123b88450297" address="unix:///run/containerd/s/b8a0df3972e46a91d1db2448d2defe7dfc39789236b807b1b118842592741ff6" protocol=ttrpc version=3
Mar 21 12:36:52.765965 systemd[1]: Started cri-containerd-ec51e2671f4b0cec2d1193a09406d91ca9af847caffe40020201123b88450297.scope - libcontainer container ec51e2671f4b0cec2d1193a09406d91ca9af847caffe40020201123b88450297.
Mar 21 12:36:52.793790 containerd[1489]: time="2025-03-21T12:36:52.793349698Z" level=info msg="StartContainer for \"ec51e2671f4b0cec2d1193a09406d91ca9af847caffe40020201123b88450297\" returns successfully"
Mar 21 12:36:52.838145 containerd[1489]: time="2025-03-21T12:36:52.838100682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec51e2671f4b0cec2d1193a09406d91ca9af847caffe40020201123b88450297\" id:\"a9ef1267b077ef98f544635bec49e74bdc6c454924d00eceb24214920091814c\" pid:4771 exited_at:{seconds:1742560612 nanos:837806357}"
Mar 21 12:36:53.043801 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 21 12:36:53.724160 kubelet[2714]: I0321 12:36:53.724089 2714 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-21T12:36:53Z","lastTransitionTime":"2025-03-21T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 21 12:36:53.734572 kubelet[2714]: E0321 12:36:53.733385 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:55.471963 kubelet[2714]: E0321 12:36:55.471927 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:55.505601 kubelet[2714]: E0321 12:36:55.505554 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:55.799400 containerd[1489]: time="2025-03-21T12:36:55.799271728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec51e2671f4b0cec2d1193a09406d91ca9af847caffe40020201123b88450297\" id:\"231d907ce038b6cd999b25a0ee9c3f89356668ac9ac033a64acd497457bb0e18\" pid:5232 exit_status:1 exited_at:{seconds:1742560615 nanos:798966603}"
Mar 21 12:36:55.802834 systemd-networkd[1431]: lxc_health: Link UP
Mar 21 12:36:55.816999 systemd-networkd[1431]: lxc_health: Gained carrier
Mar 21 12:36:56.933905 systemd-networkd[1431]: lxc_health: Gained IPv6LL
Mar 21 12:36:57.507542 kubelet[2714]: E0321 12:36:57.507140 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:57.520280 kubelet[2714]: I0321 12:36:57.520220 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lp6xc" podStartSLOduration=8.520205609 podStartE2EDuration="8.520205609s" podCreationTimestamp="2025-03-21 12:36:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:36:53.746323356 +0000 UTC m=+91.358439057" watchObservedRunningTime="2025-03-21 12:36:57.520205609 +0000 UTC m=+95.132321310"
Mar 21 12:36:57.740885 kubelet[2714]: E0321 12:36:57.740830 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:57.943704 containerd[1489]: time="2025-03-21T12:36:57.943647340Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec51e2671f4b0cec2d1193a09406d91ca9af847caffe40020201123b88450297\" id:\"90faa85386aaa4f0709f1c7aee4431211e603c6156df836164e2af2fcf356f02\" pid:5311 exited_at:{seconds:1742560617 nanos:943319854}"
Mar 21 12:37:00.043087 containerd[1489]: time="2025-03-21T12:37:00.043029521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec51e2671f4b0cec2d1193a09406d91ca9af847caffe40020201123b88450297\" id:\"d975cac23eda19168e38938c8c5b804598c36f3ab1719febd2cbe50e571dd3e8\" pid:5344 exited_at:{seconds:1742560620 nanos:42521193}"
Mar 21 12:37:02.137978 containerd[1489]: time="2025-03-21T12:37:02.137933391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec51e2671f4b0cec2d1193a09406d91ca9af847caffe40020201123b88450297\" id:\"aa46813c20397f7bc905e68c933d365b9ddee36a6bb096301793904245b409e2\" pid:5367 exited_at:{seconds:1742560622 nanos:137574305}"
Mar 21 12:37:02.169359 sshd[4509]: Connection closed by 10.0.0.1 port 35398
Mar 21 12:37:02.170252 sshd-session[4506]: pam_unix(sshd:session): session closed for user core
Mar 21 12:37:02.174210 systemd[1]: sshd@27-10.0.0.98:22-10.0.0.1:35398.service: Deactivated successfully.
Mar 21 12:37:02.177594 systemd[1]: session-28.scope: Deactivated successfully.
Mar 21 12:37:02.178277 systemd-logind[1470]: Session 28 logged out. Waiting for processes to exit.
Mar 21 12:37:02.179193 systemd-logind[1470]: Removed session 28.