Mar 20 21:27:19.891346 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 20 21:27:19.891382 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Mar 20 19:37:53 -00 2025
Mar 20 21:27:19.891393 kernel: KASLR enabled
Mar 20 21:27:19.891398 kernel: efi: EFI v2.7 by EDK II
Mar 20 21:27:19.891404 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40498
Mar 20 21:27:19.891409 kernel: random: crng init done
Mar 20 21:27:19.891416 kernel: secureboot: Secure boot disabled
Mar 20 21:27:19.891422 kernel: ACPI: Early table checksum verification disabled
Mar 20 21:27:19.891428 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 20 21:27:19.891435 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 20 21:27:19.891441 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:27:19.891446 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:27:19.891452 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:27:19.891458 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:27:19.891464 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:27:19.891472 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:27:19.891478 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:27:19.891484 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:27:19.891490 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:27:19.891496 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 20 21:27:19.891502 kernel: NUMA: Failed to initialise from firmware
Mar 20 21:27:19.891508 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 21:27:19.891514 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Mar 20 21:27:19.891520 kernel: Zone ranges:
Mar 20 21:27:19.891526 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 21:27:19.891533 kernel: DMA32 empty
Mar 20 21:27:19.891539 kernel: Normal empty
Mar 20 21:27:19.891545 kernel: Movable zone start for each node
Mar 20 21:27:19.891550 kernel: Early memory node ranges
Mar 20 21:27:19.891556 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Mar 20 21:27:19.891562 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Mar 20 21:27:19.891568 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Mar 20 21:27:19.891574 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 20 21:27:19.891580 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 20 21:27:19.891586 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 20 21:27:19.891592 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 20 21:27:19.891598 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 20 21:27:19.891605 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 20 21:27:19.891611 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 21:27:19.891617 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 20 21:27:19.891625 kernel: psci: probing for conduit method from ACPI.
Mar 20 21:27:19.891632 kernel: psci: PSCIv1.1 detected in firmware.
Mar 20 21:27:19.891638 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 20 21:27:19.891646 kernel: psci: Trusted OS migration not required
Mar 20 21:27:19.891652 kernel: psci: SMC Calling Convention v1.1
Mar 20 21:27:19.891659 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 20 21:27:19.891665 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 20 21:27:19.891671 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 20 21:27:19.891678 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 20 21:27:19.891684 kernel: Detected PIPT I-cache on CPU0
Mar 20 21:27:19.891690 kernel: CPU features: detected: GIC system register CPU interface
Mar 20 21:27:19.891697 kernel: CPU features: detected: Hardware dirty bit management
Mar 20 21:27:19.891711 kernel: CPU features: detected: Spectre-v4
Mar 20 21:27:19.891719 kernel: CPU features: detected: Spectre-BHB
Mar 20 21:27:19.891725 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 20 21:27:19.891732 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 20 21:27:19.891738 kernel: CPU features: detected: ARM erratum 1418040
Mar 20 21:27:19.891745 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 20 21:27:19.891751 kernel: alternatives: applying boot alternatives
Mar 20 21:27:19.891758 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0beb08f475de014f6ab4e06127ed84e918521fd470084f537ae9409b262d0ed3
Mar 20 21:27:19.891765 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 20 21:27:19.891771 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 20 21:27:19.891778 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 20 21:27:19.891784 kernel: Fallback order for Node 0: 0
Mar 20 21:27:19.891792 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 20 21:27:19.891798 kernel: Policy zone: DMA
Mar 20 21:27:19.891804 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 20 21:27:19.891810 kernel: software IO TLB: area num 4.
Mar 20 21:27:19.891817 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 20 21:27:19.891824 kernel: Memory: 2387408K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 184880K reserved, 0K cma-reserved)
Mar 20 21:27:19.891830 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 20 21:27:19.891837 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 20 21:27:19.891844 kernel: rcu: RCU event tracing is enabled.
Mar 20 21:27:19.891850 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 20 21:27:19.891857 kernel: Trampoline variant of Tasks RCU enabled.
Mar 20 21:27:19.891863 kernel: Tracing variant of Tasks RCU enabled.
Mar 20 21:27:19.891871 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 20 21:27:19.891877 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 20 21:27:19.891884 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 20 21:27:19.891904 kernel: GICv3: 256 SPIs implemented
Mar 20 21:27:19.891914 kernel: GICv3: 0 Extended SPIs implemented
Mar 20 21:27:19.891921 kernel: Root IRQ handler: gic_handle_irq
Mar 20 21:27:19.891928 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 20 21:27:19.891934 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 20 21:27:19.891941 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 20 21:27:19.891947 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 20 21:27:19.891954 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 20 21:27:19.891962 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 20 21:27:19.891969 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 20 21:27:19.891975 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 20 21:27:19.891982 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 21:27:19.891988 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 20 21:27:19.891995 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 20 21:27:19.892001 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 20 21:27:19.892007 kernel: arm-pv: using stolen time PV
Mar 20 21:27:19.892014 kernel: Console: colour dummy device 80x25
Mar 20 21:27:19.892021 kernel: ACPI: Core revision 20230628
Mar 20 21:27:19.892027 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 20 21:27:19.892035 kernel: pid_max: default: 32768 minimum: 301
Mar 20 21:27:19.892042 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 20 21:27:19.892054 kernel: landlock: Up and running.
Mar 20 21:27:19.892060 kernel: SELinux: Initializing.
Mar 20 21:27:19.892067 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 21:27:19.892074 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 21:27:19.892080 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 21:27:19.892087 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 21:27:19.892093 kernel: rcu: Hierarchical SRCU implementation.
Mar 20 21:27:19.892102 kernel: rcu: Max phase no-delay instances is 400.
Mar 20 21:27:19.892108 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 20 21:27:19.892114 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 20 21:27:19.892121 kernel: Remapping and enabling EFI services.
Mar 20 21:27:19.892127 kernel: smp: Bringing up secondary CPUs ...
Mar 20 21:27:19.892134 kernel: Detected PIPT I-cache on CPU1
Mar 20 21:27:19.892140 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 20 21:27:19.892147 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 20 21:27:19.892154 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 21:27:19.892161 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 20 21:27:19.892168 kernel: Detected PIPT I-cache on CPU2
Mar 20 21:27:19.892179 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 20 21:27:19.892187 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 20 21:27:19.892194 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 21:27:19.892200 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 20 21:27:19.892207 kernel: Detected PIPT I-cache on CPU3
Mar 20 21:27:19.892214 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 20 21:27:19.892221 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 20 21:27:19.892229 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 21:27:19.892236 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 20 21:27:19.892243 kernel: smp: Brought up 1 node, 4 CPUs
Mar 20 21:27:19.892249 kernel: SMP: Total of 4 processors activated.
Mar 20 21:27:19.892256 kernel: CPU features: detected: 32-bit EL0 Support
Mar 20 21:27:19.892263 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 20 21:27:19.892270 kernel: CPU features: detected: Common not Private translations
Mar 20 21:27:19.892277 kernel: CPU features: detected: CRC32 instructions
Mar 20 21:27:19.892286 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 20 21:27:19.892292 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 20 21:27:19.892299 kernel: CPU features: detected: LSE atomic instructions
Mar 20 21:27:19.892306 kernel: CPU features: detected: Privileged Access Never
Mar 20 21:27:19.892313 kernel: CPU features: detected: RAS Extension Support
Mar 20 21:27:19.892320 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 20 21:27:19.892327 kernel: CPU: All CPU(s) started at EL1
Mar 20 21:27:19.892333 kernel: alternatives: applying system-wide alternatives
Mar 20 21:27:19.892340 kernel: devtmpfs: initialized
Mar 20 21:27:19.892347 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 20 21:27:19.892356 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 20 21:27:19.892363 kernel: pinctrl core: initialized pinctrl subsystem
Mar 20 21:27:19.892369 kernel: SMBIOS 3.0.0 present.
Mar 20 21:27:19.892376 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 20 21:27:19.892383 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 20 21:27:19.892390 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 20 21:27:19.892397 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 20 21:27:19.892406 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 20 21:27:19.892414 kernel: audit: initializing netlink subsys (disabled)
Mar 20 21:27:19.892421 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Mar 20 21:27:19.892428 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 20 21:27:19.892435 kernel: cpuidle: using governor menu
Mar 20 21:27:19.892442 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 20 21:27:19.892448 kernel: ASID allocator initialised with 32768 entries
Mar 20 21:27:19.892477 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 20 21:27:19.892485 kernel: Serial: AMBA PL011 UART driver
Mar 20 21:27:19.892491 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 20 21:27:19.892500 kernel: Modules: 0 pages in range for non-PLT usage
Mar 20 21:27:19.892507 kernel: Modules: 509248 pages in range for PLT usage
Mar 20 21:27:19.892514 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 20 21:27:19.892520 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 20 21:27:19.892527 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 20 21:27:19.892534 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 20 21:27:19.892541 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 20 21:27:19.892548 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 20 21:27:19.892555 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 20 21:27:19.892563 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 20 21:27:19.892570 kernel: ACPI: Added _OSI(Module Device)
Mar 20 21:27:19.892576 kernel: ACPI: Added _OSI(Processor Device)
Mar 20 21:27:19.892583 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 20 21:27:19.892590 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 20 21:27:19.892597 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 20 21:27:19.892603 kernel: ACPI: Interpreter enabled
Mar 20 21:27:19.892610 kernel: ACPI: Using GIC for interrupt routing
Mar 20 21:27:19.892617 kernel: ACPI: MCFG table detected, 1 entries
Mar 20 21:27:19.892624 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 20 21:27:19.892632 kernel: printk: console [ttyAMA0] enabled
Mar 20 21:27:19.892639 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 20 21:27:19.892781 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 20 21:27:19.892857 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 20 21:27:19.892945 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 20 21:27:19.893015 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 20 21:27:19.893078 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 20 21:27:19.893090 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 20 21:27:19.893097 kernel: PCI host bridge to bus 0000:00
Mar 20 21:27:19.893166 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 20 21:27:19.893225 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 20 21:27:19.893282 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 20 21:27:19.893339 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 20 21:27:19.893418 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 20 21:27:19.893495 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 20 21:27:19.893561 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 20 21:27:19.893642 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 20 21:27:19.893716 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 20 21:27:19.893782 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 20 21:27:19.893845 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 20 21:27:19.893943 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 20 21:27:19.894006 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 20 21:27:19.894063 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 20 21:27:19.894120 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 20 21:27:19.894130 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 20 21:27:19.894137 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 20 21:27:19.894144 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 20 21:27:19.894150 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 20 21:27:19.894161 kernel: iommu: Default domain type: Translated
Mar 20 21:27:19.894168 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 20 21:27:19.894175 kernel: efivars: Registered efivars operations
Mar 20 21:27:19.894182 kernel: vgaarb: loaded
Mar 20 21:27:19.894188 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 20 21:27:19.894195 kernel: VFS: Disk quotas dquot_6.6.0
Mar 20 21:27:19.894202 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 20 21:27:19.894209 kernel: pnp: PnP ACPI init
Mar 20 21:27:19.894279 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 20 21:27:19.894291 kernel: pnp: PnP ACPI: found 1 devices
Mar 20 21:27:19.894298 kernel: NET: Registered PF_INET protocol family
Mar 20 21:27:19.894305 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 20 21:27:19.894312 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 20 21:27:19.894319 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 20 21:27:19.894326 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 20 21:27:19.894337 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 20 21:27:19.894344 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 20 21:27:19.894352 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 21:27:19.894359 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 21:27:19.894366 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 20 21:27:19.894373 kernel: PCI: CLS 0 bytes, default 64
Mar 20 21:27:19.894380 kernel: kvm [1]: HYP mode not available
Mar 20 21:27:19.894387 kernel: Initialise system trusted keyrings
Mar 20 21:27:19.894393 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 20 21:27:19.894400 kernel: Key type asymmetric registered
Mar 20 21:27:19.894407 kernel: Asymmetric key parser 'x509' registered
Mar 20 21:27:19.894414 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 20 21:27:19.894422 kernel: io scheduler mq-deadline registered
Mar 20 21:27:19.894429 kernel: io scheduler kyber registered
Mar 20 21:27:19.894436 kernel: io scheduler bfq registered
Mar 20 21:27:19.894443 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 20 21:27:19.894451 kernel: ACPI: button: Power Button [PWRB]
Mar 20 21:27:19.894458 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 20 21:27:19.894524 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 20 21:27:19.894534 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 20 21:27:19.894540 kernel: thunder_xcv, ver 1.0
Mar 20 21:27:19.894549 kernel: thunder_bgx, ver 1.0
Mar 20 21:27:19.894556 kernel: nicpf, ver 1.0
Mar 20 21:27:19.894562 kernel: nicvf, ver 1.0
Mar 20 21:27:19.894637 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 20 21:27:19.894705 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-20T21:27:19 UTC (1742506039)
Mar 20 21:27:19.894715 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 20 21:27:19.894722 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 20 21:27:19.894729 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 20 21:27:19.894739 kernel: watchdog: Hard watchdog permanently disabled
Mar 20 21:27:19.894745 kernel: NET: Registered PF_INET6 protocol family
Mar 20 21:27:19.894752 kernel: Segment Routing with IPv6
Mar 20 21:27:19.894759 kernel: In-situ OAM (IOAM) with IPv6
Mar 20 21:27:19.894766 kernel: NET: Registered PF_PACKET protocol family
Mar 20 21:27:19.894772 kernel: Key type dns_resolver registered
Mar 20 21:27:19.894779 kernel: registered taskstats version 1
Mar 20 21:27:19.894786 kernel: Loading compiled-in X.509 certificates
Mar 20 21:27:19.894807 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 3a6f52a6c751e8bbe3389ae978b265effe8f77af'
Mar 20 21:27:19.894816 kernel: Key type .fscrypt registered
Mar 20 21:27:19.894822 kernel: Key type fscrypt-provisioning registered
Mar 20 21:27:19.894829 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 20 21:27:19.894836 kernel: ima: Allocated hash algorithm: sha1
Mar 20 21:27:19.894843 kernel: ima: No architecture policies found
Mar 20 21:27:19.894850 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 20 21:27:19.894857 kernel: clk: Disabling unused clocks
Mar 20 21:27:19.894864 kernel: Freeing unused kernel memory: 38464K
Mar 20 21:27:19.894872 kernel: Run /init as init process
Mar 20 21:27:19.894878 kernel: with arguments:
Mar 20 21:27:19.894885 kernel: /init
Mar 20 21:27:19.894900 kernel: with environment:
Mar 20 21:27:19.894907 kernel: HOME=/
Mar 20 21:27:19.894914 kernel: TERM=linux
Mar 20 21:27:19.894920 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 20 21:27:19.894928 systemd[1]: Successfully made /usr/ read-only.
Mar 20 21:27:19.894938 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 20 21:27:19.894948 systemd[1]: Detected virtualization kvm.
Mar 20 21:27:19.894955 systemd[1]: Detected architecture arm64.
Mar 20 21:27:19.894962 systemd[1]: Running in initrd.
Mar 20 21:27:19.894969 systemd[1]: No hostname configured, using default hostname.
Mar 20 21:27:19.894977 systemd[1]: Hostname set to .
Mar 20 21:27:19.894984 systemd[1]: Initializing machine ID from VM UUID.
Mar 20 21:27:19.894992 systemd[1]: Queued start job for default target initrd.target.
Mar 20 21:27:19.894999 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 21:27:19.895008 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 21:27:19.895016 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 20 21:27:19.895024 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 20 21:27:19.895031 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 20 21:27:19.895039 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 20 21:27:19.895048 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 20 21:27:19.895057 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 20 21:27:19.895064 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 21:27:19.895072 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 20 21:27:19.895079 systemd[1]: Reached target paths.target - Path Units.
Mar 20 21:27:19.895087 systemd[1]: Reached target slices.target - Slice Units.
Mar 20 21:27:19.895094 systemd[1]: Reached target swap.target - Swaps.
Mar 20 21:27:19.895102 systemd[1]: Reached target timers.target - Timer Units.
Mar 20 21:27:19.895109 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 20 21:27:19.895116 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 20 21:27:19.895125 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 20 21:27:19.895133 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 20 21:27:19.895140 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 21:27:19.895147 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 20 21:27:19.895155 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 21:27:19.895163 systemd[1]: Reached target sockets.target - Socket Units.
Mar 20 21:27:19.895170 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 20 21:27:19.895177 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 20 21:27:19.895186 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 20 21:27:19.895193 systemd[1]: Starting systemd-fsck-usr.service...
Mar 20 21:27:19.895201 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 20 21:27:19.895208 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 20 21:27:19.895216 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:27:19.895223 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 21:27:19.895230 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 20 21:27:19.895240 systemd[1]: Finished systemd-fsck-usr.service.
Mar 20 21:27:19.895247 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 20 21:27:19.895271 systemd-journald[237]: Collecting audit messages is disabled.
Mar 20 21:27:19.895291 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 20 21:27:19.895299 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:27:19.895306 kernel: Bridge firewalling registered
Mar 20 21:27:19.895313 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 21:27:19.895322 systemd-journald[237]: Journal started
Mar 20 21:27:19.895341 systemd-journald[237]: Runtime Journal (/run/log/journal/03d868965463453ba53bc8da51655b16) is 5.9M, max 47.3M, 41.4M free.
Mar 20 21:27:19.872145 systemd-modules-load[238]: Inserted module 'overlay'
Mar 20 21:27:19.892431 systemd-modules-load[238]: Inserted module 'br_netfilter'
Mar 20 21:27:19.898112 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 20 21:27:19.906179 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 20 21:27:19.907193 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 20 21:27:19.911348 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 21:27:19.913615 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 20 21:27:19.919649 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 20 21:27:19.925096 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 21:27:19.926332 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 21:27:19.928392 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 21:27:19.930346 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 21:27:19.933736 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 20 21:27:19.936217 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 20 21:27:19.952032 dracut-cmdline[275]: dracut-dracut-053
Mar 20 21:27:19.954210 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0beb08f475de014f6ab4e06127ed84e918521fd470084f537ae9409b262d0ed3
Mar 20 21:27:19.972619 systemd-resolved[276]: Positive Trust Anchors:
Mar 20 21:27:19.972639 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 20 21:27:19.972670 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 20 21:27:19.977463 systemd-resolved[276]: Defaulting to hostname 'linux'.
Mar 20 21:27:19.978448 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 20 21:27:19.980207 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 20 21:27:20.015910 kernel: SCSI subsystem initialized
Mar 20 21:27:20.020907 kernel: Loading iSCSI transport class v2.0-870.
Mar 20 21:27:20.028927 kernel: iscsi: registered transport (tcp)
Mar 20 21:27:20.040075 kernel: iscsi: registered transport (qla4xxx)
Mar 20 21:27:20.040108 kernel: QLogic iSCSI HBA Driver
Mar 20 21:27:20.079131 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 20 21:27:20.081489 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 20 21:27:20.115277 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 20 21:27:20.115336 kernel: device-mapper: uevent: version 1.0.3
Mar 20 21:27:20.116129 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 20 21:27:20.162923 kernel: raid6: neonx8 gen() 15782 MB/s
Mar 20 21:27:20.179911 kernel: raid6: neonx4 gen() 15798 MB/s
Mar 20 21:27:20.196930 kernel: raid6: neonx2 gen() 13214 MB/s
Mar 20 21:27:20.213912 kernel: raid6: neonx1 gen() 10560 MB/s
Mar 20 21:27:20.230919 kernel: raid6: int64x8 gen() 6786 MB/s
Mar 20 21:27:20.247917 kernel: raid6: int64x4 gen() 7350 MB/s
Mar 20 21:27:20.264911 kernel: raid6: int64x2 gen() 6111 MB/s
Mar 20 21:27:20.281904 kernel: raid6: int64x1 gen() 5052 MB/s
Mar 20 21:27:20.281929 kernel: raid6: using algorithm neonx4 gen() 15798 MB/s
Mar 20 21:27:20.298922 kernel: raid6: .... xor() 12392 MB/s, rmw enabled
Mar 20 21:27:20.298943 kernel: raid6: using neon recovery algorithm
Mar 20 21:27:20.303909 kernel: xor: measuring software checksum speed
Mar 20 21:27:20.303931 kernel: 8regs : 21596 MB/sec
Mar 20 21:27:20.305298 kernel: 32regs : 20464 MB/sec
Mar 20 21:27:20.305321 kernel: arm64_neon : 27936 MB/sec
Mar 20 21:27:20.305331 kernel: xor: using function: arm64_neon (27936 MB/sec)
Mar 20 21:27:20.359921 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 20 21:27:20.369754 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 20 21:27:20.372268 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 21:27:20.399125 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Mar 20 21:27:20.407158 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 21:27:20.409287 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 20 21:27:20.440732 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Mar 20 21:27:20.465587 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 20 21:27:20.467856 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 20 21:27:20.523056 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 20 21:27:20.530101 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 20 21:27:20.549018 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 20 21:27:20.550442 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 20 21:27:20.552056 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 21:27:20.554219 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 20 21:27:20.557165 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 20 21:27:20.573132 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Mar 20 21:27:20.585081 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 20 21:27:20.585172 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 20 21:27:20.585183 kernel: GPT:9289727 != 19775487
Mar 20 21:27:20.585192 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 20 21:27:20.585207 kernel: GPT:9289727 != 19775487
Mar 20 21:27:20.585216 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 20 21:27:20.585224 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 20 21:27:20.578104 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 20 21:27:20.586349 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 20 21:27:20.586455 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 21:27:20.588442 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 21:27:20.589336 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 20 21:27:20.589535 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:27:20.593287 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:27:20.594958 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:27:20.607723 kernel: BTRFS: device fsid 892d57a1-84f1-442c-90df-b8383db1b8c3 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (508)
Mar 20 21:27:20.607761 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (521)
Mar 20 21:27:20.617234 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 20 21:27:20.618647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:27:20.636227 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 20 21:27:20.643855 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 20 21:27:20.650202 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 20 21:27:20.651410 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 20 21:27:20.655250 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 20 21:27:20.657617 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 21:27:20.674370 disk-uuid[550]: Primary Header is updated.
Mar 20 21:27:20.674370 disk-uuid[550]: Secondary Entries is updated. Mar 20 21:27:20.674370 disk-uuid[550]: Secondary Header is updated. Mar 20 21:27:20.677387 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:27:20.683925 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:27:21.691568 disk-uuid[555]: The operation has completed successfully. Mar 20 21:27:21.692718 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:27:21.717464 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 20 21:27:21.717558 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 20 21:27:21.744266 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 20 21:27:21.755878 sh[570]: Success Mar 20 21:27:21.767922 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 20 21:27:21.798885 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 20 21:27:21.801441 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 20 21:27:21.815402 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 20 21:27:21.825489 kernel: BTRFS info (device dm-0): first mount of filesystem 892d57a1-84f1-442c-90df-b8383db1b8c3 Mar 20 21:27:21.825560 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 20 21:27:21.825610 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 20 21:27:21.825634 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 20 21:27:21.825668 kernel: BTRFS info (device dm-0): using free space tree Mar 20 21:27:21.829741 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 20 21:27:21.830882 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Mar 20 21:27:21.831651 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 20 21:27:21.833512 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 20 21:27:21.855202 kernel: BTRFS info (device vda6): first mount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:27:21.855247 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 21:27:21.855257 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:27:21.858948 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:27:21.861914 kernel: BTRFS info (device vda6): last unmount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:27:21.865916 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 20 21:27:21.867829 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 20 21:27:21.931769 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 21:27:21.934467 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 20 21:27:21.976087 systemd-networkd[759]: lo: Link UP Mar 20 21:27:21.976096 systemd-networkd[759]: lo: Gained carrier Mar 20 21:27:21.976884 systemd-networkd[759]: Enumeration completed Mar 20 21:27:21.977281 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:27:21.977284 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 21:27:21.977691 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 20 21:27:21.977883 systemd-networkd[759]: eth0: Link UP Mar 20 21:27:21.977886 systemd-networkd[759]: eth0: Gained carrier Mar 20 21:27:21.977905 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:27:21.978680 systemd[1]: Reached target network.target - Network. Mar 20 21:27:21.985057 ignition[664]: Ignition 2.20.0 Mar 20 21:27:21.985064 ignition[664]: Stage: fetch-offline Mar 20 21:27:21.985100 ignition[664]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:27:21.985108 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:27:21.985308 ignition[664]: parsed url from cmdline: "" Mar 20 21:27:21.985312 ignition[664]: no config URL provided Mar 20 21:27:21.985316 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Mar 20 21:27:21.985323 ignition[664]: no config at "/usr/lib/ignition/user.ign" Mar 20 21:27:21.985347 ignition[664]: op(1): [started] loading QEMU firmware config module Mar 20 21:27:21.985351 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 20 21:27:21.998856 ignition[664]: op(1): [finished] loading QEMU firmware config module Mar 20 21:27:21.999940 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 21:27:22.038514 ignition[664]: parsing config with SHA512: 9b01ae91d0e8825ff0f71b2977d204f94598766f08254b83fcf2b4c9c6be299a570907a7feb801dc1333ba4ce595a4932982b98619e68a8e02ec058ac7183d19 Mar 20 21:27:22.044279 unknown[664]: fetched base config from "system" Mar 20 21:27:22.044294 unknown[664]: fetched user config from "qemu" Mar 20 21:27:22.047144 ignition[664]: fetch-offline: fetch-offline passed Mar 20 21:27:22.047233 ignition[664]: Ignition finished successfully Mar 20 21:27:22.049402 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 20 21:27:22.050824 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 20 21:27:22.051615 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 20 21:27:22.079009 ignition[773]: Ignition 2.20.0 Mar 20 21:27:22.079019 ignition[773]: Stage: kargs Mar 20 21:27:22.079185 ignition[773]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:27:22.079194 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:27:22.080043 ignition[773]: kargs: kargs passed Mar 20 21:27:22.080093 ignition[773]: Ignition finished successfully Mar 20 21:27:22.081889 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 20 21:27:22.083879 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 20 21:27:22.113098 ignition[782]: Ignition 2.20.0 Mar 20 21:27:22.113120 ignition[782]: Stage: disks Mar 20 21:27:22.113274 ignition[782]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:27:22.113284 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:27:22.114124 ignition[782]: disks: disks passed Mar 20 21:27:22.114169 ignition[782]: Ignition finished successfully Mar 20 21:27:22.116145 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 20 21:27:22.117036 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 20 21:27:22.118621 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 20 21:27:22.120531 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 21:27:22.122290 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 21:27:22.123557 systemd[1]: Reached target basic.target - Basic System. Mar 20 21:27:22.126028 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 20 21:27:22.154825 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 20 21:27:22.158601 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 20 21:27:22.161087 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 20 21:27:22.221918 kernel: EXT4-fs (vda9): mounted filesystem 78c526d9-91af-4481-a769-6d3064caa829 r/w with ordered data mode. Quota mode: none. Mar 20 21:27:22.227314 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 20 21:27:22.228635 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 20 21:27:22.231716 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 21:27:22.233491 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 20 21:27:22.234449 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 20 21:27:22.234544 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 20 21:27:22.234592 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 21:27:22.245950 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 20 21:27:22.248191 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 20 21:27:22.252558 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (800) Mar 20 21:27:22.252602 kernel: BTRFS info (device vda6): first mount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:27:22.252613 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 21:27:22.252623 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:27:22.256922 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:27:22.262000 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 20 21:27:22.302533 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Mar 20 21:27:22.305866 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Mar 20 21:27:22.310080 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Mar 20 21:27:22.313785 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Mar 20 21:27:22.387108 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 20 21:27:22.390743 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 20 21:27:22.392423 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 20 21:27:22.410935 kernel: BTRFS info (device vda6): last unmount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:27:22.432096 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 20 21:27:22.441805 ignition[915]: INFO : Ignition 2.20.0 Mar 20 21:27:22.441805 ignition[915]: INFO : Stage: mount Mar 20 21:27:22.443439 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:27:22.443439 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:27:22.443439 ignition[915]: INFO : mount: mount passed Mar 20 21:27:22.443439 ignition[915]: INFO : Ignition finished successfully Mar 20 21:27:22.445960 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 20 21:27:22.448266 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 20 21:27:22.953978 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 20 21:27:22.955443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 20 21:27:22.975917 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (929) Mar 20 21:27:22.978217 kernel: BTRFS info (device vda6): first mount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:27:22.978240 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 21:27:22.978250 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:27:22.979910 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:27:22.981130 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 20 21:27:23.009794 ignition[946]: INFO : Ignition 2.20.0 Mar 20 21:27:23.009794 ignition[946]: INFO : Stage: files Mar 20 21:27:23.011374 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:27:23.011374 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:27:23.011374 ignition[946]: DEBUG : files: compiled without relabeling support, skipping Mar 20 21:27:23.014668 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 20 21:27:23.014668 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 20 21:27:23.017299 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 20 21:27:23.017299 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 20 21:27:23.017299 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 20 21:27:23.016684 unknown[946]: wrote ssh authorized keys file for user: core Mar 20 21:27:23.022233 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 20 21:27:23.022233 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 20 21:27:23.064411 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 20 21:27:23.199006 systemd-networkd[759]: eth0: Gained IPv6LL Mar 20 21:27:23.477990 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 20 21:27:23.479544 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 20 21:27:23.479544 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 20 21:27:23.815632 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 20 21:27:23.875199 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 20 21:27:23.876675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Mar 20 21:27:24.148493 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 20 21:27:24.415120 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 20 21:27:24.415120 ignition[946]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 20 21:27:24.417887 ignition[946]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 20 21:27:24.417887 ignition[946]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 20 21:27:24.417887 ignition[946]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 20 21:27:24.417887 ignition[946]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 20 21:27:24.417887 ignition[946]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 21:27:24.417887 ignition[946]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 21:27:24.417887 ignition[946]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 20 21:27:24.417887 ignition[946]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 20 21:27:24.433851 ignition[946]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 20 21:27:24.437135 ignition[946]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 20 21:27:24.439056 ignition[946]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 20 21:27:24.439056 ignition[946]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 20 21:27:24.439056 ignition[946]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 20 21:27:24.439056 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 20 21:27:24.439056 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 20 21:27:24.439056 ignition[946]: INFO : files: files passed Mar 20 21:27:24.439056 ignition[946]: INFO : Ignition finished successfully
Mar 20 21:27:24.441249 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 20 21:27:24.444055 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 20 21:27:24.445457 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 20 21:27:24.456837 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 20 21:27:24.458005 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory Mar 20 21:27:24.458267 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 20 21:27:24.462699 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 21:27:24.462699 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 20 21:27:24.465054 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 21:27:24.465137 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 21:27:24.467454 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 20 21:27:24.469249 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 20 21:27:24.524802 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 20 21:27:24.524923 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 20 21:27:24.526822 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 20 21:27:24.528397 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 20 21:27:24.529931 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 20 21:27:24.530686 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 20 21:27:24.553874 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 21:27:24.555945 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 20 21:27:24.582523 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 20 21:27:24.583488 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:27:24.585299 systemd[1]: Stopped target timers.target - Timer Units. Mar 20 21:27:24.587069 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 20 21:27:24.587190 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 21:27:24.589674 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 20 21:27:24.590712 systemd[1]: Stopped target basic.target - Basic System. Mar 20 21:27:24.592378 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 20 21:27:24.594151 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 21:27:24.595600 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 20 21:27:24.597234 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 20 21:27:24.598813 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 21:27:24.600569 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 20 21:27:24.602294 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 20 21:27:24.604182 systemd[1]: Stopped target swap.target - Swaps. Mar 20 21:27:24.605579 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 20 21:27:24.605715 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 20 21:27:24.607823 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:27:24.609516 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 20 21:27:24.611131 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 20 21:27:24.611996 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:27:24.612926 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 20 21:27:24.613043 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 20 21:27:24.615546 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 20 21:27:24.615674 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 21:27:24.617671 systemd[1]: Stopped target paths.target - Path Units. Mar 20 21:27:24.618981 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 20 21:27:24.619964 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 21:27:24.621625 systemd[1]: Stopped target slices.target - Slice Units. Mar 20 21:27:24.623211 systemd[1]: Stopped target sockets.target - Socket Units. Mar 20 21:27:24.624593 systemd[1]: iscsid.socket: Deactivated successfully. Mar 20 21:27:24.624687 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 21:27:24.626188 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 20 21:27:24.626263 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 21:27:24.628213 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 20 21:27:24.628322 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 21:27:24.629796 systemd[1]: ignition-files.service: Deactivated successfully. Mar 20 21:27:24.629900 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 20 21:27:24.632234 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 20 21:27:24.634222 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Mar 20 21:27:24.634943 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 20 21:27:24.635058 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:27:24.636653 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 20 21:27:24.636752 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 21:27:24.651112 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 20 21:27:24.651195 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 20 21:27:24.659085 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 20 21:27:24.660828 ignition[1001]: INFO : Ignition 2.20.0 Mar 20 21:27:24.660828 ignition[1001]: INFO : Stage: umount Mar 20 21:27:24.662426 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:27:24.662426 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:27:24.662426 ignition[1001]: INFO : umount: umount passed Mar 20 21:27:24.664113 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 20 21:27:24.664231 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 20 21:27:24.665497 systemd[1]: Stopped target network.target - Network. Mar 20 21:27:24.666016 ignition[1001]: INFO : Ignition finished successfully Mar 20 21:27:24.666624 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 20 21:27:24.666685 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 20 21:27:24.668347 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 20 21:27:24.668386 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 20 21:27:24.670095 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 20 21:27:24.670145 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 20 21:27:24.671660 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 20 21:27:24.671699 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 20 21:27:24.673283 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 20 21:27:24.675673 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 20 21:27:24.682423 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 20 21:27:24.683935 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 20 21:27:24.686671 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 20 21:27:24.686943 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 20 21:27:24.686980 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:27:24.690385 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 20 21:27:24.690612 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 20 21:27:24.690723 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 20 21:27:24.693342 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 20 21:27:24.693808 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 20 21:27:24.693871 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:27:24.696203 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 20 21:27:24.697620 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 20 21:27:24.697679 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 21:27:24.699723 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 20 21:27:24.699762 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:27:24.702559 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Mar 20 21:27:24.702601 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 20 21:27:24.704384 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 21:27:24.707130 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 20 21:27:24.725826 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 20 21:27:24.725978 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 21:27:24.729072 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 20 21:27:24.729166 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 20 21:27:24.731505 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 20 21:27:24.731573 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 20 21:27:24.733219 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 20 21:27:24.733249 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 21:27:24.734710 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 20 21:27:24.734753 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 20 21:27:24.737118 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 20 21:27:24.737159 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 20 21:27:24.739582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 20 21:27:24.739625 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 21:27:24.741320 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 20 21:27:24.741361 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 20 21:27:24.743594 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 20 21:27:24.744458 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 20 21:27:24.744529 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 21:27:24.747312 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 20 21:27:24.747351 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 20 21:27:24.749127 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 20 21:27:24.749169 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 21:27:24.750990 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 20 21:27:24.751027 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:27:24.754245 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 20 21:27:24.754325 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 20 21:27:24.757783 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 20 21:27:24.757880 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 20 21:27:24.759411 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 20 21:27:24.761753 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 20 21:27:24.770312 systemd[1]: Switching root.
Mar 20 21:27:24.799702 systemd-journald[237]: Journal stopped
Mar 20 21:27:25.525773 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Mar 20 21:27:25.525832 kernel: SELinux: policy capability network_peer_controls=1
Mar 20 21:27:25.525844 kernel: SELinux: policy capability open_perms=1
Mar 20 21:27:25.525862 kernel: SELinux: policy capability extended_socket_class=1
Mar 20 21:27:25.525871 kernel: SELinux: policy capability always_check_network=0
Mar 20 21:27:25.525881 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 20 21:27:25.525906 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 20 21:27:25.525919 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 20 21:27:25.525928 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 20 21:27:25.525938 kernel: audit: type=1403 audit(1742506044.956:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 20 21:27:25.525953 systemd[1]: Successfully loaded SELinux policy in 34.537ms.
Mar 20 21:27:25.525965 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.313ms.
Mar 20 21:27:25.525979 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 20 21:27:25.525989 systemd[1]: Detected virtualization kvm.
Mar 20 21:27:25.526000 systemd[1]: Detected architecture arm64.
Mar 20 21:27:25.526010 systemd[1]: Detected first boot.
Mar 20 21:27:25.526020 systemd[1]: Initializing machine ID from VM UUID.
Mar 20 21:27:25.526030 kernel: NET: Registered PF_VSOCK protocol family
Mar 20 21:27:25.526039 zram_generator::config[1046]: No configuration found.
Mar 20 21:27:25.526051 systemd[1]: Populated /etc with preset unit settings.
Mar 20 21:27:25.526063 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 20 21:27:25.526074 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 20 21:27:25.526084 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 20 21:27:25.526098 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 20 21:27:25.526110 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 20 21:27:25.526121 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 20 21:27:25.526131 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 20 21:27:25.526142 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 20 21:27:25.526152 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 20 21:27:25.526165 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 20 21:27:25.526175 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 20 21:27:25.526186 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 20 21:27:25.526196 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 21:27:25.526207 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 21:27:25.526218 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 20 21:27:25.526228 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 20 21:27:25.526239 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 20 21:27:25.526251 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 20 21:27:25.526262 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 20 21:27:25.526272 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 21:27:25.526283 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 20 21:27:25.526294 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 20 21:27:25.526304 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 20 21:27:25.526314 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 20 21:27:25.526325 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 21:27:25.526338 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 20 21:27:25.526349 systemd[1]: Reached target slices.target - Slice Units.
Mar 20 21:27:25.526361 systemd[1]: Reached target swap.target - Swaps.
Mar 20 21:27:25.526371 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 20 21:27:25.526382 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 20 21:27:25.526393 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 20 21:27:25.526403 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 21:27:25.526414 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 20 21:27:25.526424 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 21:27:25.526435 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 20 21:27:25.526447 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 20 21:27:25.526457 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 20 21:27:25.526468 systemd[1]: Mounting media.mount - External Media Directory...
Mar 20 21:27:25.526478 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 20 21:27:25.526488 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 20 21:27:25.526498 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 20 21:27:25.526510 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 20 21:27:25.526520 systemd[1]: Reached target machines.target - Containers.
Mar 20 21:27:25.526531 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 20 21:27:25.526542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 21:27:25.526553 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 20 21:27:25.526565 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 20 21:27:25.526575 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 21:27:25.526585 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 20 21:27:25.526595 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 21:27:25.526605 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 20 21:27:25.526615 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 21:27:25.526634 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 20 21:27:25.526646 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 20 21:27:25.526656 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 20 21:27:25.526667 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 20 21:27:25.526677 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 20 21:27:25.526687 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 21:27:25.526698 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 20 21:27:25.526708 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 20 21:27:25.526720 kernel: fuse: init (API version 7.39)
Mar 20 21:27:25.526730 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 20 21:27:25.526740 kernel: ACPI: bus type drm_connector registered
Mar 20 21:27:25.526749 kernel: loop: module loaded
Mar 20 21:27:25.526759 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 20 21:27:25.526769 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 20 21:27:25.526779 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 20 21:27:25.526792 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 20 21:27:25.526802 systemd[1]: Stopped verity-setup.service.
Mar 20 21:27:25.526814 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 20 21:27:25.526841 systemd-journald[1118]: Collecting audit messages is disabled.
Mar 20 21:27:25.526862 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 20 21:27:25.526875 systemd[1]: Mounted media.mount - External Media Directory.
Mar 20 21:27:25.526885 systemd-journald[1118]: Journal started
Mar 20 21:27:25.526914 systemd-journald[1118]: Runtime Journal (/run/log/journal/03d868965463453ba53bc8da51655b16) is 5.9M, max 47.3M, 41.4M free.
Mar 20 21:27:25.338710 systemd[1]: Queued start job for default target multi-user.target.
Mar 20 21:27:25.351664 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 20 21:27:25.352059 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 20 21:27:25.528941 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 20 21:27:25.529492 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 20 21:27:25.530426 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 20 21:27:25.531450 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 20 21:27:25.534142 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 20 21:27:25.535281 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 21:27:25.536478 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 20 21:27:25.536655 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 20 21:27:25.537823 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 21:27:25.538032 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 21:27:25.539210 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 20 21:27:25.539367 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 20 21:27:25.540401 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 21:27:25.540565 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 21:27:25.541817 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 20 21:27:25.542174 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 20 21:27:25.543255 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 21:27:25.543413 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 21:27:25.544566 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 20 21:27:25.545943 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 20 21:27:25.547179 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 20 21:27:25.549324 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 20 21:27:25.561344 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 20 21:27:25.563972 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 20 21:27:25.565843 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 20 21:27:25.566859 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 20 21:27:25.566905 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 20 21:27:25.568648 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 20 21:27:25.578654 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 20 21:27:25.580518 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 20 21:27:25.581645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 21:27:25.582785 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 20 21:27:25.584447 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 20 21:27:25.585504 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 20 21:27:25.586291 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 20 21:27:25.587194 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 20 21:27:25.591307 systemd-journald[1118]: Time spent on flushing to /var/log/journal/03d868965463453ba53bc8da51655b16 is 18.623ms for 869 entries.
Mar 20 21:27:25.591307 systemd-journald[1118]: System Journal (/var/log/journal/03d868965463453ba53bc8da51655b16) is 8M, max 195.6M, 187.6M free.
Mar 20 21:27:25.622970 systemd-journald[1118]: Received client request to flush runtime journal.
Mar 20 21:27:25.623056 kernel: loop0: detected capacity change from 0 to 103832
Mar 20 21:27:25.623088 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 20 21:27:25.588052 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 21:27:25.590406 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 20 21:27:25.593321 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 20 21:27:25.597918 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 20 21:27:25.599471 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 20 21:27:25.601408 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 20 21:27:25.602833 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 20 21:27:25.608798 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 20 21:27:25.627929 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 20 21:27:25.631493 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 20 21:27:25.634033 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 21:27:25.644390 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 20 21:27:25.648569 kernel: loop1: detected capacity change from 0 to 126448
Mar 20 21:27:25.647140 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 20 21:27:25.650177 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 20 21:27:25.650460 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Mar 20 21:27:25.650478 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Mar 20 21:27:25.655333 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 20 21:27:25.665033 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 20 21:27:25.675072 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 20 21:27:25.684924 kernel: loop2: detected capacity change from 0 to 189592
Mar 20 21:27:25.696478 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 20 21:27:25.700399 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 20 21:27:25.715934 kernel: loop3: detected capacity change from 0 to 103832
Mar 20 21:27:25.720907 kernel: loop4: detected capacity change from 0 to 126448
Mar 20 21:27:25.724381 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Mar 20 21:27:25.724396 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Mar 20 21:27:25.726922 kernel: loop5: detected capacity change from 0 to 189592
Mar 20 21:27:25.729274 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 21:27:25.732512 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 20 21:27:25.732918 (sd-merge)[1191]: Merged extensions into '/usr'.
Mar 20 21:27:25.736992 systemd[1]: Reload requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 20 21:27:25.737006 systemd[1]: Reloading...
Mar 20 21:27:25.800919 zram_generator::config[1221]: No configuration found.
Mar 20 21:27:25.860977 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 20 21:27:25.891395 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 21:27:25.941582 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 20 21:27:25.941987 systemd[1]: Reloading finished in 204 ms.
Mar 20 21:27:25.969922 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 20 21:27:25.971073 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 20 21:27:25.987099 systemd[1]: Starting ensure-sysext.service...
Mar 20 21:27:25.988845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 20 21:27:25.998997 systemd[1]: Reload requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
Mar 20 21:27:25.999013 systemd[1]: Reloading...
Mar 20 21:27:26.009415 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 20 21:27:26.009605 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 20 21:27:26.010251 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 20 21:27:26.010459 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Mar 20 21:27:26.010504 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Mar 20 21:27:26.013451 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Mar 20 21:27:26.013553 systemd-tmpfiles[1258]: Skipping /boot
Mar 20 21:27:26.022394 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Mar 20 21:27:26.022501 systemd-tmpfiles[1258]: Skipping /boot
Mar 20 21:27:26.056914 zram_generator::config[1284]: No configuration found.
Mar 20 21:27:26.136355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 21:27:26.186202 systemd[1]: Reloading finished in 186 ms.
Mar 20 21:27:26.197725 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 20 21:27:26.214094 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 21:27:26.221024 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 20 21:27:26.223161 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 20 21:27:26.232131 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 20 21:27:26.238176 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 20 21:27:26.240445 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 21:27:26.242593 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 20 21:27:26.246708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 21:27:26.257228 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 21:27:26.259420 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 21:27:26.262197 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 21:27:26.263246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 21:27:26.263358 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 21:27:26.266292 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 20 21:27:26.270072 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 20 21:27:26.272496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 21:27:26.272825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 21:27:26.275269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 21:27:26.275421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 21:27:26.282359 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 21:27:26.282563 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 21:27:26.291311 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 20 21:27:26.293983 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Mar 20 21:27:26.295555 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 21:27:26.298405 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 21:27:26.310304 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 20 21:27:26.313854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 21:27:26.320237 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 21:27:26.321358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 21:27:26.321478 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 21:27:26.324982 augenrules[1384]: No rules
Mar 20 21:27:26.331311 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 20 21:27:26.333373 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 21:27:26.334968 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 20 21:27:26.336490 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 20 21:27:26.336712 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 20 21:27:26.338598 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 20 21:27:26.341634 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 21:27:26.341963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 21:27:26.343665 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 20 21:27:26.343850 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 20 21:27:26.345998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 21:27:26.346209 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 21:27:26.347884 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 21:27:26.348097 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 21:27:26.356059 systemd[1]: Finished ensure-sysext.service.
Mar 20 21:27:26.357247 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 20 21:27:26.368510 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 20 21:27:26.372166 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 20 21:27:26.373316 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 20 21:27:26.373377 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 20 21:27:26.376165 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 20 21:27:26.378959 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 20 21:27:26.386950 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1359)
Mar 20 21:27:26.414213 systemd-resolved[1327]: Positive Trust Anchors:
Mar 20 21:27:26.414230 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 20 21:27:26.414261 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 20 21:27:26.420264 systemd-resolved[1327]: Defaulting to hostname 'linux'.
Mar 20 21:27:26.423990 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 20 21:27:26.425179 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 20 21:27:26.431379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 20 21:27:26.433942 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 20 21:27:26.452590 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 20 21:27:26.454400 systemd[1]: Reached target time-set.target - System Time Set.
Mar 20 21:27:26.461604 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 20 21:27:26.469590 systemd-networkd[1401]: lo: Link UP
Mar 20 21:27:26.469599 systemd-networkd[1401]: lo: Gained carrier
Mar 20 21:27:26.470438 systemd-networkd[1401]: Enumeration completed
Mar 20 21:27:26.470542 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 20 21:27:26.472072 systemd[1]: Reached target network.target - Network.
Mar 20 21:27:26.474843 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 20 21:27:26.477746 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 20 21:27:26.478067 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 20 21:27:26.478071 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 20 21:27:26.480904 systemd-networkd[1401]: eth0: Link UP
Mar 20 21:27:26.480913 systemd-networkd[1401]: eth0: Gained carrier
Mar 20 21:27:26.480927 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 20 21:27:26.498569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:27:26.501027 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 21:27:26.501551 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Mar 20 21:27:26.502030 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 20 21:27:26.502075 systemd-timesyncd[1402]: Initial clock synchronization to Thu 2025-03-20 21:27:26.146847 UTC. Mar 20 21:27:26.511377 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 20 21:27:26.513386 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 20 21:27:26.518336 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 20 21:27:26.533935 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 21:27:26.554582 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:27:26.573175 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 20 21:27:26.574578 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:27:26.575751 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 21:27:26.576963 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 20 21:27:26.578214 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 20 21:27:26.579659 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 20 21:27:26.580919 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 20 21:27:26.582151 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Mar 20 21:27:26.583356 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 20 21:27:26.583391 systemd[1]: Reached target paths.target - Path Units. Mar 20 21:27:26.584442 systemd[1]: Reached target timers.target - Timer Units. Mar 20 21:27:26.586227 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 20 21:27:26.588581 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 20 21:27:26.591789 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 20 21:27:26.593205 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 20 21:27:26.594457 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 20 21:27:26.597775 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 20 21:27:26.599246 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 20 21:27:26.601509 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 20 21:27:26.603118 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 20 21:27:26.604212 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 21:27:26.605208 systemd[1]: Reached target basic.target - Basic System. Mar 20 21:27:26.606167 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 20 21:27:26.606201 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 20 21:27:26.607116 systemd[1]: Starting containerd.service - containerd container runtime... Mar 20 21:27:26.609931 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 21:27:26.609060 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Mar 20 21:27:26.613906 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 20 21:27:26.616167 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 20 21:27:26.619975 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 20 21:27:26.621646 jq[1432]: false Mar 20 21:27:26.621009 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 20 21:27:26.624082 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 20 21:27:26.632354 dbus-daemon[1431]: [system] SELinux support is enabled Mar 20 21:27:26.634295 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 20 21:27:26.637863 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 20 21:27:26.640935 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 20 21:27:26.642848 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 20 21:27:26.643255 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 20 21:27:26.648050 systemd[1]: Starting update-engine.service - Update Engine... Mar 20 21:27:26.653233 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 20 21:27:26.654701 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 20 21:27:26.659322 extend-filesystems[1433]: Found loop3 Mar 20 21:27:26.660592 extend-filesystems[1433]: Found loop4 Mar 20 21:27:26.660592 extend-filesystems[1433]: Found loop5 Mar 20 21:27:26.660592 extend-filesystems[1433]: Found vda Mar 20 21:27:26.660592 extend-filesystems[1433]: Found vda1 Mar 20 21:27:26.660592 extend-filesystems[1433]: Found vda2 Mar 20 21:27:26.660592 extend-filesystems[1433]: Found vda3 Mar 20 21:27:26.660592 extend-filesystems[1433]: Found usr Mar 20 21:27:26.660592 extend-filesystems[1433]: Found vda4 Mar 20 21:27:26.660592 extend-filesystems[1433]: Found vda6 Mar 20 21:27:26.660592 extend-filesystems[1433]: Found vda7 Mar 20 21:27:26.660592 extend-filesystems[1433]: Found vda9 Mar 20 21:27:26.660592 extend-filesystems[1433]: Checking size of /dev/vda9 Mar 20 21:27:26.659785 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 20 21:27:26.671572 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 20 21:27:26.680591 jq[1449]: true Mar 20 21:27:26.671778 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 20 21:27:26.672052 systemd[1]: motdgen.service: Deactivated successfully. Mar 20 21:27:26.672206 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 20 21:27:26.684273 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 20 21:27:26.684452 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 20 21:27:26.688547 extend-filesystems[1433]: Resized partition /dev/vda9 Mar 20 21:27:26.698151 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 20 21:27:26.703689 jq[1456]: true Mar 20 21:27:26.707021 update_engine[1442]: I20250320 21:27:26.706627 1442 main.cc:92] Flatcar Update Engine starting Mar 20 21:27:26.709008 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1361) Mar 20 21:27:26.713056 extend-filesystems[1458]: resize2fs 1.47.2 (1-Jan-2025) Mar 20 21:27:26.718988 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 20 21:27:26.723161 update_engine[1442]: I20250320 21:27:26.723096 1442 update_check_scheduler.cc:74] Next update check in 4m24s Mar 20 21:27:26.730077 systemd[1]: Started update-engine.service - Update Engine. Mar 20 21:27:26.737423 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 20 21:27:26.737456 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 20 21:27:26.737823 tar[1453]: linux-arm64/helm Mar 20 21:27:26.742139 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 20 21:27:26.742166 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 20 21:27:26.746944 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 20 21:27:26.747025 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 20 21:27:26.759926 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (Power Button) Mar 20 21:27:26.770262 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 20 21:27:26.770262 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 20 21:27:26.770262 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 20 21:27:26.760149 systemd-logind[1440]: New seat seat0. Mar 20 21:27:26.778312 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Mar 20 21:27:26.778390 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Mar 20 21:27:26.767874 systemd[1]: Started systemd-logind.service - User Login Management. Mar 20 21:27:26.769762 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 20 21:27:26.771945 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 20 21:27:26.779988 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 20 21:27:26.783393 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Mar 20 21:27:26.828496 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 20 21:27:26.913928 containerd[1457]: time="2025-03-20T21:27:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 20 21:27:26.915104 containerd[1457]: time="2025-03-20T21:27:26.915061240Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 20 21:27:26.924688 containerd[1457]: time="2025-03-20T21:27:26.924654880Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.68µs" Mar 20 21:27:26.924688 containerd[1457]: time="2025-03-20T21:27:26.924681960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 20 21:27:26.924763 containerd[1457]: time="2025-03-20T21:27:26.924700440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 20 21:27:26.924860 containerd[1457]: time="2025-03-20T21:27:26.924833240Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 20 21:27:26.924913 containerd[1457]: time="2025-03-20T21:27:26.924861160Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 20 21:27:26.924913 containerd[1457]: time="2025-03-20T21:27:26.924886720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:27:26.925003 containerd[1457]: time="2025-03-20T21:27:26.924954600Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:27:26.925003 containerd[1457]: time="2025-03-20T21:27:26.924971240Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:27:26.925277 containerd[1457]: time="2025-03-20T21:27:26.925234120Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:27:26.925277 containerd[1457]: time="2025-03-20T21:27:26.925256120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:27:26.925277 containerd[1457]: time="2025-03-20T21:27:26.925267560Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:27:26.925277 containerd[1457]: time="2025-03-20T21:27:26.925275360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 20 21:27:26.925358 containerd[1457]: time="2025-03-20T21:27:26.925348120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 20 21:27:26.925543 containerd[1457]: time="2025-03-20T21:27:26.925523160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 21:27:26.925570 containerd[1457]: time="2025-03-20T21:27:26.925557440Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 21:27:26.925570 containerd[1457]: time="2025-03-20T21:27:26.925567480Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 20 21:27:26.925622 containerd[1457]: time="2025-03-20T21:27:26.925599960Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 20 21:27:26.926087 containerd[1457]: time="2025-03-20T21:27:26.925853600Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 20 21:27:26.926087 containerd[1457]: time="2025-03-20T21:27:26.925955560Z" level=info msg="metadata content store policy set" policy=shared Mar 20 21:27:26.929345 containerd[1457]: time="2025-03-20T21:27:26.929318240Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 20 21:27:26.929453 containerd[1457]: time="2025-03-20T21:27:26.929436960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 20 21:27:26.929542 containerd[1457]: time="2025-03-20T21:27:26.929525760Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 20 21:27:26.929685 containerd[1457]: time="2025-03-20T21:27:26.929666440Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 20 21:27:26.929772 containerd[1457]: time="2025-03-20T21:27:26.929755480Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 20 21:27:26.929862 containerd[1457]: time="2025-03-20T21:27:26.929846760Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.929921360Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.929941080Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.929952360Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.929963280Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.929972360Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.929985880Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.930095600Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.930116160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.930128520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.930139040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.930149400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.930160320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.930172680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 20 21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.930182760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 20 
21:27:26.930637 containerd[1457]: time="2025-03-20T21:27:26.930195040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 20 21:27:26.930911 containerd[1457]: time="2025-03-20T21:27:26.930205480Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 20 21:27:26.930911 containerd[1457]: time="2025-03-20T21:27:26.930216160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 20 21:27:26.930911 containerd[1457]: time="2025-03-20T21:27:26.930462680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 20 21:27:26.930911 containerd[1457]: time="2025-03-20T21:27:26.930476920Z" level=info msg="Start snapshots syncer" Mar 20 21:27:26.930911 containerd[1457]: time="2025-03-20T21:27:26.930510520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 20 21:27:26.931381 containerd[1457]: time="2025-03-20T21:27:26.931342640Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 20 21:27:26.931649 containerd[1457]: time="2025-03-20T21:27:26.931626240Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 20 21:27:26.931834 containerd[1457]: time="2025-03-20T21:27:26.931814240Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 20 21:27:26.932149 containerd[1457]: time="2025-03-20T21:27:26.932128280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 20 21:27:26.932286 containerd[1457]: time="2025-03-20T21:27:26.932268640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 20 21:27:26.932345 containerd[1457]: time="2025-03-20T21:27:26.932332000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 20 21:27:26.932468 containerd[1457]: time="2025-03-20T21:27:26.932449240Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 20 21:27:26.932570 containerd[1457]: time="2025-03-20T21:27:26.932554520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 20 21:27:26.932636 containerd[1457]: time="2025-03-20T21:27:26.932622440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 20 21:27:26.932745 containerd[1457]: time="2025-03-20T21:27:26.932728760Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 20 21:27:26.932871 containerd[1457]: time="2025-03-20T21:27:26.932853520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 20 21:27:26.933131 containerd[1457]: time="2025-03-20T21:27:26.932937440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 20 21:27:26.933131 containerd[1457]: time="2025-03-20T21:27:26.933003520Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 20 21:27:26.933928 containerd[1457]: time="2025-03-20T21:27:26.933832640Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:27:26.933928 containerd[1457]: time="2025-03-20T21:27:26.933864760Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:27:26.933928 containerd[1457]: time="2025-03-20T21:27:26.933874600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:27:26.933928 containerd[1457]: time="2025-03-20T21:27:26.933884400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:27:26.934400 containerd[1457]: time="2025-03-20T21:27:26.934368680Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 20 21:27:26.935092 containerd[1457]: time="2025-03-20T21:27:26.934467920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 20 21:27:26.935092 containerd[1457]: time="2025-03-20T21:27:26.934518440Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 20 21:27:26.935092 containerd[1457]: time="2025-03-20T21:27:26.934613000Z" level=info msg="runtime interface created" Mar 20 21:27:26.935092 containerd[1457]: time="2025-03-20T21:27:26.934620760Z" level=info msg="created NRI interface" Mar 20 21:27:26.935092 containerd[1457]: time="2025-03-20T21:27:26.934634600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 20 21:27:26.935092 containerd[1457]: time="2025-03-20T21:27:26.934648200Z" level=info msg="Connect containerd service" Mar 20 21:27:26.935092 containerd[1457]: time="2025-03-20T21:27:26.934693920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 20 21:27:26.936125 
containerd[1457]: time="2025-03-20T21:27:26.936093880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 21:27:27.046223 containerd[1457]: time="2025-03-20T21:27:27.046173346Z" level=info msg="Start subscribing containerd event" Mar 20 21:27:27.046394 containerd[1457]: time="2025-03-20T21:27:27.046378879Z" level=info msg="Start recovering state" Mar 20 21:27:27.046554 containerd[1457]: time="2025-03-20T21:27:27.046539421Z" level=info msg="Start event monitor" Mar 20 21:27:27.046623 containerd[1457]: time="2025-03-20T21:27:27.046612124Z" level=info msg="Start cni network conf syncer for default" Mar 20 21:27:27.046696 containerd[1457]: time="2025-03-20T21:27:27.046685859Z" level=info msg="Start streaming server" Mar 20 21:27:27.046759 containerd[1457]: time="2025-03-20T21:27:27.046731078Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 20 21:27:27.046802 containerd[1457]: time="2025-03-20T21:27:27.046791741Z" level=info msg="runtime interface starting up..." Mar 20 21:27:27.046856 containerd[1457]: time="2025-03-20T21:27:27.046845102Z" level=info msg="starting plugins..." Mar 20 21:27:27.046919 containerd[1457]: time="2025-03-20T21:27:27.046907331Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 20 21:27:27.046991 containerd[1457]: time="2025-03-20T21:27:27.046734289Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 20 21:27:27.047041 containerd[1457]: time="2025-03-20T21:27:27.047025980Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 20 21:27:27.047186 containerd[1457]: time="2025-03-20T21:27:27.047170201Z" level=info msg="containerd successfully booted in 0.133752s" Mar 20 21:27:27.047264 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 20 21:27:27.099017 tar[1453]: linux-arm64/LICENSE Mar 20 21:27:27.099137 tar[1453]: linux-arm64/README.md Mar 20 21:27:27.115827 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 20 21:27:27.632836 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 20 21:27:27.650205 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 20 21:27:27.652959 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 20 21:27:27.671917 systemd[1]: issuegen.service: Deactivated successfully. Mar 20 21:27:27.672125 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 20 21:27:27.674289 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 20 21:27:27.690499 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 20 21:27:27.692770 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 20 21:27:27.694494 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 20 21:27:27.695507 systemd[1]: Reached target getty.target - Login Prompts. Mar 20 21:27:28.063034 systemd-networkd[1401]: eth0: Gained IPv6LL Mar 20 21:27:28.065151 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 20 21:27:28.066466 systemd[1]: Reached target network-online.target - Network is Online. Mar 20 21:27:28.068481 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 20 21:27:28.070332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:27:28.072036 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 20 21:27:28.092871 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 20 21:27:28.093075 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 20 21:27:28.094236 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 20 21:27:28.096752 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 20 21:27:28.547173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:27:28.548571 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 20 21:27:28.550155 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:27:28.554102 systemd[1]: Startup finished in 523ms (kernel) + 5.260s (initrd) + 3.634s (userspace) = 9.418s. Mar 20 21:27:28.947120 kubelet[1557]: E0320 21:27:28.947015 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:27:28.949663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:27:28.949816 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:27:28.950132 systemd[1]: kubelet.service: Consumed 764ms CPU time, 234.8M memory peak. Mar 20 21:27:32.354192 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 20 21:27:32.355224 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:39106.service - OpenSSH per-connection server daemon (10.0.0.1:39106). Mar 20 21:27:32.470317 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 39106 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:27:32.471989 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:32.482918 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 20 21:27:32.483883 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 20 21:27:32.488704 systemd-logind[1440]: New session 1 of user core. Mar 20 21:27:32.507200 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 20 21:27:32.511377 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 20 21:27:32.516857 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 20 21:27:32.518909 systemd-logind[1440]: New session c1 of user core. Mar 20 21:27:32.624495 systemd[1574]: Queued start job for default target default.target. Mar 20 21:27:32.632670 systemd[1574]: Created slice app.slice - User Application Slice. Mar 20 21:27:32.632695 systemd[1574]: Reached target paths.target - Paths. Mar 20 21:27:32.632732 systemd[1574]: Reached target timers.target - Timers. Mar 20 21:27:32.633818 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 20 21:27:32.641392 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 20 21:27:32.641447 systemd[1574]: Reached target sockets.target - Sockets. Mar 20 21:27:32.641480 systemd[1574]: Reached target basic.target - Basic System. Mar 20 21:27:32.641506 systemd[1574]: Reached target default.target - Main User Target. Mar 20 21:27:32.641529 systemd[1574]: Startup finished in 118ms. Mar 20 21:27:32.641636 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 20 21:27:32.642866 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 20 21:27:32.699833 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:52644.service - OpenSSH per-connection server daemon (10.0.0.1:52644). Mar 20 21:27:32.755289 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 52644 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:27:32.756342 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:32.760415 systemd-logind[1440]: New session 2 of user core. 
Mar 20 21:27:32.776022 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 20 21:27:32.828094 sshd[1587]: Connection closed by 10.0.0.1 port 52644
Mar 20 21:27:32.828465 sshd-session[1585]: pam_unix(sshd:session): session closed for user core
Mar 20 21:27:32.841750 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:52644.service: Deactivated successfully.
Mar 20 21:27:32.843005 systemd[1]: session-2.scope: Deactivated successfully.
Mar 20 21:27:32.845086 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit.
Mar 20 21:27:32.846084 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:52652.service - OpenSSH per-connection server daemon (10.0.0.1:52652).
Mar 20 21:27:32.846807 systemd-logind[1440]: Removed session 2.
Mar 20 21:27:32.893530 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 52652 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y
Mar 20 21:27:32.894434 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:27:32.898054 systemd-logind[1440]: New session 3 of user core.
Mar 20 21:27:32.910066 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 20 21:27:32.955446 sshd[1595]: Connection closed by 10.0.0.1 port 52652
Mar 20 21:27:32.955682 sshd-session[1592]: pam_unix(sshd:session): session closed for user core
Mar 20 21:27:32.966813 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:52652.service: Deactivated successfully.
Mar 20 21:27:32.968185 systemd[1]: session-3.scope: Deactivated successfully.
Mar 20 21:27:32.971050 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit.
Mar 20 21:27:32.972033 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:52658.service - OpenSSH per-connection server daemon (10.0.0.1:52658).
Mar 20 21:27:32.972672 systemd-logind[1440]: Removed session 3.
Mar 20 21:27:33.012390 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 52658 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y
Mar 20 21:27:33.013365 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:27:33.017216 systemd-logind[1440]: New session 4 of user core.
Mar 20 21:27:33.029002 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 20 21:27:33.077445 sshd[1603]: Connection closed by 10.0.0.1 port 52658
Mar 20 21:27:33.077817 sshd-session[1600]: pam_unix(sshd:session): session closed for user core
Mar 20 21:27:33.088793 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:52658.service: Deactivated successfully.
Mar 20 21:27:33.090172 systemd[1]: session-4.scope: Deactivated successfully.
Mar 20 21:27:33.091305 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit.
Mar 20 21:27:33.092301 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:52674.service - OpenSSH per-connection server daemon (10.0.0.1:52674).
Mar 20 21:27:33.093029 systemd-logind[1440]: Removed session 4.
Mar 20 21:27:33.140306 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 52674 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y
Mar 20 21:27:33.141267 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:27:33.145053 systemd-logind[1440]: New session 5 of user core.
Mar 20 21:27:33.152017 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 20 21:27:33.215673 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 20 21:27:33.215963 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 20 21:27:33.228552 sudo[1612]: pam_unix(sudo:session): session closed for user root
Mar 20 21:27:33.229773 sshd[1611]: Connection closed by 10.0.0.1 port 52674
Mar 20 21:27:33.230219 sshd-session[1608]: pam_unix(sshd:session): session closed for user core
Mar 20 21:27:33.252806 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:52674.service: Deactivated successfully.
Mar 20 21:27:33.254170 systemd[1]: session-5.scope: Deactivated successfully.
Mar 20 21:27:33.254823 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit.
Mar 20 21:27:33.256487 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:52680.service - OpenSSH per-connection server daemon (10.0.0.1:52680).
Mar 20 21:27:33.257278 systemd-logind[1440]: Removed session 5.
Mar 20 21:27:33.304848 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 52680 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y
Mar 20 21:27:33.305904 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:27:33.309767 systemd-logind[1440]: New session 6 of user core.
Mar 20 21:27:33.317027 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 20 21:27:33.366151 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 20 21:27:33.366410 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 20 21:27:33.369268 sudo[1622]: pam_unix(sudo:session): session closed for user root
Mar 20 21:27:33.373781 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 20 21:27:33.374057 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 20 21:27:33.381476 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 20 21:27:33.416958 augenrules[1644]: No rules
Mar 20 21:27:33.417540 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 20 21:27:33.418950 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 20 21:27:33.420020 sudo[1621]: pam_unix(sudo:session): session closed for user root
Mar 20 21:27:33.420938 sshd[1620]: Connection closed by 10.0.0.1 port 52680
Mar 20 21:27:33.421288 sshd-session[1617]: pam_unix(sshd:session): session closed for user core
Mar 20 21:27:33.430721 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:52680.service: Deactivated successfully.
Mar 20 21:27:33.432277 systemd[1]: session-6.scope: Deactivated successfully.
Mar 20 21:27:33.433616 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit.
Mar 20 21:27:33.434764 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:52688.service - OpenSSH per-connection server daemon (10.0.0.1:52688).
Mar 20 21:27:33.436657 systemd-logind[1440]: Removed session 6.
Mar 20 21:27:33.486947 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 52688 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y
Mar 20 21:27:33.488050 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:27:33.492091 systemd-logind[1440]: New session 7 of user core.
Mar 20 21:27:33.504015 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 20 21:27:33.552036 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 20 21:27:33.552293 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 20 21:27:33.878103 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 20 21:27:33.890215 (dockerd)[1676]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 20 21:27:34.130743 dockerd[1676]: time="2025-03-20T21:27:34.130626116Z" level=info msg="Starting up"
Mar 20 21:27:34.132578 dockerd[1676]: time="2025-03-20T21:27:34.132549434Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 20 21:27:34.212293 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4230760296-merged.mount: Deactivated successfully.
Mar 20 21:27:34.227842 dockerd[1676]: time="2025-03-20T21:27:34.227806181Z" level=info msg="Loading containers: start."
Mar 20 21:27:34.358907 kernel: Initializing XFRM netlink socket
Mar 20 21:27:34.415660 systemd-networkd[1401]: docker0: Link UP
Mar 20 21:27:34.473061 dockerd[1676]: time="2025-03-20T21:27:34.473008963Z" level=info msg="Loading containers: done."
Mar 20 21:27:34.485369 dockerd[1676]: time="2025-03-20T21:27:34.484973424Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 20 21:27:34.485369 dockerd[1676]: time="2025-03-20T21:27:34.485054898Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
Mar 20 21:27:34.485369 dockerd[1676]: time="2025-03-20T21:27:34.485213760Z" level=info msg="Daemon has completed initialization"
Mar 20 21:27:34.512798 dockerd[1676]: time="2025-03-20T21:27:34.512745760Z" level=info msg="API listen on /run/docker.sock"
Mar 20 21:27:34.512921 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 20 21:27:35.295974 containerd[1457]: time="2025-03-20T21:27:35.295932916Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\""
Mar 20 21:27:35.945803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068931571.mount: Deactivated successfully.
Mar 20 21:27:37.463278 containerd[1457]: time="2025-03-20T21:27:37.463213975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:37.464218 containerd[1457]: time="2025-03-20T21:27:37.463763521Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552768"
Mar 20 21:27:37.464995 containerd[1457]: time="2025-03-20T21:27:37.464959826Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:37.467658 containerd[1457]: time="2025-03-20T21:27:37.467628731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:37.469273 containerd[1457]: time="2025-03-20T21:27:37.469200756Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 2.173228854s"
Mar 20 21:27:37.469273 containerd[1457]: time="2025-03-20T21:27:37.469235624Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\""
Mar 20 21:27:37.469933 containerd[1457]: time="2025-03-20T21:27:37.469889261Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\""
Mar 20 21:27:38.955500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 20 21:27:38.957554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 21:27:39.078122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 21:27:39.081635 (kubelet)[1950]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 20 21:27:39.118865 kubelet[1950]: E0320 21:27:39.118809 1950 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 20 21:27:39.122938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 20 21:27:39.123212 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 20 21:27:39.125064 systemd[1]: kubelet.service: Consumed 134ms CPU time, 97.2M memory peak.
Mar 20 21:27:39.241095 containerd[1457]: time="2025-03-20T21:27:39.240969243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:39.241928 containerd[1457]: time="2025-03-20T21:27:39.241765460Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458980"
Mar 20 21:27:39.242421 containerd[1457]: time="2025-03-20T21:27:39.242394782Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:39.246055 containerd[1457]: time="2025-03-20T21:27:39.246025944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:39.246669 containerd[1457]: time="2025-03-20T21:27:39.246577330Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 1.776639013s"
Mar 20 21:27:39.246669 containerd[1457]: time="2025-03-20T21:27:39.246615069Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\""
Mar 20 21:27:39.247210 containerd[1457]: time="2025-03-20T21:27:39.247156029Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 20 21:27:40.606096 containerd[1457]: time="2025-03-20T21:27:40.606043705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:40.606724 containerd[1457]: time="2025-03-20T21:27:40.606674015Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125831"
Mar 20 21:27:40.608029 containerd[1457]: time="2025-03-20T21:27:40.608001984Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:40.610387 containerd[1457]: time="2025-03-20T21:27:40.610323479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:40.611458 containerd[1457]: time="2025-03-20T21:27:40.611414953Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.364225178s"
Mar 20 21:27:40.611458 containerd[1457]: time="2025-03-20T21:27:40.611446544Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\""
Mar 20 21:27:40.611925 containerd[1457]: time="2025-03-20T21:27:40.611867148Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 20 21:27:41.770851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount844821522.mount: Deactivated successfully.
Mar 20 21:27:41.977181 containerd[1457]: time="2025-03-20T21:27:41.977133402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:41.977767 containerd[1457]: time="2025-03-20T21:27:41.977713881Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871917"
Mar 20 21:27:41.978323 containerd[1457]: time="2025-03-20T21:27:41.978287289Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:41.980654 containerd[1457]: time="2025-03-20T21:27:41.980614767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:41.981211 containerd[1457]: time="2025-03-20T21:27:41.981002693Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.369095096s"
Mar 20 21:27:41.981211 containerd[1457]: time="2025-03-20T21:27:41.981030541Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\""
Mar 20 21:27:41.981575 containerd[1457]: time="2025-03-20T21:27:41.981537249Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 20 21:27:42.562655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2089919962.mount: Deactivated successfully.
Mar 20 21:27:43.387290 containerd[1457]: time="2025-03-20T21:27:43.387234117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:43.388156 containerd[1457]: time="2025-03-20T21:27:43.388043453Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Mar 20 21:27:43.388779 containerd[1457]: time="2025-03-20T21:27:43.388746389Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:43.391730 containerd[1457]: time="2025-03-20T21:27:43.391702256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:43.392781 containerd[1457]: time="2025-03-20T21:27:43.392730797Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.411162232s"
Mar 20 21:27:43.392781 containerd[1457]: time="2025-03-20T21:27:43.392759486Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Mar 20 21:27:43.393267 containerd[1457]: time="2025-03-20T21:27:43.393218468Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 20 21:27:43.925475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635924201.mount: Deactivated successfully.
Mar 20 21:27:43.929329 containerd[1457]: time="2025-03-20T21:27:43.929290267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 20 21:27:43.929940 containerd[1457]: time="2025-03-20T21:27:43.929734686Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Mar 20 21:27:43.930599 containerd[1457]: time="2025-03-20T21:27:43.930567657Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 20 21:27:43.932540 containerd[1457]: time="2025-03-20T21:27:43.932502742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 20 21:27:43.933293 containerd[1457]: time="2025-03-20T21:27:43.933216382Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 539.963297ms"
Mar 20 21:27:43.933293 containerd[1457]: time="2025-03-20T21:27:43.933248772Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 20 21:27:43.933657 containerd[1457]: time="2025-03-20T21:27:43.933637484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Mar 20 21:27:44.402568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3825774720.mount: Deactivated successfully.
Mar 20 21:27:47.519143 containerd[1457]: time="2025-03-20T21:27:47.519089154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:47.519617 containerd[1457]: time="2025-03-20T21:27:47.519564330Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427"
Mar 20 21:27:47.520609 containerd[1457]: time="2025-03-20T21:27:47.520558108Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:47.523171 containerd[1457]: time="2025-03-20T21:27:47.523141905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:27:47.524393 containerd[1457]: time="2025-03-20T21:27:47.524349624Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.59060496s"
Mar 20 21:27:47.524393 containerd[1457]: time="2025-03-20T21:27:47.524385712Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Mar 20 21:27:49.205354 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 20 21:27:49.207395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 21:27:49.325238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 21:27:49.339168 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 20 21:27:49.372231 kubelet[2104]: E0320 21:27:49.372180 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 20 21:27:49.374877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 20 21:27:49.375064 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 20 21:27:49.375333 systemd[1]: kubelet.service: Consumed 126ms CPU time, 94.9M memory peak.
Mar 20 21:27:53.131256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 21:27:53.131397 systemd[1]: kubelet.service: Consumed 126ms CPU time, 94.9M memory peak.
Mar 20 21:27:53.133366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 21:27:53.159322 systemd[1]: Reload requested from client PID 2119 ('systemctl') (unit session-7.scope)...
Mar 20 21:27:53.159337 systemd[1]: Reloading...
Mar 20 21:27:53.235931 zram_generator::config[2164]: No configuration found.
Mar 20 21:27:53.444673 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 21:27:53.515320 systemd[1]: Reloading finished in 355 ms.
Mar 20 21:27:53.556789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 21:27:53.559497 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 21:27:53.560059 systemd[1]: kubelet.service: Deactivated successfully.
Mar 20 21:27:53.560250 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 21:27:53.560285 systemd[1]: kubelet.service: Consumed 82ms CPU time, 82.5M memory peak.
Mar 20 21:27:53.561584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 21:27:53.665624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 21:27:53.669650 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 20 21:27:53.707502 kubelet[2210]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 21:27:53.707793 kubelet[2210]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 20 21:27:53.707832 kubelet[2210]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 21:27:53.708140 kubelet[2210]: I0320 21:27:53.708102 2210 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 20 21:27:54.283498 kubelet[2210]: I0320 21:27:54.283454 2210 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 20 21:27:54.283498 kubelet[2210]: I0320 21:27:54.283486 2210 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 20 21:27:54.283757 kubelet[2210]: I0320 21:27:54.283726 2210 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 20 21:27:54.317555 kubelet[2210]: E0320 21:27:54.316814 2210 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Mar 20 21:27:54.318935 kubelet[2210]: I0320 21:27:54.318911 2210 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 20 21:27:54.327917 kubelet[2210]: I0320 21:27:54.327900 2210 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 20 21:27:54.331495 kubelet[2210]: I0320 21:27:54.331471 2210 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 20 21:27:54.332326 kubelet[2210]: I0320 21:27:54.332297 2210 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 20 21:27:54.332470 kubelet[2210]: I0320 21:27:54.332445 2210 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 20 21:27:54.332624 kubelet[2210]: I0320 21:27:54.332469 2210 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 20 21:27:54.332765 kubelet[2210]: I0320 21:27:54.332755 2210 topology_manager.go:138] "Creating topology manager with none policy"
Mar 20 21:27:54.332787 kubelet[2210]: I0320 21:27:54.332767 2210 container_manager_linux.go:300] "Creating device plugin manager"
Mar 20 21:27:54.332962 kubelet[2210]: I0320 21:27:54.332952 2210 state_mem.go:36] "Initialized new in-memory state store"
Mar 20 21:27:54.334806 kubelet[2210]: I0320 21:27:54.334550 2210 kubelet.go:408] "Attempting to sync node with API server"
Mar 20 21:27:54.334806 kubelet[2210]: I0320 21:27:54.334594 2210 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 20 21:27:54.334806 kubelet[2210]: I0320 21:27:54.334685 2210 kubelet.go:314] "Adding apiserver pod source"
Mar 20 21:27:54.334806 kubelet[2210]: I0320 21:27:54.334694 2210 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 20 21:27:54.336868 kubelet[2210]: W0320 21:27:54.336824 2210 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 20 21:27:54.336963 kubelet[2210]: E0320 21:27:54.336873 2210 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Mar 20 21:27:54.337037 kubelet[2210]: I0320 21:27:54.337014 2210 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 20 21:27:54.337149 kubelet[2210]: W0320 21:27:54.337114 2210 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 20 21:27:54.337285 kubelet[2210]: E0320 21:27:54.337226 2210 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Mar 20 21:27:54.338778 kubelet[2210]: I0320 21:27:54.338761 2210 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 20 21:27:54.339373 kubelet[2210]: W0320 21:27:54.339342 2210 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 20 21:27:54.340031 kubelet[2210]: I0320 21:27:54.340012 2210 server.go:1269] "Started kubelet"
Mar 20 21:27:54.343402 kubelet[2210]: I0320 21:27:54.343263 2210 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 20 21:27:54.346079 kubelet[2210]: I0320 21:27:54.346044 2210 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 20 21:27:54.346471 kubelet[2210]: I0320 21:27:54.346447 2210 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 20 21:27:54.346843 kubelet[2210]: E0320 21:27:54.346806 2210 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:27:54.347199 kubelet[2210]: I0320 21:27:54.347169 2210 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 20 21:27:54.347925 kubelet[2210]: I0320 21:27:54.347282 2210 server.go:460] "Adding debug handlers to kubelet server"
Mar 20 21:27:54.347925 kubelet[2210]: I0320 21:27:54.347367 2210 reconciler.go:26] "Reconciler: start to sync state"
Mar 20 21:27:54.347925 kubelet[2210]: E0320 21:27:54.347792 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms"
Mar 20 21:27:54.348339 kubelet[2210]: W0320 21:27:54.348284 2210 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 20 21:27:54.348339 kubelet[2210]: E0320 21:27:54.348336 2210 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Mar 20 21:27:54.348424 kubelet[2210]: I0320 21:27:54.347285 2210 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 20 21:27:54.348445 kubelet[2210]: I0320 21:27:54.348401 2210 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 20 21:27:54.348612 kubelet[2210]: I0320 21:27:54.348586 2210 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 20 21:27:54.348649 kubelet[2210]: I0320 21:27:54.348596 2210 factory.go:221] Registration of the systemd container factory successfully
Mar 20 21:27:54.348963 kubelet[2210]: I0320 21:27:54.348701 2210 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 20 21:27:54.349992 kubelet[2210]: I0320 21:27:54.349968 2210
factory.go:221] Registration of the containerd container factory successfully Mar 20 21:27:54.353157 kubelet[2210]: E0320 21:27:54.352228 2210 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182ea00adbfec95e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 21:27:54.339985758 +0000 UTC m=+0.667126195,LastTimestamp:2025-03-20 21:27:54.339985758 +0000 UTC m=+0.667126195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 21:27:54.359675 kubelet[2210]: I0320 21:27:54.359545 2210 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 21:27:54.360511 kubelet[2210]: I0320 21:27:54.360484 2210 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 20 21:27:54.360511 kubelet[2210]: I0320 21:27:54.360512 2210 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 21:27:54.360591 kubelet[2210]: I0320 21:27:54.360528 2210 kubelet.go:2321] "Starting kubelet main sync loop" Mar 20 21:27:54.360591 kubelet[2210]: E0320 21:27:54.360569 2210 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 21:27:54.361847 kubelet[2210]: W0320 21:27:54.361701 2210 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Mar 20 21:27:54.361847 kubelet[2210]: E0320 21:27:54.361738 2210 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:27:54.363876 kubelet[2210]: I0320 21:27:54.363854 2210 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 21:27:54.363876 kubelet[2210]: I0320 21:27:54.363870 2210 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 21:27:54.364006 kubelet[2210]: I0320 21:27:54.363884 2210 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:27:54.427214 kubelet[2210]: I0320 21:27:54.427174 2210 policy_none.go:49] "None policy: Start" Mar 20 21:27:54.428098 kubelet[2210]: I0320 21:27:54.428069 2210 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 21:27:54.428098 kubelet[2210]: I0320 21:27:54.428107 2210 state_mem.go:35] "Initializing new in-memory state store" Mar 20 21:27:54.433922 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Mar 20 21:27:54.447275 kubelet[2210]: E0320 21:27:54.447249 2210 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:27:54.450321 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 20 21:27:54.460640 kubelet[2210]: E0320 21:27:54.460610 2210 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 21:27:54.461982 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 20 21:27:54.462927 kubelet[2210]: I0320 21:27:54.462879 2210 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 21:27:54.463092 kubelet[2210]: I0320 21:27:54.463065 2210 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 21:27:54.463124 kubelet[2210]: I0320 21:27:54.463083 2210 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 21:27:54.463318 kubelet[2210]: I0320 21:27:54.463291 2210 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 21:27:54.464593 kubelet[2210]: E0320 21:27:54.464562 2210 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 20 21:27:54.548475 kubelet[2210]: E0320 21:27:54.548384 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms" Mar 20 21:27:54.564338 kubelet[2210]: I0320 21:27:54.564303 2210 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 21:27:54.564643 kubelet[2210]: E0320 21:27:54.564610 2210 kubelet_node_status.go:95] "Unable 
to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Mar 20 21:27:54.667568 systemd[1]: Created slice kubepods-burstable-pod8dab2aa27438709ab5388889a21971b0.slice - libcontainer container kubepods-burstable-pod8dab2aa27438709ab5388889a21971b0.slice. Mar 20 21:27:54.699840 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. Mar 20 21:27:54.714271 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. Mar 20 21:27:54.765912 kubelet[2210]: I0320 21:27:54.765843 2210 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 21:27:54.766225 kubelet[2210]: E0320 21:27:54.766109 2210 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Mar 20 21:27:54.849999 kubelet[2210]: I0320 21:27:54.849907 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dab2aa27438709ab5388889a21971b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8dab2aa27438709ab5388889a21971b0\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:27:54.849999 kubelet[2210]: I0320 21:27:54.849943 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:27:54.849999 kubelet[2210]: I0320 21:27:54.849960 2210 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 20 21:27:54.849999 kubelet[2210]: I0320 21:27:54.849974 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dab2aa27438709ab5388889a21971b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8dab2aa27438709ab5388889a21971b0\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:27:54.849999 kubelet[2210]: I0320 21:27:54.849989 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dab2aa27438709ab5388889a21971b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8dab2aa27438709ab5388889a21971b0\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:27:54.850357 kubelet[2210]: I0320 21:27:54.850313 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:27:54.850388 kubelet[2210]: I0320 21:27:54.850359 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:27:54.850388 kubelet[2210]: I0320 21:27:54.850377 2210 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:27:54.850433 kubelet[2210]: I0320 21:27:54.850393 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:27:54.949560 kubelet[2210]: E0320 21:27:54.949511 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms" Mar 20 21:27:54.997794 kubelet[2210]: E0320 21:27:54.997746 2210 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:27:54.998268 containerd[1457]: time="2025-03-20T21:27:54.998225807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8dab2aa27438709ab5388889a21971b0,Namespace:kube-system,Attempt:0,}" Mar 20 21:27:55.012584 kubelet[2210]: E0320 21:27:55.012564 2210 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:27:55.013118 containerd[1457]: time="2025-03-20T21:27:55.013069438Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 20 21:27:55.017003 containerd[1457]: time="2025-03-20T21:27:55.016968518Z" level=info msg="connecting to shim bdf11306040c9a54796941afb789ab9cee0a53da401d4936af5c0b56f2709a82" address="unix:///run/containerd/s/411dda706d274dd4f5f6a94291c1b4261e8e90f754a95f05fbb5fdf082ee088d" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:27:55.017091 kubelet[2210]: E0320 21:27:55.016994 2210 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:27:55.017778 containerd[1457]: time="2025-03-20T21:27:55.017562131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 20 21:27:55.037219 systemd[1]: Started cri-containerd-bdf11306040c9a54796941afb789ab9cee0a53da401d4936af5c0b56f2709a82.scope - libcontainer container bdf11306040c9a54796941afb789ab9cee0a53da401d4936af5c0b56f2709a82. 
Mar 20 21:27:55.040515 containerd[1457]: time="2025-03-20T21:27:55.040464689Z" level=info msg="connecting to shim a7b632634e1b5d1412ce364a4961d21efcd00ba34598ab93f7748c774ae6290f" address="unix:///run/containerd/s/fa7b0c35aec7da12999dd372f934efe7c5d612896e3e6c08b2941a6f7d54f3c5" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:27:55.053475 containerd[1457]: time="2025-03-20T21:27:55.053437300Z" level=info msg="connecting to shim 0c93eb4292c01c0a13bf2956b1197984ed631d1067b47f918aa68dc60766a40a" address="unix:///run/containerd/s/fad29a6a382d961be7fd4805df650d55bbbbd9bb919d89ac7ee0699d27eac378" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:27:55.070062 systemd[1]: Started cri-containerd-a7b632634e1b5d1412ce364a4961d21efcd00ba34598ab93f7748c774ae6290f.scope - libcontainer container a7b632634e1b5d1412ce364a4961d21efcd00ba34598ab93f7748c774ae6290f. Mar 20 21:27:55.074537 systemd[1]: Started cri-containerd-0c93eb4292c01c0a13bf2956b1197984ed631d1067b47f918aa68dc60766a40a.scope - libcontainer container 0c93eb4292c01c0a13bf2956b1197984ed631d1067b47f918aa68dc60766a40a. 
Mar 20 21:27:55.085859 containerd[1457]: time="2025-03-20T21:27:55.085736529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8dab2aa27438709ab5388889a21971b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdf11306040c9a54796941afb789ab9cee0a53da401d4936af5c0b56f2709a82\"" Mar 20 21:27:55.087237 kubelet[2210]: E0320 21:27:55.087215 2210 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:27:55.090788 containerd[1457]: time="2025-03-20T21:27:55.090748153Z" level=info msg="CreateContainer within sandbox \"bdf11306040c9a54796941afb789ab9cee0a53da401d4936af5c0b56f2709a82\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 20 21:27:55.104973 containerd[1457]: time="2025-03-20T21:27:55.104473449Z" level=info msg="Container b09b50322b7d8fb17ba88babf73ee26aff0299f1db71f10d0865769fb0176b06: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:27:55.110416 containerd[1457]: time="2025-03-20T21:27:55.110383324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7b632634e1b5d1412ce364a4961d21efcd00ba34598ab93f7748c774ae6290f\"" Mar 20 21:27:55.111754 kubelet[2210]: E0320 21:27:55.111732 2210 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:27:55.113438 containerd[1457]: time="2025-03-20T21:27:55.113409566Z" level=info msg="CreateContainer within sandbox \"a7b632634e1b5d1412ce364a4961d21efcd00ba34598ab93f7748c774ae6290f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 20 21:27:55.114747 containerd[1457]: time="2025-03-20T21:27:55.114652772Z" level=info msg="CreateContainer within 
sandbox \"bdf11306040c9a54796941afb789ab9cee0a53da401d4936af5c0b56f2709a82\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b09b50322b7d8fb17ba88babf73ee26aff0299f1db71f10d0865769fb0176b06\"" Mar 20 21:27:55.115965 containerd[1457]: time="2025-03-20T21:27:55.115884391Z" level=info msg="StartContainer for \"b09b50322b7d8fb17ba88babf73ee26aff0299f1db71f10d0865769fb0176b06\"" Mar 20 21:27:55.117447 containerd[1457]: time="2025-03-20T21:27:55.117405703Z" level=info msg="connecting to shim b09b50322b7d8fb17ba88babf73ee26aff0299f1db71f10d0865769fb0176b06" address="unix:///run/containerd/s/411dda706d274dd4f5f6a94291c1b4261e8e90f754a95f05fbb5fdf082ee088d" protocol=ttrpc version=3 Mar 20 21:27:55.118622 containerd[1457]: time="2025-03-20T21:27:55.118591250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c93eb4292c01c0a13bf2956b1197984ed631d1067b47f918aa68dc60766a40a\"" Mar 20 21:27:55.119676 kubelet[2210]: E0320 21:27:55.119615 2210 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:27:55.121098 containerd[1457]: time="2025-03-20T21:27:55.121035907Z" level=info msg="CreateContainer within sandbox \"0c93eb4292c01c0a13bf2956b1197984ed631d1067b47f918aa68dc60766a40a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 20 21:27:55.121170 containerd[1457]: time="2025-03-20T21:27:55.121053249Z" level=info msg="Container cbaab171b89e65598bdd722913cff1c0c064c6acd26224f2532362c317706c0c: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:27:55.127693 containerd[1457]: time="2025-03-20T21:27:55.127605205Z" level=info msg="CreateContainer within sandbox \"a7b632634e1b5d1412ce364a4961d21efcd00ba34598ab93f7748c774ae6290f\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cbaab171b89e65598bdd722913cff1c0c064c6acd26224f2532362c317706c0c\"" Mar 20 21:27:55.128313 containerd[1457]: time="2025-03-20T21:27:55.128002106Z" level=info msg="StartContainer for \"cbaab171b89e65598bdd722913cff1c0c064c6acd26224f2532362c317706c0c\"" Mar 20 21:27:55.129450 containerd[1457]: time="2025-03-20T21:27:55.129420846Z" level=info msg="connecting to shim cbaab171b89e65598bdd722913cff1c0c064c6acd26224f2532362c317706c0c" address="unix:///run/containerd/s/fa7b0c35aec7da12999dd372f934efe7c5d612896e3e6c08b2941a6f7d54f3c5" protocol=ttrpc version=3 Mar 20 21:27:55.129868 containerd[1457]: time="2025-03-20T21:27:55.129830054Z" level=info msg="Container 0011ecbeed9cb2cd33abe957e1655f561829e4b120283714792ad03275371736: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:27:55.136040 systemd[1]: Started cri-containerd-b09b50322b7d8fb17ba88babf73ee26aff0299f1db71f10d0865769fb0176b06.scope - libcontainer container b09b50322b7d8fb17ba88babf73ee26aff0299f1db71f10d0865769fb0176b06. 
Mar 20 21:27:55.137237 containerd[1457]: time="2025-03-20T21:27:55.137194312Z" level=info msg="CreateContainer within sandbox \"0c93eb4292c01c0a13bf2956b1197984ed631d1067b47f918aa68dc60766a40a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0011ecbeed9cb2cd33abe957e1655f561829e4b120283714792ad03275371736\"" Mar 20 21:27:55.138231 containerd[1457]: time="2025-03-20T21:27:55.138195414Z" level=info msg="StartContainer for \"0011ecbeed9cb2cd33abe957e1655f561829e4b120283714792ad03275371736\"" Mar 20 21:27:55.139315 containerd[1457]: time="2025-03-20T21:27:55.139288339Z" level=info msg="connecting to shim 0011ecbeed9cb2cd33abe957e1655f561829e4b120283714792ad03275371736" address="unix:///run/containerd/s/fad29a6a382d961be7fd4805df650d55bbbbd9bb919d89ac7ee0699d27eac378" protocol=ttrpc version=3 Mar 20 21:27:55.148065 systemd[1]: Started cri-containerd-cbaab171b89e65598bdd722913cff1c0c064c6acd26224f2532362c317706c0c.scope - libcontainer container cbaab171b89e65598bdd722913cff1c0c064c6acd26224f2532362c317706c0c. Mar 20 21:27:55.167162 systemd[1]: Started cri-containerd-0011ecbeed9cb2cd33abe957e1655f561829e4b120283714792ad03275371736.scope - libcontainer container 0011ecbeed9cb2cd33abe957e1655f561829e4b120283714792ad03275371736. 
Mar 20 21:27:55.167852 kubelet[2210]: I0320 21:27:55.167781 2210 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 21:27:55.168359 kubelet[2210]: E0320 21:27:55.168219 2210 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Mar 20 21:27:55.182753 containerd[1457]: time="2025-03-20T21:27:55.182714529Z" level=info msg="StartContainer for \"b09b50322b7d8fb17ba88babf73ee26aff0299f1db71f10d0865769fb0176b06\" returns successfully" Mar 20 21:27:55.211250 containerd[1457]: time="2025-03-20T21:27:55.209380990Z" level=info msg="StartContainer for \"cbaab171b89e65598bdd722913cff1c0c064c6acd26224f2532362c317706c0c\" returns successfully" Mar 20 21:27:55.219821 containerd[1457]: time="2025-03-20T21:27:55.219504812Z" level=info msg="StartContainer for \"0011ecbeed9cb2cd33abe957e1655f561829e4b120283714792ad03275371736\" returns successfully" Mar 20 21:27:55.306363 kubelet[2210]: W0320 21:27:55.306263 2210 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Mar 20 21:27:55.306363 kubelet[2210]: E0320 21:27:55.306336 2210 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:27:55.368845 kubelet[2210]: E0320 21:27:55.368407 2210 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:27:55.370397 kubelet[2210]: E0320 
21:27:55.370325 2210 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:27:55.373504 kubelet[2210]: E0320 21:27:55.373391 2210 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:27:55.969854 kubelet[2210]: I0320 21:27:55.969832 2210 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 21:27:56.376303 kubelet[2210]: E0320 21:27:56.375807 2210 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:27:56.377075 kubelet[2210]: E0320 21:27:56.377008 2210 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:27:57.076232 kubelet[2210]: E0320 21:27:57.076103 2210 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:27:57.734912 kubelet[2210]: E0320 21:27:57.734867 2210 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 20 21:27:57.796687 kubelet[2210]: I0320 21:27:57.796640 2210 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 20 21:27:57.838448 kubelet[2210]: E0320 21:27:57.838336 2210 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.182ea00adbfec95e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 21:27:54.339985758 +0000 UTC m=+0.667126195,LastTimestamp:2025-03-20 21:27:54.339985758 +0000 UTC m=+0.667126195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 21:27:58.338426 kubelet[2210]: I0320 21:27:58.338304 2210 apiserver.go:52] "Watching apiserver" Mar 20 21:27:58.348664 kubelet[2210]: I0320 21:27:58.348620 2210 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 20 21:27:59.648428 systemd[1]: Reload requested from client PID 2483 ('systemctl') (unit session-7.scope)... Mar 20 21:27:59.648444 systemd[1]: Reloading... Mar 20 21:27:59.722953 zram_generator::config[2530]: No configuration found. Mar 20 21:27:59.801472 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:27:59.883754 systemd[1]: Reloading finished in 234 ms. Mar 20 21:27:59.904129 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:27:59.917785 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 21:27:59.919951 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:27:59.920035 systemd[1]: kubelet.service: Consumed 1.058s CPU time, 117.5M memory peak. Mar 20 21:27:59.921915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:28:00.043723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 20 21:28:00.047158 (kubelet)[2569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 21:28:00.084202 kubelet[2569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:28:00.084202 kubelet[2569]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 21:28:00.084202 kubelet[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:28:00.084202 kubelet[2569]: I0320 21:28:00.084121 2569 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 21:28:00.089492 kubelet[2569]: I0320 21:28:00.089461 2569 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 20 21:28:00.089492 kubelet[2569]: I0320 21:28:00.089489 2569 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 21:28:00.089697 kubelet[2569]: I0320 21:28:00.089682 2569 server.go:929] "Client rotation is on, will bootstrap in background" Mar 20 21:28:00.090992 kubelet[2569]: I0320 21:28:00.090968 2569 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 20 21:28:00.092867 kubelet[2569]: I0320 21:28:00.092787 2569 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:28:00.098439 kubelet[2569]: I0320 21:28:00.098415 2569 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 21:28:00.100801 kubelet[2569]: I0320 21:28:00.100784 2569 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 20 21:28:00.100924 kubelet[2569]: I0320 21:28:00.100911 2569 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 20 21:28:00.101031 kubelet[2569]: I0320 21:28:00.101008 2569 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 21:28:00.101187 kubelet[2569]: I0320 21:28:00.101034 2569 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagef
s.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 21:28:00.101260 kubelet[2569]: I0320 21:28:00.101198 2569 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 21:28:00.101260 kubelet[2569]: I0320 21:28:00.101208 2569 container_manager_linux.go:300] "Creating device plugin manager" Mar 20 21:28:00.101260 kubelet[2569]: I0320 21:28:00.101236 2569 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:28:00.101344 kubelet[2569]: I0320 21:28:00.101332 2569 kubelet.go:408] "Attempting to sync node with API server" Mar 20 21:28:00.101369 kubelet[2569]: I0320 21:28:00.101347 2569 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 21:28:00.101369 kubelet[2569]: I0320 21:28:00.101368 2569 kubelet.go:314] "Adding apiserver pod source" Mar 20 21:28:00.101407 kubelet[2569]: I0320 21:28:00.101378 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 21:28:00.102348 kubelet[2569]: I0320 21:28:00.102032 2569 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 21:28:00.102561 kubelet[2569]: I0320 21:28:00.102544 2569 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 21:28:00.103393 kubelet[2569]: I0320 21:28:00.102887 2569 server.go:1269] "Started kubelet" Mar 20 21:28:00.103393 kubelet[2569]: I0320 21:28:00.103202 2569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" 
qps=100 burstTokens=10 Mar 20 21:28:00.103393 kubelet[2569]: I0320 21:28:00.103382 2569 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 21:28:00.104061 kubelet[2569]: I0320 21:28:00.103003 2569 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 21:28:00.104549 kubelet[2569]: I0320 21:28:00.104371 2569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 21:28:00.105405 kubelet[2569]: I0320 21:28:00.104781 2569 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 20 21:28:00.105672 kubelet[2569]: I0320 21:28:00.105645 2569 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 20 21:28:00.105746 kubelet[2569]: I0320 21:28:00.105735 2569 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 20 21:28:00.107284 kubelet[2569]: I0320 21:28:00.105854 2569 reconciler.go:26] "Reconciler: start to sync state" Mar 20 21:28:00.107284 kubelet[2569]: I0320 21:28:00.106264 2569 server.go:460] "Adding debug handlers to kubelet server" Mar 20 21:28:00.107284 kubelet[2569]: E0320 21:28:00.106764 2569 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:28:00.107284 kubelet[2569]: I0320 21:28:00.107172 2569 factory.go:221] Registration of the systemd container factory successfully Mar 20 21:28:00.107284 kubelet[2569]: I0320 21:28:00.107259 2569 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 21:28:00.110841 kubelet[2569]: I0320 21:28:00.110805 2569 factory.go:221] Registration of the containerd container factory successfully Mar 20 21:28:00.129035 kubelet[2569]: I0320 21:28:00.128984 2569 kubelet_network_linux.go:50] 
"Initialized iptables rules." protocol="IPv4" Mar 20 21:28:00.132043 kubelet[2569]: I0320 21:28:00.132004 2569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 20 21:28:00.132043 kubelet[2569]: I0320 21:28:00.132039 2569 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 21:28:00.132220 kubelet[2569]: I0320 21:28:00.132058 2569 kubelet.go:2321] "Starting kubelet main sync loop" Mar 20 21:28:00.132220 kubelet[2569]: E0320 21:28:00.132104 2569 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 21:28:00.159868 kubelet[2569]: I0320 21:28:00.159771 2569 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 21:28:00.159868 kubelet[2569]: I0320 21:28:00.159810 2569 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 21:28:00.159868 kubelet[2569]: I0320 21:28:00.159831 2569 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:28:00.160655 kubelet[2569]: I0320 21:28:00.160610 2569 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 20 21:28:00.160655 kubelet[2569]: I0320 21:28:00.160636 2569 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 20 21:28:00.160655 kubelet[2569]: I0320 21:28:00.160657 2569 policy_none.go:49] "None policy: Start" Mar 20 21:28:00.161408 kubelet[2569]: I0320 21:28:00.161392 2569 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 21:28:00.161457 kubelet[2569]: I0320 21:28:00.161414 2569 state_mem.go:35] "Initializing new in-memory state store" Mar 20 21:28:00.161571 kubelet[2569]: I0320 21:28:00.161552 2569 state_mem.go:75] "Updated machine memory state" Mar 20 21:28:00.165259 kubelet[2569]: I0320 21:28:00.165229 2569 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 21:28:00.165424 kubelet[2569]: I0320 21:28:00.165399 2569 
eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 21:28:00.165470 kubelet[2569]: I0320 21:28:00.165417 2569 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 21:28:00.165691 kubelet[2569]: I0320 21:28:00.165602 2569 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 21:28:00.269079 kubelet[2569]: I0320 21:28:00.269028 2569 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 21:28:00.276524 kubelet[2569]: I0320 21:28:00.276332 2569 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Mar 20 21:28:00.276524 kubelet[2569]: I0320 21:28:00.276469 2569 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 20 21:28:00.307377 kubelet[2569]: I0320 21:28:00.307338 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:28:00.307377 kubelet[2569]: I0320 21:28:00.307377 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:28:00.307536 kubelet[2569]: I0320 21:28:00.307403 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 
21:28:00.307536 kubelet[2569]: I0320 21:28:00.307419 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dab2aa27438709ab5388889a21971b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8dab2aa27438709ab5388889a21971b0\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:28:00.307536 kubelet[2569]: I0320 21:28:00.307434 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dab2aa27438709ab5388889a21971b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8dab2aa27438709ab5388889a21971b0\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:28:00.307536 kubelet[2569]: I0320 21:28:00.307449 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dab2aa27438709ab5388889a21971b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8dab2aa27438709ab5388889a21971b0\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:28:00.307536 kubelet[2569]: I0320 21:28:00.307469 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:28:00.307644 kubelet[2569]: I0320 21:28:00.307483 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 
20 21:28:00.307644 kubelet[2569]: I0320 21:28:00.307500 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 20 21:28:00.544750 kubelet[2569]: E0320 21:28:00.544616 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:00.544750 kubelet[2569]: E0320 21:28:00.544657 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:00.544750 kubelet[2569]: E0320 21:28:00.544616 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:00.649921 sudo[2603]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 20 21:28:00.650196 sudo[2603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 20 21:28:01.071974 sudo[2603]: pam_unix(sudo:session): session closed for user root Mar 20 21:28:01.102543 kubelet[2569]: I0320 21:28:01.102479 2569 apiserver.go:52] "Watching apiserver" Mar 20 21:28:01.106208 kubelet[2569]: I0320 21:28:01.106182 2569 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 20 21:28:01.146301 kubelet[2569]: E0320 21:28:01.144757 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:01.149298 kubelet[2569]: E0320 21:28:01.149238 2569 
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 20 21:28:01.149450 kubelet[2569]: E0320 21:28:01.149417 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:01.150841 kubelet[2569]: E0320 21:28:01.150754 2569 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 20 21:28:01.150918 kubelet[2569]: E0320 21:28:01.150873 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:01.159083 kubelet[2569]: I0320 21:28:01.158971 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.158958011 podStartE2EDuration="1.158958011s" podCreationTimestamp="2025-03-20 21:28:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:28:01.15136653 +0000 UTC m=+1.101255172" watchObservedRunningTime="2025-03-20 21:28:01.158958011 +0000 UTC m=+1.108846653" Mar 20 21:28:01.159708 kubelet[2569]: I0320 21:28:01.159172 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.159165549 podStartE2EDuration="1.159165549s" podCreationTimestamp="2025-03-20 21:28:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:28:01.158826694 +0000 UTC m=+1.108715336" watchObservedRunningTime="2025-03-20 21:28:01.159165549 +0000 UTC m=+1.109054151" Mar 20 
21:28:01.166296 kubelet[2569]: I0320 21:28:01.165794 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.165777676 podStartE2EDuration="1.165777676s" podCreationTimestamp="2025-03-20 21:28:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:28:01.165590823 +0000 UTC m=+1.115479465" watchObservedRunningTime="2025-03-20 21:28:01.165777676 +0000 UTC m=+1.115666318" Mar 20 21:28:02.146403 kubelet[2569]: E0320 21:28:02.146347 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:02.147139 kubelet[2569]: E0320 21:28:02.146863 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:02.147205 kubelet[2569]: E0320 21:28:02.147168 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:02.902313 sudo[1656]: pam_unix(sudo:session): session closed for user root Mar 20 21:28:02.903936 sshd[1655]: Connection closed by 10.0.0.1 port 52688 Mar 20 21:28:02.903800 sshd-session[1652]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:02.906561 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:52688.service: Deactivated successfully. Mar 20 21:28:02.908371 systemd[1]: session-7.scope: Deactivated successfully. Mar 20 21:28:02.909104 systemd[1]: session-7.scope: Consumed 7.976s CPU time, 261.2M memory peak. Mar 20 21:28:02.910740 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Mar 20 21:28:02.911929 systemd-logind[1440]: Removed session 7. 
Mar 20 21:28:03.147983 kubelet[2569]: E0320 21:28:03.147948 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:04.367286 kubelet[2569]: E0320 21:28:04.367254 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:06.519852 kubelet[2569]: I0320 21:28:06.519691 2569 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 20 21:28:06.520315 kubelet[2569]: I0320 21:28:06.520250 2569 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 20 21:28:06.520395 containerd[1457]: time="2025-03-20T21:28:06.520004464Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 20 21:28:06.961652 systemd[1]: Created slice kubepods-besteffort-pod6295e94d_2f4c_4256_9881_d39ba3ae7732.slice - libcontainer container kubepods-besteffort-pod6295e94d_2f4c_4256_9881_d39ba3ae7732.slice. Mar 20 21:28:06.974034 systemd[1]: Created slice kubepods-burstable-podc30e1406_3266_4873_a4b9_0ea9be09a470.slice - libcontainer container kubepods-burstable-podc30e1406_3266_4873_a4b9_0ea9be09a470.slice. 
Mar 20 21:28:07.053980 kubelet[2569]: I0320 21:28:07.053948 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6295e94d-2f4c-4256-9881-d39ba3ae7732-xtables-lock\") pod \"kube-proxy-qdbkt\" (UID: \"6295e94d-2f4c-4256-9881-d39ba3ae7732\") " pod="kube-system/kube-proxy-qdbkt" Mar 20 21:28:07.053980 kubelet[2569]: I0320 21:28:07.053981 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsrv2\" (UniqueName: \"kubernetes.io/projected/6295e94d-2f4c-4256-9881-d39ba3ae7732-kube-api-access-jsrv2\") pod \"kube-proxy-qdbkt\" (UID: \"6295e94d-2f4c-4256-9881-d39ba3ae7732\") " pod="kube-system/kube-proxy-qdbkt" Mar 20 21:28:07.054113 kubelet[2569]: I0320 21:28:07.054014 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c30e1406-3266-4873-a4b9-0ea9be09a470-clustermesh-secrets\") pod \"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.054161 kubelet[2569]: I0320 21:28:07.054036 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6295e94d-2f4c-4256-9881-d39ba3ae7732-kube-proxy\") pod \"kube-proxy-qdbkt\" (UID: \"6295e94d-2f4c-4256-9881-d39ba3ae7732\") " pod="kube-system/kube-proxy-qdbkt" Mar 20 21:28:07.054195 kubelet[2569]: I0320 21:28:07.054173 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c30e1406-3266-4873-a4b9-0ea9be09a470-hubble-tls\") pod \"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.054217 kubelet[2569]: I0320 21:28:07.054194 2569 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6295e94d-2f4c-4256-9881-d39ba3ae7732-lib-modules\") pod \"kube-proxy-qdbkt\" (UID: \"6295e94d-2f4c-4256-9881-d39ba3ae7732\") " pod="kube-system/kube-proxy-qdbkt" Mar 20 21:28:07.054217 kubelet[2569]: I0320 21:28:07.054213 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-bpf-maps\") pod \"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.054261 kubelet[2569]: I0320 21:28:07.054229 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-hostproc\") pod \"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.054261 kubelet[2569]: I0320 21:28:07.054244 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-cilium-cgroup\") pod \"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.054298 kubelet[2569]: I0320 21:28:07.054260 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-xtables-lock\") pod \"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.054298 kubelet[2569]: I0320 21:28:07.054275 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-lib-modules\") pod \"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.054298 kubelet[2569]: I0320 21:28:07.054291 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-host-proc-sys-net\") pod \"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.054409 kubelet[2569]: I0320 21:28:07.054307 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncsck\" (UniqueName: \"kubernetes.io/projected/c30e1406-3266-4873-a4b9-0ea9be09a470-kube-api-access-ncsck\") pod \"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.054409 kubelet[2569]: I0320 21:28:07.054330 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c30e1406-3266-4873-a4b9-0ea9be09a470-cilium-config-path\") pod \"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.054409 kubelet[2569]: I0320 21:28:07.054393 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-host-proc-sys-kernel\") pod \"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.054470 kubelet[2569]: I0320 21:28:07.054451 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-cni-path\") pod 
\"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.054496 kubelet[2569]: I0320 21:28:07.054477 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-etc-cni-netd\") pod \"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.054516 kubelet[2569]: I0320 21:28:07.054503 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-cilium-run\") pod \"cilium-nqt2t\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") " pod="kube-system/cilium-nqt2t" Mar 20 21:28:07.163093 kubelet[2569]: E0320 21:28:07.162967 2569 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 20 21:28:07.163093 kubelet[2569]: E0320 21:28:07.162996 2569 projected.go:194] Error preparing data for projected volume kube-api-access-ncsck for pod kube-system/cilium-nqt2t: configmap "kube-root-ca.crt" not found Mar 20 21:28:07.163093 kubelet[2569]: E0320 21:28:07.163041 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c30e1406-3266-4873-a4b9-0ea9be09a470-kube-api-access-ncsck podName:c30e1406-3266-4873-a4b9-0ea9be09a470 nodeName:}" failed. No retries permitted until 2025-03-20 21:28:07.663024542 +0000 UTC m=+7.612913184 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncsck" (UniqueName: "kubernetes.io/projected/c30e1406-3266-4873-a4b9-0ea9be09a470-kube-api-access-ncsck") pod "cilium-nqt2t" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470") : configmap "kube-root-ca.crt" not found Mar 20 21:28:07.165324 kubelet[2569]: E0320 21:28:07.165296 2569 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 20 21:28:07.165324 kubelet[2569]: E0320 21:28:07.165323 2569 projected.go:194] Error preparing data for projected volume kube-api-access-jsrv2 for pod kube-system/kube-proxy-qdbkt: configmap "kube-root-ca.crt" not found Mar 20 21:28:07.165414 kubelet[2569]: E0320 21:28:07.165360 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6295e94d-2f4c-4256-9881-d39ba3ae7732-kube-api-access-jsrv2 podName:6295e94d-2f4c-4256-9881-d39ba3ae7732 nodeName:}" failed. No retries permitted until 2025-03-20 21:28:07.665347728 +0000 UTC m=+7.615236370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jsrv2" (UniqueName: "kubernetes.io/projected/6295e94d-2f4c-4256-9881-d39ba3ae7732-kube-api-access-jsrv2") pod "kube-proxy-qdbkt" (UID: "6295e94d-2f4c-4256-9881-d39ba3ae7732") : configmap "kube-root-ca.crt" not found Mar 20 21:28:07.655522 systemd[1]: Created slice kubepods-besteffort-podd98adee7_8cf4_44ed_a509_afc1c63cd127.slice - libcontainer container kubepods-besteffort-podd98adee7_8cf4_44ed_a509_afc1c63cd127.slice. 
Mar 20 21:28:07.659977 kubelet[2569]: I0320 21:28:07.659029 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4lch\" (UniqueName: \"kubernetes.io/projected/d98adee7-8cf4-44ed-a509-afc1c63cd127-kube-api-access-r4lch\") pod \"cilium-operator-5d85765b45-r5gs5\" (UID: \"d98adee7-8cf4-44ed-a509-afc1c63cd127\") " pod="kube-system/cilium-operator-5d85765b45-r5gs5" Mar 20 21:28:07.659977 kubelet[2569]: I0320 21:28:07.659075 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d98adee7-8cf4-44ed-a509-afc1c63cd127-cilium-config-path\") pod \"cilium-operator-5d85765b45-r5gs5\" (UID: \"d98adee7-8cf4-44ed-a509-afc1c63cd127\") " pod="kube-system/cilium-operator-5d85765b45-r5gs5" Mar 20 21:28:07.870917 kubelet[2569]: E0320 21:28:07.870858 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:07.871630 containerd[1457]: time="2025-03-20T21:28:07.871296873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qdbkt,Uid:6295e94d-2f4c-4256-9881-d39ba3ae7732,Namespace:kube-system,Attempt:0,}" Mar 20 21:28:07.876401 kubelet[2569]: E0320 21:28:07.876318 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:07.876992 containerd[1457]: time="2025-03-20T21:28:07.876947928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nqt2t,Uid:c30e1406-3266-4873-a4b9-0ea9be09a470,Namespace:kube-system,Attempt:0,}" Mar 20 21:28:07.888258 containerd[1457]: time="2025-03-20T21:28:07.888228273Z" level=info msg="connecting to shim f9e02c6bf3f56a7feb644290687e1d0be5001e6b1b8aef66919791006a4166e9" 
address="unix:///run/containerd/s/a5424d2bc036200b0348521cf47ed9b38fe088c5a20aa83c4a49f9204f24705a" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:28:07.893281 containerd[1457]: time="2025-03-20T21:28:07.893013514Z" level=info msg="connecting to shim fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00" address="unix:///run/containerd/s/090342cc713f6b6f73976c99c521aaca4f6102d13f746b6cba58f5bc9445aa65" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:28:07.914118 systemd[1]: Started cri-containerd-f9e02c6bf3f56a7feb644290687e1d0be5001e6b1b8aef66919791006a4166e9.scope - libcontainer container f9e02c6bf3f56a7feb644290687e1d0be5001e6b1b8aef66919791006a4166e9.
Mar 20 21:28:07.917494 systemd[1]: Started cri-containerd-fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00.scope - libcontainer container fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00.
Mar 20 21:28:07.940704 containerd[1457]: time="2025-03-20T21:28:07.940608033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qdbkt,Uid:6295e94d-2f4c-4256-9881-d39ba3ae7732,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9e02c6bf3f56a7feb644290687e1d0be5001e6b1b8aef66919791006a4166e9\""
Mar 20 21:28:07.941482 kubelet[2569]: E0320 21:28:07.941433 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:07.942264 containerd[1457]: time="2025-03-20T21:28:07.941683689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nqt2t,Uid:c30e1406-3266-4873-a4b9-0ea9be09a470,Namespace:kube-system,Attempt:0,} returns sandbox id \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\""
Mar 20 21:28:07.943659 kubelet[2569]: E0320 21:28:07.943536 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:07.944701 containerd[1457]: time="2025-03-20T21:28:07.944522860Z" level=info msg="CreateContainer within sandbox \"f9e02c6bf3f56a7feb644290687e1d0be5001e6b1b8aef66919791006a4166e9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 20 21:28:07.945461 containerd[1457]: time="2025-03-20T21:28:07.945420760Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 20 21:28:07.954975 containerd[1457]: time="2025-03-20T21:28:07.954911226Z" level=info msg="Container b11166d4b3241dc8080c90b96458520af6a5adb2e4f0f7790ca35db3e4a3cb0b: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:28:07.959503 kubelet[2569]: E0320 21:28:07.959480 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:07.960006 containerd[1457]: time="2025-03-20T21:28:07.959961280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-r5gs5,Uid:d98adee7-8cf4-44ed-a509-afc1c63cd127,Namespace:kube-system,Attempt:0,}"
Mar 20 21:28:07.961827 containerd[1457]: time="2025-03-20T21:28:07.961795209Z" level=info msg="CreateContainer within sandbox \"f9e02c6bf3f56a7feb644290687e1d0be5001e6b1b8aef66919791006a4166e9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b11166d4b3241dc8080c90b96458520af6a5adb2e4f0f7790ca35db3e4a3cb0b\""
Mar 20 21:28:07.962693 containerd[1457]: time="2025-03-20T21:28:07.962522595Z" level=info msg="StartContainer for \"b11166d4b3241dc8080c90b96458520af6a5adb2e4f0f7790ca35db3e4a3cb0b\""
Mar 20 21:28:07.963982 containerd[1457]: time="2025-03-20T21:28:07.963950281Z" level=info msg="connecting to shim b11166d4b3241dc8080c90b96458520af6a5adb2e4f0f7790ca35db3e4a3cb0b" address="unix:///run/containerd/s/a5424d2bc036200b0348521cf47ed9b38fe088c5a20aa83c4a49f9204f24705a" protocol=ttrpc version=3
Mar 20 21:28:07.974754 containerd[1457]: time="2025-03-20T21:28:07.974716684Z" level=info msg="connecting to shim e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7" address="unix:///run/containerd/s/80fe3102d1af6010aa866fa0739f25c55866248deb0d8143b6367b392cd7e998" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:28:07.984053 systemd[1]: Started cri-containerd-b11166d4b3241dc8080c90b96458520af6a5adb2e4f0f7790ca35db3e4a3cb0b.scope - libcontainer container b11166d4b3241dc8080c90b96458520af6a5adb2e4f0f7790ca35db3e4a3cb0b.
Mar 20 21:28:07.998113 systemd[1]: Started cri-containerd-e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7.scope - libcontainer container e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7.
Mar 20 21:28:08.015614 containerd[1457]: time="2025-03-20T21:28:08.015529339Z" level=info msg="StartContainer for \"b11166d4b3241dc8080c90b96458520af6a5adb2e4f0f7790ca35db3e4a3cb0b\" returns successfully"
Mar 20 21:28:08.037633 containerd[1457]: time="2025-03-20T21:28:08.037585580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-r5gs5,Uid:d98adee7-8cf4-44ed-a509-afc1c63cd127,Namespace:kube-system,Attempt:0,} returns sandbox id \"e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7\""
Mar 20 21:28:08.038338 kubelet[2569]: E0320 21:28:08.038315 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:08.159780 kubelet[2569]: E0320 21:28:08.159738 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:08.196331 kubelet[2569]: E0320 21:28:08.196176 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:08.213171 kubelet[2569]: I0320 21:28:08.213108 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qdbkt" podStartSLOduration=2.213092328 podStartE2EDuration="2.213092328s" podCreationTimestamp="2025-03-20 21:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:28:08.170500896 +0000 UTC m=+8.120389538" watchObservedRunningTime="2025-03-20 21:28:08.213092328 +0000 UTC m=+8.162980970"
Mar 20 21:28:09.160316 kubelet[2569]: E0320 21:28:09.160274 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:09.777889 kubelet[2569]: E0320 21:28:09.777858 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:10.162802 kubelet[2569]: E0320 21:28:10.162707 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:12.138495 update_engine[1442]: I20250320 21:28:12.137940 1442 update_attempter.cc:509] Updating boot flags...
Mar 20 21:28:12.167915 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2954)
Mar 20 21:28:12.228934 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2957)
Mar 20 21:28:12.270967 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2957)
Mar 20 21:28:12.940222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2871625561.mount: Deactivated successfully.
Mar 20 21:28:14.219022 containerd[1457]: time="2025-03-20T21:28:14.218963735Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:28:14.220949 containerd[1457]: time="2025-03-20T21:28:14.220661013Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 20 21:28:14.222581 containerd[1457]: time="2025-03-20T21:28:14.222526235Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:28:14.224191 containerd[1457]: time="2025-03-20T21:28:14.224057931Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.278601324s"
Mar 20 21:28:14.224191 containerd[1457]: time="2025-03-20T21:28:14.224103817Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 20 21:28:14.228955 containerd[1457]: time="2025-03-20T21:28:14.228657537Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 20 21:28:14.234507 containerd[1457]: time="2025-03-20T21:28:14.234466353Z" level=info msg="CreateContainer within sandbox \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 20 21:28:14.263673 containerd[1457]: time="2025-03-20T21:28:14.263620129Z" level=info msg="Container 8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:28:14.266635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2885369004.mount: Deactivated successfully.
Mar 20 21:28:14.269785 containerd[1457]: time="2025-03-20T21:28:14.269749870Z" level=info msg="CreateContainer within sandbox \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\""
Mar 20 21:28:14.270836 containerd[1457]: time="2025-03-20T21:28:14.270534700Z" level=info msg="StartContainer for \"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\""
Mar 20 21:28:14.271322 containerd[1457]: time="2025-03-20T21:28:14.271294647Z" level=info msg="connecting to shim 8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619" address="unix:///run/containerd/s/090342cc713f6b6f73976c99c521aaca4f6102d13f746b6cba58f5bc9445aa65" protocol=ttrpc version=3
Mar 20 21:28:14.322100 systemd[1]: Started cri-containerd-8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619.scope - libcontainer container 8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619.
Mar 20 21:28:14.403190 systemd[1]: cri-containerd-8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619.scope: Deactivated successfully.
Mar 20 21:28:14.415862 containerd[1457]: time="2025-03-20T21:28:14.415696294Z" level=info msg="StartContainer for \"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\" returns successfully"
Mar 20 21:28:14.425441 kubelet[2569]: E0320 21:28:14.425358 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:14.447185 containerd[1457]: time="2025-03-20T21:28:14.442999810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\" id:\"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\" pid:3000 exited_at:{seconds:1742506094 nanos:435100580}"
Mar 20 21:28:14.447405 containerd[1457]: time="2025-03-20T21:28:14.447365343Z" level=info msg="received exit event container_id:\"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\" id:\"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\" pid:3000 exited_at:{seconds:1742506094 nanos:435100580}"
Mar 20 21:28:14.488385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619-rootfs.mount: Deactivated successfully.
Mar 20 21:28:15.209835 kubelet[2569]: E0320 21:28:15.209769 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:15.212529 containerd[1457]: time="2025-03-20T21:28:15.212490357Z" level=info msg="CreateContainer within sandbox \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 20 21:28:15.219744 containerd[1457]: time="2025-03-20T21:28:15.219700523Z" level=info msg="Container dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:28:15.224458 containerd[1457]: time="2025-03-20T21:28:15.224309060Z" level=info msg="CreateContainer within sandbox \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\""
Mar 20 21:28:15.226850 containerd[1457]: time="2025-03-20T21:28:15.226742665Z" level=info msg="StartContainer for \"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\""
Mar 20 21:28:15.227570 containerd[1457]: time="2025-03-20T21:28:15.227544653Z" level=info msg="connecting to shim dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb" address="unix:///run/containerd/s/090342cc713f6b6f73976c99c521aaca4f6102d13f746b6cba58f5bc9445aa65" protocol=ttrpc version=3
Mar 20 21:28:15.258084 systemd[1]: Started cri-containerd-dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb.scope - libcontainer container dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb.
Mar 20 21:28:15.284195 containerd[1457]: time="2025-03-20T21:28:15.284146551Z" level=info msg="StartContainer for \"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\" returns successfully"
Mar 20 21:28:15.302202 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 20 21:28:15.302422 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 20 21:28:15.302962 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 20 21:28:15.305172 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 21:28:15.306618 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 20 21:28:15.307032 systemd[1]: cri-containerd-dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb.scope: Deactivated successfully.
Mar 20 21:28:15.319018 containerd[1457]: time="2025-03-20T21:28:15.318765626Z" level=info msg="received exit event container_id:\"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\" id:\"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\" pid:3047 exited_at:{seconds:1742506095 nanos:318574920}"
Mar 20 21:28:15.319137 containerd[1457]: time="2025-03-20T21:28:15.319044983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\" id:\"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\" pid:3047 exited_at:{seconds:1742506095 nanos:318574920}"
Mar 20 21:28:15.335453 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 21:28:15.345113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb-rootfs.mount: Deactivated successfully.
Mar 20 21:28:15.587417 containerd[1457]: time="2025-03-20T21:28:15.587295097Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:28:15.588672 containerd[1457]: time="2025-03-20T21:28:15.588603833Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 20 21:28:15.589188 containerd[1457]: time="2025-03-20T21:28:15.589161667Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:28:15.590624 containerd[1457]: time="2025-03-20T21:28:15.590591939Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.361862232s"
Mar 20 21:28:15.590707 containerd[1457]: time="2025-03-20T21:28:15.590628464Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 20 21:28:15.592789 containerd[1457]: time="2025-03-20T21:28:15.592751468Z" level=info msg="CreateContainer within sandbox \"e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 20 21:28:15.612943 containerd[1457]: time="2025-03-20T21:28:15.612292444Z" level=info msg="Container 46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:28:15.618230 containerd[1457]: time="2025-03-20T21:28:15.618187073Z" level=info msg="CreateContainer within sandbox \"e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\""
Mar 20 21:28:15.618870 containerd[1457]: time="2025-03-20T21:28:15.618840801Z" level=info msg="StartContainer for \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\""
Mar 20 21:28:15.619712 containerd[1457]: time="2025-03-20T21:28:15.619687314Z" level=info msg="connecting to shim 46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9" address="unix:///run/containerd/s/80fe3102d1af6010aa866fa0739f25c55866248deb0d8143b6367b392cd7e998" protocol=ttrpc version=3
Mar 20 21:28:15.637148 systemd[1]: Started cri-containerd-46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9.scope - libcontainer container 46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9.
Mar 20 21:28:15.699668 containerd[1457]: time="2025-03-20T21:28:15.699589932Z" level=info msg="StartContainer for \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\" returns successfully"
Mar 20 21:28:16.214095 kubelet[2569]: E0320 21:28:16.214057 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:16.219629 kubelet[2569]: E0320 21:28:16.219599 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:16.221593 containerd[1457]: time="2025-03-20T21:28:16.221538766Z" level=info msg="CreateContainer within sandbox \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 20 21:28:16.243202 containerd[1457]: time="2025-03-20T21:28:16.235250797Z" level=info msg="Container c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:28:16.249012 kubelet[2569]: I0320 21:28:16.248757 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-r5gs5" podStartSLOduration=1.697209763 podStartE2EDuration="9.248739999s" podCreationTimestamp="2025-03-20 21:28:07 +0000 UTC" firstStartedPulling="2025-03-20 21:28:08.039796881 +0000 UTC m=+7.989685523" lastFinishedPulling="2025-03-20 21:28:15.591327117 +0000 UTC m=+15.541215759" observedRunningTime="2025-03-20 21:28:16.228398442 +0000 UTC m=+16.178287084" watchObservedRunningTime="2025-03-20 21:28:16.248739999 +0000 UTC m=+16.198628601"
Mar 20 21:28:16.252350 containerd[1457]: time="2025-03-20T21:28:16.252184879Z" level=info msg="CreateContainer within sandbox \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\""
Mar 20 21:28:16.253145 containerd[1457]: time="2025-03-20T21:28:16.253107157Z" level=info msg="StartContainer for \"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\""
Mar 20 21:28:16.255175 containerd[1457]: time="2025-03-20T21:28:16.254521057Z" level=info msg="connecting to shim c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91" address="unix:///run/containerd/s/090342cc713f6b6f73976c99c521aaca4f6102d13f746b6cba58f5bc9445aa65" protocol=ttrpc version=3
Mar 20 21:28:16.287202 systemd[1]: Started cri-containerd-c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91.scope - libcontainer container c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91.
Mar 20 21:28:16.356711 systemd[1]: cri-containerd-c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91.scope: Deactivated successfully.
Mar 20 21:28:16.358189 containerd[1457]: time="2025-03-20T21:28:16.358146049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\" id:\"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\" pid:3141 exited_at:{seconds:1742506096 nanos:357818248}"
Mar 20 21:28:16.383132 containerd[1457]: time="2025-03-20T21:28:16.383086194Z" level=info msg="received exit event container_id:\"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\" id:\"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\" pid:3141 exited_at:{seconds:1742506096 nanos:357818248}"
Mar 20 21:28:16.393300 containerd[1457]: time="2025-03-20T21:28:16.393206126Z" level=info msg="StartContainer for \"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\" returns successfully"
Mar 20 21:28:16.404123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91-rootfs.mount: Deactivated successfully.
Mar 20 21:28:17.224097 kubelet[2569]: E0320 21:28:17.224012 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:17.225015 kubelet[2569]: E0320 21:28:17.224825 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:17.226905 containerd[1457]: time="2025-03-20T21:28:17.226856562Z" level=info msg="CreateContainer within sandbox \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 20 21:28:17.242253 containerd[1457]: time="2025-03-20T21:28:17.242208513Z" level=info msg="Container f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:28:17.246630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1744657222.mount: Deactivated successfully.
Mar 20 21:28:17.248238 containerd[1457]: time="2025-03-20T21:28:17.248190122Z" level=info msg="CreateContainer within sandbox \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\""
Mar 20 21:28:17.248691 containerd[1457]: time="2025-03-20T21:28:17.248624975Z" level=info msg="StartContainer for \"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\""
Mar 20 21:28:17.249910 containerd[1457]: time="2025-03-20T21:28:17.249857445Z" level=info msg="connecting to shim f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9" address="unix:///run/containerd/s/090342cc713f6b6f73976c99c521aaca4f6102d13f746b6cba58f5bc9445aa65" protocol=ttrpc version=3
Mar 20 21:28:17.276101 systemd[1]: Started cri-containerd-f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9.scope - libcontainer container f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9.
Mar 20 21:28:17.296957 systemd[1]: cri-containerd-f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9.scope: Deactivated successfully.
Mar 20 21:28:17.297574 containerd[1457]: time="2025-03-20T21:28:17.297493011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\" id:\"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\" pid:3180 exited_at:{seconds:1742506097 nanos:297168852}"
Mar 20 21:28:17.299799 containerd[1457]: time="2025-03-20T21:28:17.299102087Z" level=info msg="received exit event container_id:\"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\" id:\"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\" pid:3180 exited_at:{seconds:1742506097 nanos:297168852}"
Mar 20 21:28:17.305370 containerd[1457]: time="2025-03-20T21:28:17.305336167Z" level=info msg="StartContainer for \"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\" returns successfully"
Mar 20 21:28:17.316188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9-rootfs.mount: Deactivated successfully.
Mar 20 21:28:18.229269 kubelet[2569]: E0320 21:28:18.229234 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:18.232065 containerd[1457]: time="2025-03-20T21:28:18.232013159Z" level=info msg="CreateContainer within sandbox \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 20 21:28:18.247340 containerd[1457]: time="2025-03-20T21:28:18.247288018Z" level=info msg="Container 78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:28:18.256807 containerd[1457]: time="2025-03-20T21:28:18.256715155Z" level=info msg="CreateContainer within sandbox \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\""
Mar 20 21:28:18.257506 containerd[1457]: time="2025-03-20T21:28:18.257475324Z" level=info msg="StartContainer for \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\""
Mar 20 21:28:18.259135 containerd[1457]: time="2025-03-20T21:28:18.259080551Z" level=info msg="connecting to shim 78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486" address="unix:///run/containerd/s/090342cc713f6b6f73976c99c521aaca4f6102d13f746b6cba58f5bc9445aa65" protocol=ttrpc version=3
Mar 20 21:28:18.279051 systemd[1]: Started cri-containerd-78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486.scope - libcontainer container 78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486.
Mar 20 21:28:18.310900 containerd[1457]: time="2025-03-20T21:28:18.310018362Z" level=info msg="StartContainer for \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\" returns successfully"
Mar 20 21:28:18.399530 containerd[1457]: time="2025-03-20T21:28:18.399476579Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\" id:\"4d81068cea1b2f8f5c739cfd0aff7238988d709a159f32f43cb08c73462d7bd7\" pid:3248 exited_at:{seconds:1742506098 nanos:394397507}"
Mar 20 21:28:18.420160 kubelet[2569]: I0320 21:28:18.420113 2569 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 20 21:28:18.451301 systemd[1]: Created slice kubepods-burstable-pod02ff3ab3_3b61_46a1_8e99_f9de6b5073ea.slice - libcontainer container kubepods-burstable-pod02ff3ab3_3b61_46a1_8e99_f9de6b5073ea.slice.
Mar 20 21:28:18.456882 systemd[1]: Created slice kubepods-burstable-podf784cc3e_7940_4569_b1e8_6e5b51aec563.slice - libcontainer container kubepods-burstable-podf784cc3e_7940_4569_b1e8_6e5b51aec563.slice.
Mar 20 21:28:18.624689 kubelet[2569]: I0320 21:28:18.624565 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f784cc3e-7940-4569-b1e8-6e5b51aec563-config-volume\") pod \"coredns-6f6b679f8f-j8pr4\" (UID: \"f784cc3e-7940-4569-b1e8-6e5b51aec563\") " pod="kube-system/coredns-6f6b679f8f-j8pr4"
Mar 20 21:28:18.624689 kubelet[2569]: I0320 21:28:18.624607 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8xrt\" (UniqueName: \"kubernetes.io/projected/f784cc3e-7940-4569-b1e8-6e5b51aec563-kube-api-access-c8xrt\") pod \"coredns-6f6b679f8f-j8pr4\" (UID: \"f784cc3e-7940-4569-b1e8-6e5b51aec563\") " pod="kube-system/coredns-6f6b679f8f-j8pr4"
Mar 20 21:28:18.624689 kubelet[2569]: I0320 21:28:18.624629 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02ff3ab3-3b61-46a1-8e99-f9de6b5073ea-config-volume\") pod \"coredns-6f6b679f8f-rc2m6\" (UID: \"02ff3ab3-3b61-46a1-8e99-f9de6b5073ea\") " pod="kube-system/coredns-6f6b679f8f-rc2m6"
Mar 20 21:28:18.624689 kubelet[2569]: I0320 21:28:18.624646 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg5zt\" (UniqueName: \"kubernetes.io/projected/02ff3ab3-3b61-46a1-8e99-f9de6b5073ea-kube-api-access-qg5zt\") pod \"coredns-6f6b679f8f-rc2m6\" (UID: \"02ff3ab3-3b61-46a1-8e99-f9de6b5073ea\") " pod="kube-system/coredns-6f6b679f8f-rc2m6"
Mar 20 21:28:18.755558 kubelet[2569]: E0320 21:28:18.755243 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:18.755965 containerd[1457]: time="2025-03-20T21:28:18.755914003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rc2m6,Uid:02ff3ab3-3b61-46a1-8e99-f9de6b5073ea,Namespace:kube-system,Attempt:0,}"
Mar 20 21:28:18.760673 kubelet[2569]: E0320 21:28:18.760005 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:18.761667 containerd[1457]: time="2025-03-20T21:28:18.761627269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j8pr4,Uid:f784cc3e-7940-4569-b1e8-6e5b51aec563,Namespace:kube-system,Attempt:0,}"
Mar 20 21:28:19.234761 kubelet[2569]: E0320 21:28:19.234727 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:19.248450 kubelet[2569]: I0320 21:28:19.248385 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nqt2t" podStartSLOduration=6.9647676910000005 podStartE2EDuration="13.248370192s" podCreationTimestamp="2025-03-20 21:28:06 +0000 UTC" firstStartedPulling="2025-03-20 21:28:07.944867529 +0000 UTC m=+7.894756171" lastFinishedPulling="2025-03-20 21:28:14.22846999 +0000 UTC m=+14.178358672" observedRunningTime="2025-03-20 21:28:19.247415286 +0000 UTC m=+19.197303928" watchObservedRunningTime="2025-03-20 21:28:19.248370192 +0000 UTC m=+19.198258834"
Mar 20 21:28:20.236376 kubelet[2569]: E0320 21:28:20.236315 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:20.499198 systemd-networkd[1401]: cilium_host: Link UP
Mar 20 21:28:20.499869 systemd-networkd[1401]: cilium_net: Link UP
Mar 20 21:28:20.500364 systemd-networkd[1401]: cilium_net: Gained carrier
Mar 20 21:28:20.500675 systemd-networkd[1401]: cilium_host: Gained carrier
Mar 20 21:28:20.577952 systemd-networkd[1401]: cilium_vxlan: Link UP
Mar 20 21:28:20.577962 systemd-networkd[1401]: cilium_vxlan: Gained carrier
Mar 20 21:28:20.868927 kernel: NET: Registered PF_ALG protocol family
Mar 20 21:28:20.911033 systemd-networkd[1401]: cilium_net: Gained IPv6LL
Mar 20 21:28:21.237550 kubelet[2569]: E0320 21:28:21.237518 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:21.434646 systemd-networkd[1401]: lxc_health: Link UP
Mar 20 21:28:21.434932 systemd-networkd[1401]: lxc_health: Gained carrier
Mar 20 21:28:21.443759 systemd-networkd[1401]: cilium_host: Gained IPv6LL
Mar 20 21:28:21.874946 kernel: eth0: renamed from tmp883c7
Mar 20 21:28:21.892961 kernel: eth0: renamed from tmp21f7d
Mar 20 21:28:21.897325 systemd-networkd[1401]: lxc82aaf95873a3: Link UP
Mar 20 21:28:21.897583 systemd-networkd[1401]: lxce8a23711bdc9: Link UP
Mar 20 21:28:21.897800 systemd-networkd[1401]: lxc82aaf95873a3: Gained carrier
Mar 20 21:28:21.897969 systemd-networkd[1401]: lxce8a23711bdc9: Gained carrier
Mar 20 21:28:22.207034 systemd-networkd[1401]: cilium_vxlan: Gained IPv6LL
Mar 20 21:28:22.239232 kubelet[2569]: E0320 21:28:22.239196 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:23.039044 systemd-networkd[1401]: lxc_health: Gained IPv6LL
Mar 20 21:28:23.104001 systemd-networkd[1401]: lxc82aaf95873a3: Gained IPv6LL
Mar 20 21:28:23.171092 systemd-networkd[1401]: lxce8a23711bdc9: Gained IPv6LL
Mar 20 21:28:25.466506 containerd[1457]: time="2025-03-20T21:28:25.465997050Z" level=info msg="connecting to shim 21f7d3a5387780c7301a2e0130c841067090980cc79200813eb09937c7fd31c7" address="unix:///run/containerd/s/b22c90fad4667de1c3fb614446b1060cad5798413a8c91f0610fa8f15673054b" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:28:25.466997 containerd[1457]: time="2025-03-20T21:28:25.466968935Z" level=info msg="connecting to shim 883c725bbe8c40955b4ad5187ca46f1138e416402b7603d861811788fe1dedf3" address="unix:///run/containerd/s/bad10a63ec72680bf21602c687b44bc6436afa5707677c59b6635ee94b849d92" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:28:25.491079 systemd[1]: Started cri-containerd-21f7d3a5387780c7301a2e0130c841067090980cc79200813eb09937c7fd31c7.scope - libcontainer container 21f7d3a5387780c7301a2e0130c841067090980cc79200813eb09937c7fd31c7.
Mar 20 21:28:25.496148 systemd[1]: Started cri-containerd-883c725bbe8c40955b4ad5187ca46f1138e416402b7603d861811788fe1dedf3.scope - libcontainer container 883c725bbe8c40955b4ad5187ca46f1138e416402b7603d861811788fe1dedf3.
Mar 20 21:28:25.502931 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 20 21:28:25.506364 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 20 21:28:25.523318 containerd[1457]: time="2025-03-20T21:28:25.523278580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j8pr4,Uid:f784cc3e-7940-4569-b1e8-6e5b51aec563,Namespace:kube-system,Attempt:0,} returns sandbox id \"21f7d3a5387780c7301a2e0130c841067090980cc79200813eb09937c7fd31c7\""
Mar 20 21:28:25.524362 kubelet[2569]: E0320 21:28:25.524079 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:25.526674 containerd[1457]: time="2025-03-20T21:28:25.526282441Z" level=info msg="CreateContainer within sandbox \"21f7d3a5387780c7301a2e0130c841067090980cc79200813eb09937c7fd31c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 20 21:28:25.532475 containerd[1457]: time="2025-03-20T21:28:25.532445736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rc2m6,Uid:02ff3ab3-3b61-46a1-8e99-f9de6b5073ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"883c725bbe8c40955b4ad5187ca46f1138e416402b7603d861811788fe1dedf3\""
Mar 20 21:28:25.533124 kubelet[2569]: E0320 21:28:25.533094 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:25.539806 containerd[1457]: time="2025-03-20T21:28:25.539768291Z" level=info msg="Container 8eac644ce3bdbe87244ac93e5186c9f41d176bfbb15c5efd515ffa1fbdb4718f: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:28:25.540628 containerd[1457]: time="2025-03-20T21:28:25.540580322Z" level=info msg="CreateContainer within sandbox \"883c725bbe8c40955b4ad5187ca46f1138e416402b7603d861811788fe1dedf3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 20 21:28:25.545794 containerd[1457]: time="2025-03-20T21:28:25.545758331Z" level=info msg="CreateContainer within sandbox \"21f7d3a5387780c7301a2e0130c841067090980cc79200813eb09937c7fd31c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8eac644ce3bdbe87244ac93e5186c9f41d176bfbb15c5efd515ffa1fbdb4718f\""
Mar 20 21:28:25.546274 containerd[1457]: time="2025-03-20T21:28:25.546155765Z" level=info msg="StartContainer for \"8eac644ce3bdbe87244ac93e5186c9f41d176bfbb15c5efd515ffa1fbdb4718f\""
Mar 20 21:28:25.547344 containerd[1457]: time="2025-03-20T21:28:25.547317986Z" level=info msg="connecting to shim 8eac644ce3bdbe87244ac93e5186c9f41d176bfbb15c5efd515ffa1fbdb4718f" address="unix:///run/containerd/s/b22c90fad4667de1c3fb614446b1060cad5798413a8c91f0610fa8f15673054b" protocol=ttrpc version=3
Mar 20 21:28:25.550134 containerd[1457]: time="2025-03-20T21:28:25.550105228Z" level=info msg="Container a5bf680a1132e54cf3f500d334678cb84da2f145f0599a302d0b4cdbe0045c3c: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:28:25.555105 containerd[1457]:
time="2025-03-20T21:28:25.555067859Z" level=info msg="CreateContainer within sandbox \"883c725bbe8c40955b4ad5187ca46f1138e416402b7603d861811788fe1dedf3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5bf680a1132e54cf3f500d334678cb84da2f145f0599a302d0b4cdbe0045c3c\"" Mar 20 21:28:25.556954 containerd[1457]: time="2025-03-20T21:28:25.556592471Z" level=info msg="StartContainer for \"a5bf680a1132e54cf3f500d334678cb84da2f145f0599a302d0b4cdbe0045c3c\"" Mar 20 21:28:25.557440 containerd[1457]: time="2025-03-20T21:28:25.557411902Z" level=info msg="connecting to shim a5bf680a1132e54cf3f500d334678cb84da2f145f0599a302d0b4cdbe0045c3c" address="unix:///run/containerd/s/bad10a63ec72680bf21602c687b44bc6436afa5707677c59b6635ee94b849d92" protocol=ttrpc version=3 Mar 20 21:28:25.568067 systemd[1]: Started cri-containerd-8eac644ce3bdbe87244ac93e5186c9f41d176bfbb15c5efd515ffa1fbdb4718f.scope - libcontainer container 8eac644ce3bdbe87244ac93e5186c9f41d176bfbb15c5efd515ffa1fbdb4718f. Mar 20 21:28:25.570818 systemd[1]: Started cri-containerd-a5bf680a1132e54cf3f500d334678cb84da2f145f0599a302d0b4cdbe0045c3c.scope - libcontainer container a5bf680a1132e54cf3f500d334678cb84da2f145f0599a302d0b4cdbe0045c3c. 
Mar 20 21:28:25.603838 containerd[1457]: time="2025-03-20T21:28:25.603787766Z" level=info msg="StartContainer for \"8eac644ce3bdbe87244ac93e5186c9f41d176bfbb15c5efd515ffa1fbdb4718f\" returns successfully" Mar 20 21:28:25.607874 containerd[1457]: time="2025-03-20T21:28:25.604955947Z" level=info msg="StartContainer for \"a5bf680a1132e54cf3f500d334678cb84da2f145f0599a302d0b4cdbe0045c3c\" returns successfully" Mar 20 21:28:26.253564 kubelet[2569]: E0320 21:28:26.253467 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:26.256954 kubelet[2569]: E0320 21:28:26.256592 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:26.267069 kubelet[2569]: I0320 21:28:26.266812 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rc2m6" podStartSLOduration=19.266792187 podStartE2EDuration="19.266792187s" podCreationTimestamp="2025-03-20 21:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:28:26.265646331 +0000 UTC m=+26.215535013" watchObservedRunningTime="2025-03-20 21:28:26.266792187 +0000 UTC m=+26.216680829" Mar 20 21:28:26.292068 kubelet[2569]: I0320 21:28:26.291994 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-j8pr4" podStartSLOduration=19.29197557 podStartE2EDuration="19.29197557s" podCreationTimestamp="2025-03-20 21:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:28:26.288434954 +0000 UTC m=+26.238323596" watchObservedRunningTime="2025-03-20 21:28:26.29197557 +0000 UTC 
m=+26.241864212" Mar 20 21:28:26.440809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1377982701.mount: Deactivated successfully. Mar 20 21:28:27.250950 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:44034.service - OpenSSH per-connection server daemon (10.0.0.1:44034). Mar 20 21:28:27.258197 kubelet[2569]: E0320 21:28:27.258108 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:27.258197 kubelet[2569]: E0320 21:28:27.258187 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:27.306059 sshd[3904]: Accepted publickey for core from 10.0.0.1 port 44034 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:28:27.307533 sshd-session[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:27.312253 systemd-logind[1440]: New session 8 of user core. Mar 20 21:28:27.325103 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 20 21:28:27.448311 sshd[3906]: Connection closed by 10.0.0.1 port 44034 Mar 20 21:28:27.448645 sshd-session[3904]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:27.452399 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:44034.service: Deactivated successfully. Mar 20 21:28:27.455154 systemd[1]: session-8.scope: Deactivated successfully. Mar 20 21:28:27.456124 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. Mar 20 21:28:27.457016 systemd-logind[1440]: Removed session 8. 
Mar 20 21:28:28.259735 kubelet[2569]: E0320 21:28:28.259550 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:28.259735 kubelet[2569]: E0320 21:28:28.259733 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:32.464406 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:48182.service - OpenSSH per-connection server daemon (10.0.0.1:48182). Mar 20 21:28:32.517560 sshd[3920]: Accepted publickey for core from 10.0.0.1 port 48182 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:28:32.518782 sshd-session[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:32.522520 systemd-logind[1440]: New session 9 of user core. Mar 20 21:28:32.533062 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 20 21:28:32.644940 sshd[3922]: Connection closed by 10.0.0.1 port 48182 Mar 20 21:28:32.645184 sshd-session[3920]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:32.648388 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:48182.service: Deactivated successfully. Mar 20 21:28:32.650769 systemd[1]: session-9.scope: Deactivated successfully. Mar 20 21:28:32.652425 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. Mar 20 21:28:32.653220 systemd-logind[1440]: Removed session 9. 
Mar 20 21:28:37.042103 kubelet[2569]: I0320 21:28:37.042052 2569 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:28:37.042610 kubelet[2569]: E0320 21:28:37.042513 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:37.274732 kubelet[2569]: E0320 21:28:37.274700 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:37.657110 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:48184.service - OpenSSH per-connection server daemon (10.0.0.1:48184). Mar 20 21:28:37.714573 sshd[3937]: Accepted publickey for core from 10.0.0.1 port 48184 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:28:37.715718 sshd-session[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:37.719917 systemd-logind[1440]: New session 10 of user core. Mar 20 21:28:37.731051 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 20 21:28:37.846180 sshd[3939]: Connection closed by 10.0.0.1 port 48184 Mar 20 21:28:37.846968 sshd-session[3937]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:37.859194 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:48184.service: Deactivated successfully. Mar 20 21:28:37.860619 systemd[1]: session-10.scope: Deactivated successfully. Mar 20 21:28:37.861262 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. Mar 20 21:28:37.863012 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:48198.service - OpenSSH per-connection server daemon (10.0.0.1:48198). Mar 20 21:28:37.863714 systemd-logind[1440]: Removed session 10. 
Mar 20 21:28:37.916422 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 48198 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:28:37.917868 sshd-session[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:37.922228 systemd-logind[1440]: New session 11 of user core. Mar 20 21:28:37.933043 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 20 21:28:38.085536 sshd[3957]: Connection closed by 10.0.0.1 port 48198 Mar 20 21:28:38.086122 sshd-session[3953]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:38.097266 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:48198.service: Deactivated successfully. Mar 20 21:28:38.099087 systemd[1]: session-11.scope: Deactivated successfully. Mar 20 21:28:38.100272 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. Mar 20 21:28:38.101638 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:48204.service - OpenSSH per-connection server daemon (10.0.0.1:48204). Mar 20 21:28:38.102393 systemd-logind[1440]: Removed session 11. Mar 20 21:28:38.160695 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 48204 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:28:38.162017 sshd-session[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:38.168247 systemd-logind[1440]: New session 12 of user core. Mar 20 21:28:38.180041 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 20 21:28:38.292530 sshd[3972]: Connection closed by 10.0.0.1 port 48204 Mar 20 21:28:38.292996 sshd-session[3969]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:38.296100 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:48204.service: Deactivated successfully. Mar 20 21:28:38.299377 systemd[1]: session-12.scope: Deactivated successfully. Mar 20 21:28:38.300757 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. 
Mar 20 21:28:38.301689 systemd-logind[1440]: Removed session 12. Mar 20 21:28:43.311180 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:49902.service - OpenSSH per-connection server daemon (10.0.0.1:49902). Mar 20 21:28:43.358370 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 49902 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:28:43.359629 sshd-session[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:43.363690 systemd-logind[1440]: New session 13 of user core. Mar 20 21:28:43.373095 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 20 21:28:43.480335 sshd[3989]: Connection closed by 10.0.0.1 port 49902 Mar 20 21:28:43.480856 sshd-session[3987]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:43.484213 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:49902.service: Deactivated successfully. Mar 20 21:28:43.485683 systemd[1]: session-13.scope: Deactivated successfully. Mar 20 21:28:43.487370 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. Mar 20 21:28:43.488233 systemd-logind[1440]: Removed session 13. Mar 20 21:28:48.492009 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:49914.service - OpenSSH per-connection server daemon (10.0.0.1:49914). Mar 20 21:28:48.546403 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 49914 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:28:48.547638 sshd-session[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:48.551859 systemd-logind[1440]: New session 14 of user core. Mar 20 21:28:48.561038 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 20 21:28:48.673347 sshd[4005]: Connection closed by 10.0.0.1 port 49914 Mar 20 21:28:48.673866 sshd-session[4003]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:48.686347 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:49914.service: Deactivated successfully. Mar 20 21:28:48.688180 systemd[1]: session-14.scope: Deactivated successfully. Mar 20 21:28:48.691327 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:49916.service - OpenSSH per-connection server daemon (10.0.0.1:49916). Mar 20 21:28:48.691570 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. Mar 20 21:28:48.694237 systemd-logind[1440]: Removed session 14. Mar 20 21:28:48.741455 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 49916 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:28:48.742644 sshd-session[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:48.746790 systemd-logind[1440]: New session 15 of user core. Mar 20 21:28:48.761195 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 20 21:28:48.965709 sshd[4021]: Connection closed by 10.0.0.1 port 49916 Mar 20 21:28:48.967005 sshd-session[4018]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:48.979313 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:49916.service: Deactivated successfully. Mar 20 21:28:48.980770 systemd[1]: session-15.scope: Deactivated successfully. Mar 20 21:28:48.982211 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. Mar 20 21:28:48.983631 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:49926.service - OpenSSH per-connection server daemon (10.0.0.1:49926). Mar 20 21:28:48.985382 systemd-logind[1440]: Removed session 15. 
Mar 20 21:28:49.044903 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 49926 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:28:49.046146 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:49.051586 systemd-logind[1440]: New session 16 of user core. Mar 20 21:28:49.059039 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 20 21:28:50.359984 sshd[4034]: Connection closed by 10.0.0.1 port 49926 Mar 20 21:28:50.361527 sshd-session[4031]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:50.372743 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:49926.service: Deactivated successfully. Mar 20 21:28:50.376449 systemd[1]: session-16.scope: Deactivated successfully. Mar 20 21:28:50.377759 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. Mar 20 21:28:50.384135 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:49942.service - OpenSSH per-connection server daemon (10.0.0.1:49942). Mar 20 21:28:50.385324 systemd-logind[1440]: Removed session 16. Mar 20 21:28:50.440312 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 49942 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:28:50.441876 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:50.446520 systemd-logind[1440]: New session 17 of user core. Mar 20 21:28:50.455096 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 20 21:28:50.675989 sshd[4060]: Connection closed by 10.0.0.1 port 49942 Mar 20 21:28:50.676544 sshd-session[4057]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:50.690325 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:49942.service: Deactivated successfully. Mar 20 21:28:50.692415 systemd[1]: session-17.scope: Deactivated successfully. Mar 20 21:28:50.693241 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. 
Mar 20 21:28:50.695328 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:49956.service - OpenSSH per-connection server daemon (10.0.0.1:49956). Mar 20 21:28:50.696385 systemd-logind[1440]: Removed session 17. Mar 20 21:28:50.754026 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 49956 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:28:50.755410 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:50.759611 systemd-logind[1440]: New session 18 of user core. Mar 20 21:28:50.767074 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 20 21:28:50.885633 sshd[4074]: Connection closed by 10.0.0.1 port 49956 Mar 20 21:28:50.886374 sshd-session[4071]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:50.889776 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:49956.service: Deactivated successfully. Mar 20 21:28:50.892192 systemd[1]: session-18.scope: Deactivated successfully. Mar 20 21:28:50.893004 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit. Mar 20 21:28:50.893995 systemd-logind[1440]: Removed session 18. Mar 20 21:28:55.895125 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:46352.service - OpenSSH per-connection server daemon (10.0.0.1:46352). Mar 20 21:28:55.943824 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 46352 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:28:55.944977 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:55.949624 systemd-logind[1440]: New session 19 of user core. Mar 20 21:28:55.959029 systemd[1]: Started session-19.scope - Session 19 of User core. 
Mar 20 21:28:56.060689 sshd[4093]: Connection closed by 10.0.0.1 port 46352 Mar 20 21:28:56.061047 sshd-session[4091]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:56.064853 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:46352.service: Deactivated successfully. Mar 20 21:28:56.066586 systemd[1]: session-19.scope: Deactivated successfully. Mar 20 21:28:56.067249 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit. Mar 20 21:28:56.068155 systemd-logind[1440]: Removed session 19. Mar 20 21:29:01.073262 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:46368.service - OpenSSH per-connection server daemon (10.0.0.1:46368). Mar 20 21:29:01.133157 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 46368 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:29:01.134448 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:29:01.138716 systemd-logind[1440]: New session 20 of user core. Mar 20 21:29:01.142021 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 20 21:29:01.245944 sshd[4110]: Connection closed by 10.0.0.1 port 46368 Mar 20 21:29:01.246129 sshd-session[4108]: pam_unix(sshd:session): session closed for user core Mar 20 21:29:01.248638 systemd[1]: session-20.scope: Deactivated successfully. Mar 20 21:29:01.249883 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:46368.service: Deactivated successfully. Mar 20 21:29:01.253732 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit. Mar 20 21:29:01.254487 systemd-logind[1440]: Removed session 20. Mar 20 21:29:06.257344 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:49232.service - OpenSSH per-connection server daemon (10.0.0.1:49232). 
Mar 20 21:29:06.309171 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 49232 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:29:06.310459 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:29:06.316441 systemd-logind[1440]: New session 21 of user core. Mar 20 21:29:06.326080 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 20 21:29:06.432617 sshd[4125]: Connection closed by 10.0.0.1 port 49232 Mar 20 21:29:06.433138 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Mar 20 21:29:06.447178 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:49232.service: Deactivated successfully. Mar 20 21:29:06.448713 systemd[1]: session-21.scope: Deactivated successfully. Mar 20 21:29:06.450884 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit. Mar 20 21:29:06.452964 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:49234.service - OpenSSH per-connection server daemon (10.0.0.1:49234). Mar 20 21:29:06.454151 systemd-logind[1440]: Removed session 21. Mar 20 21:29:06.501787 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 49234 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:29:06.502882 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:29:06.507085 systemd-logind[1440]: New session 22 of user core. Mar 20 21:29:06.523034 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 20 21:29:08.819549 containerd[1457]: time="2025-03-20T21:29:08.819448181Z" level=info msg="StopContainer for \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\" with timeout 30 (s)" Mar 20 21:29:08.830825 containerd[1457]: time="2025-03-20T21:29:08.830071630Z" level=info msg="Stop container \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\" with signal terminated" Mar 20 21:29:08.859662 systemd[1]: cri-containerd-46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9.scope: Deactivated successfully. Mar 20 21:29:08.862837 containerd[1457]: time="2025-03-20T21:29:08.862790418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\" id:\"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\" pid:3106 exited_at:{seconds:1742506148 nanos:861438697}" Mar 20 21:29:08.863360 containerd[1457]: time="2025-03-20T21:29:08.863329259Z" level=info msg="received exit event container_id:\"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\" id:\"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\" pid:3106 exited_at:{seconds:1742506148 nanos:861438697}" Mar 20 21:29:08.881990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9-rootfs.mount: Deactivated successfully. 
Mar 20 21:29:08.888307 containerd[1457]: time="2025-03-20T21:29:08.888197680Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\" id:\"1cbba57b0afb4d55a001256cfa7ed6e526fa9c5f545f263cdb61c0c64ed4d4f8\" pid:4171 exited_at:{seconds:1742506148 nanos:887926520}" Mar 20 21:29:08.889827 containerd[1457]: time="2025-03-20T21:29:08.889728961Z" level=info msg="StopContainer for \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\" with timeout 2 (s)" Mar 20 21:29:08.890076 containerd[1457]: time="2025-03-20T21:29:08.890053561Z" level=info msg="Stop container \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\" with signal terminated" Mar 20 21:29:08.892670 containerd[1457]: time="2025-03-20T21:29:08.892620924Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 21:29:08.897870 systemd-networkd[1401]: lxc_health: Link DOWN Mar 20 21:29:08.897875 systemd-networkd[1401]: lxc_health: Lost carrier Mar 20 21:29:08.900195 containerd[1457]: time="2025-03-20T21:29:08.900040130Z" level=info msg="StopContainer for \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\" returns successfully" Mar 20 21:29:08.900996 containerd[1457]: time="2025-03-20T21:29:08.900720051Z" level=info msg="StopPodSandbox for \"e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7\"" Mar 20 21:29:08.900996 containerd[1457]: time="2025-03-20T21:29:08.900782811Z" level=info msg="Container to stop \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:29:08.907627 systemd[1]: cri-containerd-e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7.scope: 
Deactivated successfully. Mar 20 21:29:08.913523 systemd[1]: cri-containerd-78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486.scope: Deactivated successfully. Mar 20 21:29:08.916421 containerd[1457]: time="2025-03-20T21:29:08.916248224Z" level=info msg="received exit event container_id:\"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\" id:\"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\" pid:3216 exited_at:{seconds:1742506148 nanos:913823982}" Mar 20 21:29:08.916421 containerd[1457]: time="2025-03-20T21:29:08.916407824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\" id:\"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\" pid:3216 exited_at:{seconds:1742506148 nanos:913823982}" Mar 20 21:29:08.913866 systemd[1]: cri-containerd-78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486.scope: Consumed 6.398s CPU time, 123.2M memory peak, 152K read from disk, 12.9M written to disk. Mar 20 21:29:08.917743 containerd[1457]: time="2025-03-20T21:29:08.916957704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7\" id:\"e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7\" pid:2787 exit_status:137 exited_at:{seconds:1742506148 nanos:916535664}" Mar 20 21:29:08.941215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486-rootfs.mount: Deactivated successfully. Mar 20 21:29:08.946157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7-rootfs.mount: Deactivated successfully. 
Mar 20 21:29:08.948619 containerd[1457]: time="2025-03-20T21:29:08.948586091Z" level=info msg="shim disconnected" id=e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7 namespace=k8s.io Mar 20 21:29:08.948945 containerd[1457]: time="2025-03-20T21:29:08.948676731Z" level=warning msg="cleaning up after shim disconnected" id=e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7 namespace=k8s.io Mar 20 21:29:08.948945 containerd[1457]: time="2025-03-20T21:29:08.948710491Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 20 21:29:08.951623 containerd[1457]: time="2025-03-20T21:29:08.951506134Z" level=info msg="StopContainer for \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\" returns successfully" Mar 20 21:29:08.952299 containerd[1457]: time="2025-03-20T21:29:08.952022694Z" level=info msg="StopPodSandbox for \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\"" Mar 20 21:29:08.952299 containerd[1457]: time="2025-03-20T21:29:08.952074294Z" level=info msg="Container to stop \"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:29:08.952299 containerd[1457]: time="2025-03-20T21:29:08.952084414Z" level=info msg="Container to stop \"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:29:08.952299 containerd[1457]: time="2025-03-20T21:29:08.952093574Z" level=info msg="Container to stop \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:29:08.952299 containerd[1457]: time="2025-03-20T21:29:08.952101894Z" level=info msg="Container to stop \"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:29:08.952299 
containerd[1457]: time="2025-03-20T21:29:08.952109454Z" level=info msg="Container to stop \"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:29:08.958543 systemd[1]: cri-containerd-fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00.scope: Deactivated successfully. Mar 20 21:29:08.966641 containerd[1457]: time="2025-03-20T21:29:08.966588547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" id:\"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" pid:2716 exit_status:137 exited_at:{seconds:1742506148 nanos:966168386}" Mar 20 21:29:08.968192 containerd[1457]: time="2025-03-20T21:29:08.967106747Z" level=info msg="received exit event sandbox_id:\"e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7\" exit_status:137 exited_at:{seconds:1742506148 nanos:916535664}" Mar 20 21:29:08.967539 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7-shm.mount: Deactivated successfully. Mar 20 21:29:08.972315 containerd[1457]: time="2025-03-20T21:29:08.972188831Z" level=info msg="TearDown network for sandbox \"e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7\" successfully" Mar 20 21:29:08.972315 containerd[1457]: time="2025-03-20T21:29:08.972211391Z" level=info msg="StopPodSandbox for \"e91d36227d3e4627729f2181aa1f99641ced13c7d752fde06267be332e6386d7\" returns successfully" Mar 20 21:29:09.001043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00-rootfs.mount: Deactivated successfully. 
Mar 20 21:29:09.006430 containerd[1457]: time="2025-03-20T21:29:09.006363745Z" level=info msg="shim disconnected" id=fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00 namespace=k8s.io
Mar 20 21:29:09.006430 containerd[1457]: time="2025-03-20T21:29:09.006398906Z" level=warning msg="cleaning up after shim disconnected" id=fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00 namespace=k8s.io
Mar 20 21:29:09.006430 containerd[1457]: time="2025-03-20T21:29:09.006428786Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 20 21:29:09.015495 containerd[1457]: time="2025-03-20T21:29:09.015374042Z" level=info msg="received exit event sandbox_id:\"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" exit_status:137 exited_at:{seconds:1742506148 nanos:966168386}"
Mar 20 21:29:09.016028 containerd[1457]: time="2025-03-20T21:29:09.015567002Z" level=info msg="TearDown network for sandbox \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" successfully"
Mar 20 21:29:09.016028 containerd[1457]: time="2025-03-20T21:29:09.015669722Z" level=info msg="StopPodSandbox for \"fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00\" returns successfully"
Mar 20 21:29:09.113370 kubelet[2569]: I0320 21:29:09.113248 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d98adee7-8cf4-44ed-a509-afc1c63cd127-cilium-config-path\") pod \"d98adee7-8cf4-44ed-a509-afc1c63cd127\" (UID: \"d98adee7-8cf4-44ed-a509-afc1c63cd127\") "
Mar 20 21:29:09.113370 kubelet[2569]: I0320 21:29:09.113308 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4lch\" (UniqueName: \"kubernetes.io/projected/d98adee7-8cf4-44ed-a509-afc1c63cd127-kube-api-access-r4lch\") pod \"d98adee7-8cf4-44ed-a509-afc1c63cd127\" (UID: \"d98adee7-8cf4-44ed-a509-afc1c63cd127\") "
Mar 20 21:29:09.117210 kubelet[2569]: I0320 21:29:09.117165 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d98adee7-8cf4-44ed-a509-afc1c63cd127-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d98adee7-8cf4-44ed-a509-afc1c63cd127" (UID: "d98adee7-8cf4-44ed-a509-afc1c63cd127"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 21:29:09.121024 kubelet[2569]: I0320 21:29:09.120988 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d98adee7-8cf4-44ed-a509-afc1c63cd127-kube-api-access-r4lch" (OuterVolumeSpecName: "kube-api-access-r4lch") pod "d98adee7-8cf4-44ed-a509-afc1c63cd127" (UID: "d98adee7-8cf4-44ed-a509-afc1c63cd127"). InnerVolumeSpecName "kube-api-access-r4lch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 21:29:09.213997 kubelet[2569]: I0320 21:29:09.213958 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-hostproc\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.213997 kubelet[2569]: I0320 21:29:09.213993 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-cni-path\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.213997 kubelet[2569]: I0320 21:29:09.214009 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-etc-cni-netd\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.214181 kubelet[2569]: I0320 21:29:09.214029 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-cilium-cgroup\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.214181 kubelet[2569]: I0320 21:29:09.214045 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-bpf-maps\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.214181 kubelet[2569]: I0320 21:29:09.214061 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-lib-modules\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.214181 kubelet[2569]: I0320 21:29:09.214085 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c30e1406-3266-4873-a4b9-0ea9be09a470-hubble-tls\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.214181 kubelet[2569]: I0320 21:29:09.214099 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-host-proc-sys-net\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.214181 kubelet[2569]: I0320 21:29:09.214118 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-xtables-lock\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.214311 kubelet[2569]: I0320 21:29:09.214132 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-cilium-run\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.214311 kubelet[2569]: I0320 21:29:09.214149 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c30e1406-3266-4873-a4b9-0ea9be09a470-cilium-config-path\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.214311 kubelet[2569]: I0320 21:29:09.214170 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c30e1406-3266-4873-a4b9-0ea9be09a470-clustermesh-secrets\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.214311 kubelet[2569]: I0320 21:29:09.214187 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncsck\" (UniqueName: \"kubernetes.io/projected/c30e1406-3266-4873-a4b9-0ea9be09a470-kube-api-access-ncsck\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.214311 kubelet[2569]: I0320 21:29:09.214202 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-host-proc-sys-kernel\") pod \"c30e1406-3266-4873-a4b9-0ea9be09a470\" (UID: \"c30e1406-3266-4873-a4b9-0ea9be09a470\") "
Mar 20 21:29:09.214311 kubelet[2569]: I0320 21:29:09.214233 2569 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d98adee7-8cf4-44ed-a509-afc1c63cd127-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.214435 kubelet[2569]: I0320 21:29:09.214243 2569 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r4lch\" (UniqueName: \"kubernetes.io/projected/d98adee7-8cf4-44ed-a509-afc1c63cd127-kube-api-access-r4lch\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.214435 kubelet[2569]: I0320 21:29:09.214299 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 21:29:09.214435 kubelet[2569]: I0320 21:29:09.214330 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-hostproc" (OuterVolumeSpecName: "hostproc") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 21:29:09.214435 kubelet[2569]: I0320 21:29:09.214344 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-cni-path" (OuterVolumeSpecName: "cni-path") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 21:29:09.214435 kubelet[2569]: I0320 21:29:09.214357 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 21:29:09.214663 kubelet[2569]: I0320 21:29:09.214370 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 21:29:09.214663 kubelet[2569]: I0320 21:29:09.214384 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 21:29:09.214663 kubelet[2569]: I0320 21:29:09.214396 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 21:29:09.214663 kubelet[2569]: I0320 21:29:09.214620 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 21:29:09.214855 kubelet[2569]: I0320 21:29:09.214669 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 21:29:09.214855 kubelet[2569]: I0320 21:29:09.214689 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 21:29:09.216872 kubelet[2569]: I0320 21:29:09.216836 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c30e1406-3266-4873-a4b9-0ea9be09a470-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 21:29:09.216949 kubelet[2569]: I0320 21:29:09.216888 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c30e1406-3266-4873-a4b9-0ea9be09a470-kube-api-access-ncsck" (OuterVolumeSpecName: "kube-api-access-ncsck") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "kube-api-access-ncsck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 21:29:09.216949 kubelet[2569]: I0320 21:29:09.216930 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c30e1406-3266-4873-a4b9-0ea9be09a470-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 21:29:09.217096 kubelet[2569]: I0320 21:29:09.217070 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c30e1406-3266-4873-a4b9-0ea9be09a470-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c30e1406-3266-4873-a4b9-0ea9be09a470" (UID: "c30e1406-3266-4873-a4b9-0ea9be09a470"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 21:29:09.315327 kubelet[2569]: I0320 21:29:09.315290 2569 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c30e1406-3266-4873-a4b9-0ea9be09a470-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.315450 kubelet[2569]: I0320 21:29:09.315439 2569 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ncsck\" (UniqueName: \"kubernetes.io/projected/c30e1406-3266-4873-a4b9-0ea9be09a470-kube-api-access-ncsck\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.315537 kubelet[2569]: I0320 21:29:09.315526 2569 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.315594 kubelet[2569]: I0320 21:29:09.315584 2569 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.315657 kubelet[2569]: I0320 21:29:09.315646 2569 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.315713 kubelet[2569]: I0320 21:29:09.315703 2569 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.315768 kubelet[2569]: I0320 21:29:09.315758 2569 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.315826 kubelet[2569]: I0320 21:29:09.315816 2569 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.315882 kubelet[2569]: I0320 21:29:09.315872 2569 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.315981 kubelet[2569]: I0320 21:29:09.315969 2569 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c30e1406-3266-4873-a4b9-0ea9be09a470-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.316044 kubelet[2569]: I0320 21:29:09.316033 2569 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.316099 kubelet[2569]: I0320 21:29:09.316088 2569 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.316153 kubelet[2569]: I0320 21:29:09.316143 2569 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c30e1406-3266-4873-a4b9-0ea9be09a470-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.316208 kubelet[2569]: I0320 21:29:09.316199 2569 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c30e1406-3266-4873-a4b9-0ea9be09a470-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 20 21:29:09.332377 kubelet[2569]: I0320 21:29:09.332336 2569 scope.go:117] "RemoveContainer" containerID="46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9"
Mar 20 21:29:09.334208 containerd[1457]: time="2025-03-20T21:29:09.334012367Z" level=info msg="RemoveContainer for \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\""
Mar 20 21:29:09.337086 systemd[1]: Removed slice kubepods-besteffort-podd98adee7_8cf4_44ed_a509_afc1c63cd127.slice - libcontainer container kubepods-besteffort-podd98adee7_8cf4_44ed_a509_afc1c63cd127.slice.
Mar 20 21:29:09.344870 containerd[1457]: time="2025-03-20T21:29:09.344792507Z" level=info msg="RemoveContainer for \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\" returns successfully"
Mar 20 21:29:09.345346 kubelet[2569]: I0320 21:29:09.345230 2569 scope.go:117] "RemoveContainer" containerID="46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9"
Mar 20 21:29:09.345448 systemd[1]: Removed slice kubepods-burstable-podc30e1406_3266_4873_a4b9_0ea9be09a470.slice - libcontainer container kubepods-burstable-podc30e1406_3266_4873_a4b9_0ea9be09a470.slice.
Mar 20 21:29:09.345799 containerd[1457]: time="2025-03-20T21:29:09.345582948Z" level=error msg="ContainerStatus for \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\": not found"
Mar 20 21:29:09.345642 systemd[1]: kubepods-burstable-podc30e1406_3266_4873_a4b9_0ea9be09a470.slice: Consumed 6.536s CPU time, 123.6M memory peak, 172K read from disk, 12.9M written to disk.
Mar 20 21:29:09.355515 kubelet[2569]: E0320 21:29:09.355449 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\": not found" containerID="46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9"
Mar 20 21:29:09.355944 kubelet[2569]: I0320 21:29:09.355519 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9"} err="failed to get container status \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"46cad58c5fd7c548fd8ea3404ef2ff48398333577fe55db79cec36643d4e4cb9\": not found"
Mar 20 21:29:09.355944 kubelet[2569]: I0320 21:29:09.355604 2569 scope.go:117] "RemoveContainer" containerID="78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486"
Mar 20 21:29:09.358635 containerd[1457]: time="2025-03-20T21:29:09.358564411Z" level=info msg="RemoveContainer for \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\""
Mar 20 21:29:09.363253 containerd[1457]: time="2025-03-20T21:29:09.363214699Z" level=info msg="RemoveContainer for \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\" returns successfully"
Mar 20 21:29:09.363886 kubelet[2569]: I0320 21:29:09.363410 2569 scope.go:117] "RemoveContainer" containerID="f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9"
Mar 20 21:29:09.365858 containerd[1457]: time="2025-03-20T21:29:09.365829184Z" level=info msg="RemoveContainer for \"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\""
Mar 20 21:29:09.369465 containerd[1457]: time="2025-03-20T21:29:09.369431910Z" level=info msg="RemoveContainer for \"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\" returns successfully"
Mar 20 21:29:09.369645 kubelet[2569]: I0320 21:29:09.369624 2569 scope.go:117] "RemoveContainer" containerID="c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91"
Mar 20 21:29:09.371703 containerd[1457]: time="2025-03-20T21:29:09.371675674Z" level=info msg="RemoveContainer for \"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\""
Mar 20 21:29:09.374966 containerd[1457]: time="2025-03-20T21:29:09.374926320Z" level=info msg="RemoveContainer for \"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\" returns successfully"
Mar 20 21:29:09.375181 kubelet[2569]: I0320 21:29:09.375154 2569 scope.go:117] "RemoveContainer" containerID="dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb"
Mar 20 21:29:09.376591 containerd[1457]: time="2025-03-20T21:29:09.376558403Z" level=info msg="RemoveContainer for \"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\""
Mar 20 21:29:09.379750 containerd[1457]: time="2025-03-20T21:29:09.379330528Z" level=info msg="RemoveContainer for \"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\" returns successfully"
Mar 20 21:29:09.379812 kubelet[2569]: I0320 21:29:09.379469 2569 scope.go:117] "RemoveContainer" containerID="8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619"
Mar 20 21:29:09.382767 containerd[1457]: time="2025-03-20T21:29:09.382708814Z" level=info msg="RemoveContainer for \"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\""
Mar 20 21:29:09.385247 containerd[1457]: time="2025-03-20T21:29:09.385205818Z" level=info msg="RemoveContainer for \"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\" returns successfully"
Mar 20 21:29:09.385376 kubelet[2569]: I0320 21:29:09.385356 2569 scope.go:117] "RemoveContainer" containerID="78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486"
Mar 20 21:29:09.385673 containerd[1457]: time="2025-03-20T21:29:09.385642539Z" level=error msg="ContainerStatus for \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\": not found"
Mar 20 21:29:09.385817 kubelet[2569]: E0320 21:29:09.385797 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\": not found" containerID="78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486"
Mar 20 21:29:09.385843 kubelet[2569]: I0320 21:29:09.385825 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486"} err="failed to get container status \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\": rpc error: code = NotFound desc = an error occurred when try to find container \"78271655a94d61b70c30f7ab5cd1843b3da0a72a899b92901b8ef872882fa486\": not found"
Mar 20 21:29:09.385869 kubelet[2569]: I0320 21:29:09.385846 2569 scope.go:117] "RemoveContainer" containerID="f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9"
Mar 20 21:29:09.386053 containerd[1457]: time="2025-03-20T21:29:09.386024220Z" level=error msg="ContainerStatus for \"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\": not found"
Mar 20 21:29:09.386167 kubelet[2569]: E0320 21:29:09.386148 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\": not found" containerID="f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9"
Mar 20 21:29:09.386214 kubelet[2569]: I0320 21:29:09.386196 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9"} err="failed to get container status \"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1c2d3d0e9d502be41dbfc60577104f541ae13bf411c2cd527f96b787d20fed9\": not found"
Mar 20 21:29:09.386237 kubelet[2569]: I0320 21:29:09.386217 2569 scope.go:117] "RemoveContainer" containerID="c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91"
Mar 20 21:29:09.386525 containerd[1457]: time="2025-03-20T21:29:09.386486581Z" level=error msg="ContainerStatus for \"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\": not found"
Mar 20 21:29:09.386685 kubelet[2569]: E0320 21:29:09.386645 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\": not found" containerID="c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91"
Mar 20 21:29:09.386726 kubelet[2569]: I0320 21:29:09.386688 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91"} err="failed to get container status \"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\": rpc error: code = NotFound desc = an error occurred when try to find container \"c92f7b69fdafeeb2c09069b823ebb370e11668354950438588c81f640d544a91\": not found"
Mar 20 21:29:09.386726 kubelet[2569]: I0320 21:29:09.386706 2569 scope.go:117] "RemoveContainer" containerID="dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb"
Mar 20 21:29:09.386895 containerd[1457]: time="2025-03-20T21:29:09.386861061Z" level=error msg="ContainerStatus for \"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\": not found"
Mar 20 21:29:09.387024 kubelet[2569]: E0320 21:29:09.387008 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\": not found" containerID="dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb"
Mar 20 21:29:09.387055 kubelet[2569]: I0320 21:29:09.387029 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb"} err="failed to get container status \"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc96a28c4b412d8b978d55b21b204c7aa04a372afd4a57c0081f66c12ad5bdcb\": not found"
Mar 20 21:29:09.387055 kubelet[2569]: I0320 21:29:09.387043 2569 scope.go:117] "RemoveContainer" containerID="8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619"
Mar 20 21:29:09.387205 containerd[1457]: time="2025-03-20T21:29:09.387179462Z" level=error msg="ContainerStatus for \"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\": not found"
Mar 20 21:29:09.387302 kubelet[2569]: E0320 21:29:09.387285 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\": not found" containerID="8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619"
Mar 20 21:29:09.387327 kubelet[2569]: I0320 21:29:09.387312 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619"} err="failed to get container status \"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\": rpc error: code = NotFound desc = an error occurred when try to find container \"8caa8ba4086e860863a82e7ace06f703a50a5e024aa86c9cde4d653e15356619\": not found"
Mar 20 21:29:09.880981 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fec32abc883056fcf8787ed22cad5fbd34ba3baf9c1c6dd6ebc6c3ef058aee00-shm.mount: Deactivated successfully.
Mar 20 21:29:09.881086 systemd[1]: var-lib-kubelet-pods-d98adee7\x2d8cf4\x2d44ed\x2da509\x2dafc1c63cd127-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr4lch.mount: Deactivated successfully.
Mar 20 21:29:09.881143 systemd[1]: var-lib-kubelet-pods-c30e1406\x2d3266\x2d4873\x2da4b9\x2d0ea9be09a470-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dncsck.mount: Deactivated successfully.
Mar 20 21:29:09.881206 systemd[1]: var-lib-kubelet-pods-c30e1406\x2d3266\x2d4873\x2da4b9\x2d0ea9be09a470-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 20 21:29:09.881266 systemd[1]: var-lib-kubelet-pods-c30e1406\x2d3266\x2d4873\x2da4b9\x2d0ea9be09a470-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 20 21:29:10.149991 kubelet[2569]: I0320 21:29:10.149798 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c30e1406-3266-4873-a4b9-0ea9be09a470" path="/var/lib/kubelet/pods/c30e1406-3266-4873-a4b9-0ea9be09a470/volumes"
Mar 20 21:29:10.150438 kubelet[2569]: I0320 21:29:10.150396 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d98adee7-8cf4-44ed-a509-afc1c63cd127" path="/var/lib/kubelet/pods/d98adee7-8cf4-44ed-a509-afc1c63cd127/volumes"
Mar 20 21:29:10.183134 kubelet[2569]: E0320 21:29:10.183066 2569 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 20 21:29:10.783236 sshd[4140]: Connection closed by 10.0.0.1 port 49234
Mar 20 21:29:10.783617 sshd-session[4137]: pam_unix(sshd:session): session closed for user core
Mar 20 21:29:10.795211 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:49234.service: Deactivated successfully.
Mar 20 21:29:10.797055 systemd[1]: session-22.scope: Deactivated successfully.
Mar 20 21:29:10.797267 systemd[1]: session-22.scope: Consumed 1.641s CPU time, 27M memory peak.
Mar 20 21:29:10.798444 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit.
Mar 20 21:29:10.799734 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:49244.service - OpenSSH per-connection server daemon (10.0.0.1:49244).
Mar 20 21:29:10.801552 systemd-logind[1440]: Removed session 22.
Mar 20 21:29:10.849650 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 49244 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y
Mar 20 21:29:10.850854 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:29:10.855330 systemd-logind[1440]: New session 23 of user core.
Mar 20 21:29:10.863028 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 20 21:29:11.135983 kubelet[2569]: E0320 21:29:11.133435 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:29:11.667356 kubelet[2569]: I0320 21:29:11.667306 2569 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-20T21:29:11Z","lastTransitionTime":"2025-03-20T21:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 20 21:29:11.851471 sshd[4303]: Connection closed by 10.0.0.1 port 49244
Mar 20 21:29:11.852026 sshd-session[4300]: pam_unix(sshd:session): session closed for user core
Mar 20 21:29:11.867482 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:49244.service: Deactivated successfully.
Mar 20 21:29:11.869380 systemd[1]: session-23.scope: Deactivated successfully.
Mar 20 21:29:11.872623 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit.
Mar 20 21:29:11.874592 kubelet[2569]: E0320 21:29:11.872878 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c30e1406-3266-4873-a4b9-0ea9be09a470" containerName="cilium-agent" Mar 20 21:29:11.874592 kubelet[2569]: E0320 21:29:11.873872 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c30e1406-3266-4873-a4b9-0ea9be09a470" containerName="mount-cgroup" Mar 20 21:29:11.874592 kubelet[2569]: E0320 21:29:11.873908 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c30e1406-3266-4873-a4b9-0ea9be09a470" containerName="apply-sysctl-overwrites" Mar 20 21:29:11.874592 kubelet[2569]: E0320 21:29:11.873916 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c30e1406-3266-4873-a4b9-0ea9be09a470" containerName="clean-cilium-state" Mar 20 21:29:11.874592 kubelet[2569]: E0320 21:29:11.873923 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d98adee7-8cf4-44ed-a509-afc1c63cd127" containerName="cilium-operator" Mar 20 21:29:11.874592 kubelet[2569]: E0320 21:29:11.873929 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c30e1406-3266-4873-a4b9-0ea9be09a470" containerName="mount-bpf-fs" Mar 20 21:29:11.874592 kubelet[2569]: I0320 21:29:11.873960 2569 memory_manager.go:354] "RemoveStaleState removing state" podUID="c30e1406-3266-4873-a4b9-0ea9be09a470" containerName="cilium-agent" Mar 20 21:29:11.874592 kubelet[2569]: I0320 21:29:11.873968 2569 memory_manager.go:354] "RemoveStaleState removing state" podUID="d98adee7-8cf4-44ed-a509-afc1c63cd127" containerName="cilium-operator" Mar 20 21:29:11.877565 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:49252.service - OpenSSH per-connection server daemon (10.0.0.1:49252). Mar 20 21:29:11.881530 systemd-logind[1440]: Removed session 23. 
Mar 20 21:29:11.897476 systemd[1]: Created slice kubepods-burstable-podaffc6fe7_83c7_4bff_b02a_45e76aeba324.slice - libcontainer container kubepods-burstable-podaffc6fe7_83c7_4bff_b02a_45e76aeba324.slice. Mar 20 21:29:11.931322 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 49252 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:29:11.932763 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:29:11.936357 systemd-logind[1440]: New session 24 of user core. Mar 20 21:29:11.948096 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 20 21:29:11.996865 sshd[4317]: Connection closed by 10.0.0.1 port 49252 Mar 20 21:29:11.997600 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Mar 20 21:29:12.011110 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:49252.service: Deactivated successfully. Mar 20 21:29:12.012504 systemd[1]: session-24.scope: Deactivated successfully. Mar 20 21:29:12.013195 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit. Mar 20 21:29:12.014936 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:49254.service - OpenSSH per-connection server daemon (10.0.0.1:49254). Mar 20 21:29:12.015651 systemd-logind[1440]: Removed session 24. 
Mar 20 21:29:12.032047 kubelet[2569]: I0320 21:29:12.031606 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/affc6fe7-83c7-4bff-b02a-45e76aeba324-etc-cni-netd\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032047 kubelet[2569]: I0320 21:29:12.031642 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/affc6fe7-83c7-4bff-b02a-45e76aeba324-bpf-maps\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032047 kubelet[2569]: I0320 21:29:12.031662 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/affc6fe7-83c7-4bff-b02a-45e76aeba324-cilium-cgroup\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032047 kubelet[2569]: I0320 21:29:12.031679 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/affc6fe7-83c7-4bff-b02a-45e76aeba324-host-proc-sys-kernel\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032047 kubelet[2569]: I0320 21:29:12.031695 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/affc6fe7-83c7-4bff-b02a-45e76aeba324-cilium-run\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032047 kubelet[2569]: I0320 21:29:12.031708 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/affc6fe7-83c7-4bff-b02a-45e76aeba324-hostproc\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032228 kubelet[2569]: I0320 21:29:12.031722 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/affc6fe7-83c7-4bff-b02a-45e76aeba324-clustermesh-secrets\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032228 kubelet[2569]: I0320 21:29:12.031739 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/affc6fe7-83c7-4bff-b02a-45e76aeba324-cni-path\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032228 kubelet[2569]: I0320 21:29:12.031753 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/affc6fe7-83c7-4bff-b02a-45e76aeba324-cilium-config-path\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032228 kubelet[2569]: I0320 21:29:12.031768 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/affc6fe7-83c7-4bff-b02a-45e76aeba324-lib-modules\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032228 kubelet[2569]: I0320 21:29:12.031783 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/affc6fe7-83c7-4bff-b02a-45e76aeba324-host-proc-sys-net\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032228 kubelet[2569]: I0320 21:29:12.031814 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/affc6fe7-83c7-4bff-b02a-45e76aeba324-cilium-ipsec-secrets\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032343 kubelet[2569]: I0320 21:29:12.031843 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/affc6fe7-83c7-4bff-b02a-45e76aeba324-hubble-tls\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032343 kubelet[2569]: I0320 21:29:12.031861 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pwtm\" (UniqueName: \"kubernetes.io/projected/affc6fe7-83c7-4bff-b02a-45e76aeba324-kube-api-access-4pwtm\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.032343 kubelet[2569]: I0320 21:29:12.031876 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/affc6fe7-83c7-4bff-b02a-45e76aeba324-xtables-lock\") pod \"cilium-bsk22\" (UID: \"affc6fe7-83c7-4bff-b02a-45e76aeba324\") " pod="kube-system/cilium-bsk22"
Mar 20 21:29:12.069306 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 49254 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y
Mar 20 21:29:12.070382 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:29:12.074475 systemd-logind[1440]: New session 25 of user core.
Mar 20 21:29:12.081059 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 20 21:29:12.201468 kubelet[2569]: E0320 21:29:12.201350 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:29:12.201857 containerd[1457]: time="2025-03-20T21:29:12.201822525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bsk22,Uid:affc6fe7-83c7-4bff-b02a-45e76aeba324,Namespace:kube-system,Attempt:0,}"
Mar 20 21:29:12.214609 containerd[1457]: time="2025-03-20T21:29:12.214512541Z" level=info msg="connecting to shim ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1" address="unix:///run/containerd/s/c9f77441adefdd2cb4e65da54303f0c6fa4d30b5ab27c9421974f2058a2b5eb2" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:29:12.240100 systemd[1]: Started cri-containerd-ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1.scope - libcontainer container ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1.
Mar 20 21:29:12.259792 containerd[1457]: time="2025-03-20T21:29:12.259750299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bsk22,Uid:affc6fe7-83c7-4bff-b02a-45e76aeba324,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1\""
Mar 20 21:29:12.260649 kubelet[2569]: E0320 21:29:12.260605 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:29:12.265566 containerd[1457]: time="2025-03-20T21:29:12.265524444Z" level=info msg="CreateContainer within sandbox \"ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 20 21:29:12.272767 containerd[1457]: time="2025-03-20T21:29:12.272720076Z" level=info msg="Container 8b4012112e0f7e9190c5eb9867175c02d02e20ccee82e5f36ed7dfd299f2a550: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:29:12.277808 containerd[1457]: time="2025-03-20T21:29:12.277765298Z" level=info msg="CreateContainer within sandbox \"ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8b4012112e0f7e9190c5eb9867175c02d02e20ccee82e5f36ed7dfd299f2a550\""
Mar 20 21:29:12.278497 containerd[1457]: time="2025-03-20T21:29:12.278459621Z" level=info msg="StartContainer for \"8b4012112e0f7e9190c5eb9867175c02d02e20ccee82e5f36ed7dfd299f2a550\""
Mar 20 21:29:12.279642 containerd[1457]: time="2025-03-20T21:29:12.279581466Z" level=info msg="connecting to shim 8b4012112e0f7e9190c5eb9867175c02d02e20ccee82e5f36ed7dfd299f2a550" address="unix:///run/containerd/s/c9f77441adefdd2cb4e65da54303f0c6fa4d30b5ab27c9421974f2058a2b5eb2" protocol=ttrpc version=3
Mar 20 21:29:12.302108 systemd[1]: Started cri-containerd-8b4012112e0f7e9190c5eb9867175c02d02e20ccee82e5f36ed7dfd299f2a550.scope - libcontainer container 8b4012112e0f7e9190c5eb9867175c02d02e20ccee82e5f36ed7dfd299f2a550.
Mar 20 21:29:12.325932 containerd[1457]: time="2025-03-20T21:29:12.325534347Z" level=info msg="StartContainer for \"8b4012112e0f7e9190c5eb9867175c02d02e20ccee82e5f36ed7dfd299f2a550\" returns successfully"
Mar 20 21:29:12.334993 systemd[1]: cri-containerd-8b4012112e0f7e9190c5eb9867175c02d02e20ccee82e5f36ed7dfd299f2a550.scope: Deactivated successfully.
Mar 20 21:29:12.336653 containerd[1457]: time="2025-03-20T21:29:12.336551275Z" level=info msg="received exit event container_id:\"8b4012112e0f7e9190c5eb9867175c02d02e20ccee82e5f36ed7dfd299f2a550\" id:\"8b4012112e0f7e9190c5eb9867175c02d02e20ccee82e5f36ed7dfd299f2a550\" pid:4396 exited_at:{seconds:1742506152 nanos:336274554}"
Mar 20 21:29:12.336870 containerd[1457]: time="2025-03-20T21:29:12.336835517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b4012112e0f7e9190c5eb9867175c02d02e20ccee82e5f36ed7dfd299f2a550\" id:\"8b4012112e0f7e9190c5eb9867175c02d02e20ccee82e5f36ed7dfd299f2a550\" pid:4396 exited_at:{seconds:1742506152 nanos:336274554}"
Mar 20 21:29:12.350105 kubelet[2569]: E0320 21:29:12.350075 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:29:13.354605 kubelet[2569]: E0320 21:29:13.353797 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:29:13.357933 containerd[1457]: time="2025-03-20T21:29:13.357870961Z" level=info msg="CreateContainer within sandbox \"ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 20 21:29:13.370817 containerd[1457]: time="2025-03-20T21:29:13.370751868Z" level=info msg="Container 59e97a668c66ecc08a87f744faa9f65ef887f721f89ffacf0844a18a78642b4d: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:29:13.381163 containerd[1457]: time="2025-03-20T21:29:13.381121202Z" level=info msg="CreateContainer within sandbox \"ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"59e97a668c66ecc08a87f744faa9f65ef887f721f89ffacf0844a18a78642b4d\""
Mar 20 21:29:13.381628 containerd[1457]: time="2025-03-20T21:29:13.381602964Z" level=info msg="StartContainer for \"59e97a668c66ecc08a87f744faa9f65ef887f721f89ffacf0844a18a78642b4d\""
Mar 20 21:29:13.382477 containerd[1457]: time="2025-03-20T21:29:13.382439329Z" level=info msg="connecting to shim 59e97a668c66ecc08a87f744faa9f65ef887f721f89ffacf0844a18a78642b4d" address="unix:///run/containerd/s/c9f77441adefdd2cb4e65da54303f0c6fa4d30b5ab27c9421974f2058a2b5eb2" protocol=ttrpc version=3
Mar 20 21:29:13.401100 systemd[1]: Started cri-containerd-59e97a668c66ecc08a87f744faa9f65ef887f721f89ffacf0844a18a78642b4d.scope - libcontainer container 59e97a668c66ecc08a87f744faa9f65ef887f721f89ffacf0844a18a78642b4d.
Mar 20 21:29:13.430634 containerd[1457]: time="2025-03-20T21:29:13.430596739Z" level=info msg="StartContainer for \"59e97a668c66ecc08a87f744faa9f65ef887f721f89ffacf0844a18a78642b4d\" returns successfully"
Mar 20 21:29:13.432860 systemd[1]: cri-containerd-59e97a668c66ecc08a87f744faa9f65ef887f721f89ffacf0844a18a78642b4d.scope: Deactivated successfully.
Mar 20 21:29:13.433834 containerd[1457]: time="2025-03-20T21:29:13.433699875Z" level=info msg="received exit event container_id:\"59e97a668c66ecc08a87f744faa9f65ef887f721f89ffacf0844a18a78642b4d\" id:\"59e97a668c66ecc08a87f744faa9f65ef887f721f89ffacf0844a18a78642b4d\" pid:4443 exited_at:{seconds:1742506153 nanos:433448474}" Mar 20 21:29:13.433834 containerd[1457]: time="2025-03-20T21:29:13.433805355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59e97a668c66ecc08a87f744faa9f65ef887f721f89ffacf0844a18a78642b4d\" id:\"59e97a668c66ecc08a87f744faa9f65ef887f721f89ffacf0844a18a78642b4d\" pid:4443 exited_at:{seconds:1742506153 nanos:433448474}" Mar 20 21:29:13.450730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59e97a668c66ecc08a87f744faa9f65ef887f721f89ffacf0844a18a78642b4d-rootfs.mount: Deactivated successfully. Mar 20 21:29:14.357269 kubelet[2569]: E0320 21:29:14.357235 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:29:14.359159 containerd[1457]: time="2025-03-20T21:29:14.359105406Z" level=info msg="CreateContainer within sandbox \"ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 20 21:29:14.381630 containerd[1457]: time="2025-03-20T21:29:14.381575261Z" level=info msg="Container 649a1c4b46979614e28224387e49c401567d90e98d6cfb6824b081fd629b6acd: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:29:14.382888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount965386732.mount: Deactivated successfully. 
Mar 20 21:29:14.393298 containerd[1457]: time="2025-03-20T21:29:14.393256611Z" level=info msg="CreateContainer within sandbox \"ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"649a1c4b46979614e28224387e49c401567d90e98d6cfb6824b081fd629b6acd\"" Mar 20 21:29:14.393962 containerd[1457]: time="2025-03-20T21:29:14.393931575Z" level=info msg="StartContainer for \"649a1c4b46979614e28224387e49c401567d90e98d6cfb6824b081fd629b6acd\"" Mar 20 21:29:14.395366 containerd[1457]: time="2025-03-20T21:29:14.395332583Z" level=info msg="connecting to shim 649a1c4b46979614e28224387e49c401567d90e98d6cfb6824b081fd629b6acd" address="unix:///run/containerd/s/c9f77441adefdd2cb4e65da54303f0c6fa4d30b5ab27c9421974f2058a2b5eb2" protocol=ttrpc version=3 Mar 20 21:29:14.414074 systemd[1]: Started cri-containerd-649a1c4b46979614e28224387e49c401567d90e98d6cfb6824b081fd629b6acd.scope - libcontainer container 649a1c4b46979614e28224387e49c401567d90e98d6cfb6824b081fd629b6acd. Mar 20 21:29:14.443422 containerd[1457]: time="2025-03-20T21:29:14.443324990Z" level=info msg="StartContainer for \"649a1c4b46979614e28224387e49c401567d90e98d6cfb6824b081fd629b6acd\" returns successfully" Mar 20 21:29:14.443423 systemd[1]: cri-containerd-649a1c4b46979614e28224387e49c401567d90e98d6cfb6824b081fd629b6acd.scope: Deactivated successfully. 
Mar 20 21:29:14.445336 containerd[1457]: time="2025-03-20T21:29:14.445221122Z" level=info msg="received exit event container_id:\"649a1c4b46979614e28224387e49c401567d90e98d6cfb6824b081fd629b6acd\" id:\"649a1c4b46979614e28224387e49c401567d90e98d6cfb6824b081fd629b6acd\" pid:4487 exited_at:{seconds:1742506154 nanos:444965800}" Mar 20 21:29:14.445336 containerd[1457]: time="2025-03-20T21:29:14.445297802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"649a1c4b46979614e28224387e49c401567d90e98d6cfb6824b081fd629b6acd\" id:\"649a1c4b46979614e28224387e49c401567d90e98d6cfb6824b081fd629b6acd\" pid:4487 exited_at:{seconds:1742506154 nanos:444965800}" Mar 20 21:29:14.463339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-649a1c4b46979614e28224387e49c401567d90e98d6cfb6824b081fd629b6acd-rootfs.mount: Deactivated successfully. Mar 20 21:29:15.183999 kubelet[2569]: E0320 21:29:15.183960 2569 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 20 21:29:15.362153 kubelet[2569]: E0320 21:29:15.362126 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:29:15.365652 containerd[1457]: time="2025-03-20T21:29:15.365014505Z" level=info msg="CreateContainer within sandbox \"ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 20 21:29:15.373331 containerd[1457]: time="2025-03-20T21:29:15.372913118Z" level=info msg="Container 8aabe438e7c204b7a94f15bda163501d523c51ddbfd94c6788b2d1180ecfff30: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:29:15.376112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2978082805.mount: Deactivated successfully. 
Mar 20 21:29:15.380638 containerd[1457]: time="2025-03-20T21:29:15.380601850Z" level=info msg="CreateContainer within sandbox \"ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8aabe438e7c204b7a94f15bda163501d523c51ddbfd94c6788b2d1180ecfff30\"" Mar 20 21:29:15.381275 containerd[1457]: time="2025-03-20T21:29:15.381238974Z" level=info msg="StartContainer for \"8aabe438e7c204b7a94f15bda163501d523c51ddbfd94c6788b2d1180ecfff30\"" Mar 20 21:29:15.382307 containerd[1457]: time="2025-03-20T21:29:15.382261501Z" level=info msg="connecting to shim 8aabe438e7c204b7a94f15bda163501d523c51ddbfd94c6788b2d1180ecfff30" address="unix:///run/containerd/s/c9f77441adefdd2cb4e65da54303f0c6fa4d30b5ab27c9421974f2058a2b5eb2" protocol=ttrpc version=3 Mar 20 21:29:15.401061 systemd[1]: Started cri-containerd-8aabe438e7c204b7a94f15bda163501d523c51ddbfd94c6788b2d1180ecfff30.scope - libcontainer container 8aabe438e7c204b7a94f15bda163501d523c51ddbfd94c6788b2d1180ecfff30. Mar 20 21:29:15.421881 systemd[1]: cri-containerd-8aabe438e7c204b7a94f15bda163501d523c51ddbfd94c6788b2d1180ecfff30.scope: Deactivated successfully. 
Mar 20 21:29:15.422805 containerd[1457]: time="2025-03-20T21:29:15.422767295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8aabe438e7c204b7a94f15bda163501d523c51ddbfd94c6788b2d1180ecfff30\" id:\"8aabe438e7c204b7a94f15bda163501d523c51ddbfd94c6788b2d1180ecfff30\" pid:4526 exited_at:{seconds:1742506155 nanos:422532773}" Mar 20 21:29:15.423607 containerd[1457]: time="2025-03-20T21:29:15.423582340Z" level=info msg="received exit event container_id:\"8aabe438e7c204b7a94f15bda163501d523c51ddbfd94c6788b2d1180ecfff30\" id:\"8aabe438e7c204b7a94f15bda163501d523c51ddbfd94c6788b2d1180ecfff30\" pid:4526 exited_at:{seconds:1742506155 nanos:422532773}" Mar 20 21:29:15.429523 containerd[1457]: time="2025-03-20T21:29:15.429492620Z" level=info msg="StartContainer for \"8aabe438e7c204b7a94f15bda163501d523c51ddbfd94c6788b2d1180ecfff30\" returns successfully" Mar 20 21:29:15.439358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8aabe438e7c204b7a94f15bda163501d523c51ddbfd94c6788b2d1180ecfff30-rootfs.mount: Deactivated successfully. 
Mar 20 21:29:16.367332 kubelet[2569]: E0320 21:29:16.367298 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:29:16.370437 containerd[1457]: time="2025-03-20T21:29:16.370401036Z" level=info msg="CreateContainer within sandbox \"ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 20 21:29:16.378164 containerd[1457]: time="2025-03-20T21:29:16.377400489Z" level=info msg="Container d01c6c3b413a2d26807465d45372d0fefba3925f57b8a96233680b8badff8da3: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:29:16.385304 containerd[1457]: time="2025-03-20T21:29:16.385270148Z" level=info msg="CreateContainer within sandbox \"ce071b1619de70f15a2440de158cfb7518f8cae76ab6c38168206deb8a8ccfb1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d01c6c3b413a2d26807465d45372d0fefba3925f57b8a96233680b8badff8da3\"" Mar 20 21:29:16.385954 containerd[1457]: time="2025-03-20T21:29:16.385934673Z" level=info msg="StartContainer for \"d01c6c3b413a2d26807465d45372d0fefba3925f57b8a96233680b8badff8da3\"" Mar 20 21:29:16.386930 containerd[1457]: time="2025-03-20T21:29:16.386888120Z" level=info msg="connecting to shim d01c6c3b413a2d26807465d45372d0fefba3925f57b8a96233680b8badff8da3" address="unix:///run/containerd/s/c9f77441adefdd2cb4e65da54303f0c6fa4d30b5ab27c9421974f2058a2b5eb2" protocol=ttrpc version=3 Mar 20 21:29:16.411044 systemd[1]: Started cri-containerd-d01c6c3b413a2d26807465d45372d0fefba3925f57b8a96233680b8badff8da3.scope - libcontainer container d01c6c3b413a2d26807465d45372d0fefba3925f57b8a96233680b8badff8da3. 
Mar 20 21:29:16.444534 containerd[1457]: time="2025-03-20T21:29:16.444482711Z" level=info msg="StartContainer for \"d01c6c3b413a2d26807465d45372d0fefba3925f57b8a96233680b8badff8da3\" returns successfully"
Mar 20 21:29:16.492233 containerd[1457]: time="2025-03-20T21:29:16.492195469Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d01c6c3b413a2d26807465d45372d0fefba3925f57b8a96233680b8badff8da3\" id:\"a5c0051d6a69190d8a5271d02e47b6525b61e2df481066001da7ecfb80973fc0\" pid:4594 exited_at:{seconds:1742506156 nanos:491177421}"
Mar 20 21:29:16.693935 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 20 21:29:17.372857 kubelet[2569]: E0320 21:29:17.372815 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:29:17.395468 kubelet[2569]: I0320 21:29:17.395397 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bsk22" podStartSLOduration=6.395379116 podStartE2EDuration="6.395379116s" podCreationTimestamp="2025-03-20 21:29:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:29:17.392429772 +0000 UTC m=+77.342318414" watchObservedRunningTime="2025-03-20 21:29:17.395379116 +0000 UTC m=+77.345267758"
Mar 20 21:29:18.379208 kubelet[2569]: E0320 21:29:18.379173 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:29:18.433295 containerd[1457]: time="2025-03-20T21:29:18.433166253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d01c6c3b413a2d26807465d45372d0fefba3925f57b8a96233680b8badff8da3\" id:\"aeb70e12d25a94d62e1053cd9163d65fb54cf2714d087047bc767f5908ea5018\" pid:4761 exit_status:1 exited_at:{seconds:1742506158 nanos:432782770}"
Mar 20 21:29:19.586828 systemd-networkd[1401]: lxc_health: Link UP
Mar 20 21:29:19.589393 systemd-networkd[1401]: lxc_health: Gained carrier
Mar 20 21:29:20.203004 kubelet[2569]: E0320 21:29:20.202964 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:29:20.382857 kubelet[2569]: E0320 21:29:20.382829 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:29:20.558669 containerd[1457]: time="2025-03-20T21:29:20.558585791Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d01c6c3b413a2d26807465d45372d0fefba3925f57b8a96233680b8badff8da3\" id:\"9431f2e47f511c38c12fdac65459ced1232cd14408e7aac5a1c5f67258523d83\" pid:5128 exited_at:{seconds:1742506160 nanos:557309297}"
Mar 20 21:29:21.133488 kubelet[2569]: E0320 21:29:21.133390 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:29:21.384670 kubelet[2569]: E0320 21:29:21.384400 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:29:21.535121 systemd-networkd[1401]: lxc_health: Gained IPv6LL
Mar 20 21:29:22.675255 containerd[1457]: time="2025-03-20T21:29:22.675073713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d01c6c3b413a2d26807465d45372d0fefba3925f57b8a96233680b8badff8da3\" id:\"0c3fe8744bac5fcd0e17da7309b9a47c3d906f0d138966bd1eab33a1ee610063\" pid:5163 exited_at:{seconds:1742506162 nanos:674570827}"
Mar 20 21:29:24.771669 containerd[1457]: time="2025-03-20T21:29:24.771628902Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d01c6c3b413a2d26807465d45372d0fefba3925f57b8a96233680b8badff8da3\" id:\"3baf5b236d8feb1ea7d3ade7adba8f0832fdc6a5e485cc9a99f2e2838383e441\" pid:5194 exited_at:{seconds:1742506164 nanos:771193776}"
Mar 20 21:29:24.777032 sshd[4326]: Connection closed by 10.0.0.1 port 49254
Mar 20 21:29:24.776960 sshd-session[4323]: pam_unix(sshd:session): session closed for user core
Mar 20 21:29:24.780222 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:49254.service: Deactivated successfully.
Mar 20 21:29:24.782019 systemd[1]: session-25.scope: Deactivated successfully.
Mar 20 21:29:24.782696 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit.
Mar 20 21:29:24.783467 systemd-logind[1440]: Removed session 25.