Mar 17 17:29:42.912554 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:29:42.912581 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025
Mar 17 17:29:42.912592 kernel: KASLR enabled
Mar 17 17:29:42.912599 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:29:42.912605 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Mar 17 17:29:42.912610 kernel: random: crng init done
Mar 17 17:29:42.912618 kernel: secureboot: Secure boot disabled
Mar 17 17:29:42.912624 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:29:42.912630 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 17 17:29:42.912638 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 17:29:42.912644 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:29:42.912651 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:29:42.912657 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:29:42.912663 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:29:42.912671 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:29:42.912678 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:29:42.912685 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:29:42.912691 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:29:42.912698 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:29:42.912704 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 17 17:29:42.912711 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:29:42.912717 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:29:42.912724 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Mar 17 17:29:42.912730 kernel: Zone ranges:
Mar 17 17:29:42.912737 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:29:42.912744 kernel: DMA32 empty
Mar 17 17:29:42.912751 kernel: Normal empty
Mar 17 17:29:42.912757 kernel: Movable zone start for each node
Mar 17 17:29:42.912764 kernel: Early memory node ranges
Mar 17 17:29:42.912770 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Mar 17 17:29:42.912776 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 17 17:29:42.912783 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 17 17:29:42.912789 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 17 17:29:42.912796 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 17 17:29:42.912802 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 17 17:29:42.912809 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 17 17:29:42.912815 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:29:42.912823 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 17 17:29:42.912829 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:29:42.912836 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:29:42.912845 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:29:42.912852 kernel: psci: Trusted OS migration not required
Mar 17 17:29:42.912859 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:29:42.912867 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 17:29:42.912874 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:29:42.912881 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:29:42.912888 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 17 17:29:42.912895 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:29:42.912902 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:29:42.912909 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:29:42.912916 kernel: CPU features: detected: Spectre-v4
Mar 17 17:29:42.912923 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:29:42.912930 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:29:42.912938 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:29:42.912945 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:29:42.912952 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:29:42.912959 kernel: alternatives: applying boot alternatives
Mar 17 17:29:42.912966 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:29:42.912974 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:29:42.912981 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:29:42.912988 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:29:42.912994 kernel: Fallback order for Node 0: 0
Mar 17 17:29:42.913001 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 17 17:29:42.913008 kernel: Policy zone: DMA
Mar 17 17:29:42.913016 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:29:42.913023 kernel: software IO TLB: area num 4.
Mar 17 17:29:42.913030 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 17 17:29:42.913037 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved)
Mar 17 17:29:42.913044 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 17:29:42.913051 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:29:42.913059 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:29:42.913066 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 17:29:42.913073 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:29:42.913080 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:29:42.913087 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:29:42.913094 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 17:29:42.913102 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:29:42.913109 kernel: GICv3: 256 SPIs implemented
Mar 17 17:29:42.913116 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:29:42.913122 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:29:42.913135 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:29:42.913143 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 17:29:42.913150 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 17:29:42.913157 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:29:42.913164 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:29:42.913171 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 17 17:29:42.913178 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 17 17:29:42.913186 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:29:42.913194 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:29:42.913201 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:29:42.913208 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:29:42.913215 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:29:42.913222 kernel: arm-pv: using stolen time PV
Mar 17 17:29:42.913229 kernel: Console: colour dummy device 80x25
Mar 17 17:29:42.913236 kernel: ACPI: Core revision 20230628
Mar 17 17:29:42.913244 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:29:42.913251 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:29:42.913259 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:29:42.913266 kernel: landlock: Up and running.
Mar 17 17:29:42.913273 kernel: SELinux: Initializing.
Mar 17 17:29:42.913280 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:29:42.913288 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:29:42.913295 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:29:42.913303 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:29:42.913310 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:29:42.913317 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:29:42.913325 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 17:29:42.913332 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 17:29:42.913339 kernel: Remapping and enabling EFI services.
Mar 17 17:29:42.913346 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:29:42.913354 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:29:42.913361 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 17:29:42.913368 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 17 17:29:42.913375 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:29:42.913382 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:29:42.913392 kernel: Detected PIPT I-cache on CPU2
Mar 17 17:29:42.913401 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 17 17:29:42.913408 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 17 17:29:42.913420 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:29:42.913429 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 17 17:29:42.913436 kernel: Detected PIPT I-cache on CPU3
Mar 17 17:29:42.913444 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 17 17:29:42.913451 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 17 17:29:42.913459 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:29:42.913466 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 17 17:29:42.913475 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 17:29:42.913483 kernel: SMP: Total of 4 processors activated.
Mar 17 17:29:42.913490 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:29:42.913498 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:29:42.913506 kernel: CPU features: detected: Common not Private translations
Mar 17 17:29:42.913513 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:29:42.913521 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 17 17:29:42.913528 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:29:42.913552 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:29:42.913560 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:29:42.913568 kernel: CPU features: detected: RAS Extension Support
Mar 17 17:29:42.913576 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 17:29:42.913584 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:29:42.913591 kernel: alternatives: applying system-wide alternatives
Mar 17 17:29:42.913598 kernel: devtmpfs: initialized
Mar 17 17:29:42.913606 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:29:42.913614 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 17:29:42.913623 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:29:42.913630 kernel: SMBIOS 3.0.0 present.
Mar 17 17:29:42.913638 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 17 17:29:42.913645 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:29:42.913653 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:29:42.913661 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:29:42.913668 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:29:42.913676 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:29:42.913683 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Mar 17 17:29:42.913692 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:29:42.913700 kernel: cpuidle: using governor menu
Mar 17 17:29:42.913707 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:29:42.913715 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:29:42.913722 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:29:42.913729 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:29:42.913737 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:29:42.913744 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:29:42.913752 kernel: Modules: 508944 pages in range for PLT usage
Mar 17 17:29:42.913760 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:29:42.913768 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:29:42.913776 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:29:42.913783 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:29:42.913790 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:29:42.913798 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:29:42.913806 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:29:42.913813 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:29:42.913820 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:29:42.913829 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:29:42.913837 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:29:42.913844 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:29:42.913851 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:29:42.913859 kernel: ACPI: Interpreter enabled
Mar 17 17:29:42.913866 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:29:42.913874 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:29:42.913881 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:29:42.913889 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:29:42.913896 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:29:42.914030 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:29:42.914105 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:29:42.914184 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:29:42.914250 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 17:29:42.914316 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 17:29:42.914339 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 17:29:42.914350 kernel: PCI host bridge to bus 0000:00
Mar 17 17:29:42.914426 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 17:29:42.914487 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:29:42.914567 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 17:29:42.914653 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:29:42.914736 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 17:29:42.914817 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 17:29:42.914888 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 17 17:29:42.914955 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 17 17:29:42.915021 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:29:42.915087 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:29:42.915163 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 17 17:29:42.915236 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 17 17:29:42.915296 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 17:29:42.915358 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:29:42.915419 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 17:29:42.915429 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:29:42.915437 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:29:42.915445 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:29:42.915453 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:29:42.915460 kernel: iommu: Default domain type: Translated
Mar 17 17:29:42.915468 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:29:42.915477 kernel: efivars: Registered efivars operations
Mar 17 17:29:42.915485 kernel: vgaarb: loaded
Mar 17 17:29:42.915492 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:29:42.915500 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:29:42.915507 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:29:42.915515 kernel: pnp: PnP ACPI init
Mar 17 17:29:42.915609 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 17:29:42.915621 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:29:42.915631 kernel: NET: Registered PF_INET protocol family
Mar 17 17:29:42.915639 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:29:42.915647 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:29:42.915655 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:29:42.915662 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:29:42.915670 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:29:42.915678 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:29:42.915685 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:29:42.915693 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:29:42.915702 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:29:42.915710 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:29:42.915717 kernel: kvm [1]: HYP mode not available
Mar 17 17:29:42.915725 kernel: Initialise system trusted keyrings
Mar 17 17:29:42.915732 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:29:42.915740 kernel: Key type asymmetric registered
Mar 17 17:29:42.915747 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:29:42.915755 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:29:42.915762 kernel: io scheduler mq-deadline registered
Mar 17 17:29:42.915771 kernel: io scheduler kyber registered
Mar 17 17:29:42.915778 kernel: io scheduler bfq registered
Mar 17 17:29:42.915800 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 17:29:42.915808 kernel: ACPI: button: Power Button [PWRB]
Mar 17 17:29:42.915816 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 17:29:42.915887 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 17 17:29:42.915897 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:29:42.915905 kernel: thunder_xcv, ver 1.0
Mar 17 17:29:42.915913 kernel: thunder_bgx, ver 1.0
Mar 17 17:29:42.915922 kernel: nicpf, ver 1.0
Mar 17 17:29:42.915930 kernel: nicvf, ver 1.0
Mar 17 17:29:42.916006 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:29:42.916071 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:29:42 UTC (1742232582)
Mar 17 17:29:42.916081 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:29:42.916089 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 17 17:29:42.916096 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:29:42.916104 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:29:42.916114 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:29:42.916122 kernel: Segment Routing with IPv6
Mar 17 17:29:42.916137 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:29:42.916144 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:29:42.916152 kernel: Key type dns_resolver registered
Mar 17 17:29:42.916160 kernel: registered taskstats version 1
Mar 17 17:29:42.916167 kernel: Loading compiled-in X.509 certificates
Mar 17 17:29:42.916175 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c'
Mar 17 17:29:42.916182 kernel: Key type .fscrypt registered
Mar 17 17:29:42.916190 kernel: Key type fscrypt-provisioning registered
Mar 17 17:29:42.916199 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:29:42.916207 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:29:42.916214 kernel: ima: No architecture policies found
Mar 17 17:29:42.916222 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:29:42.916229 kernel: clk: Disabling unused clocks
Mar 17 17:29:42.916237 kernel: Freeing unused kernel memory: 39744K
Mar 17 17:29:42.916244 kernel: Run /init as init process
Mar 17 17:29:42.916252 kernel: with arguments:
Mar 17 17:29:42.916260 kernel: /init
Mar 17 17:29:42.916268 kernel: with environment:
Mar 17 17:29:42.916275 kernel: HOME=/
Mar 17 17:29:42.916283 kernel: TERM=linux
Mar 17 17:29:42.916290 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:29:42.916299 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:29:42.916309 systemd[1]: Detected virtualization kvm.
Mar 17 17:29:42.916317 systemd[1]: Detected architecture arm64.
Mar 17 17:29:42.916326 systemd[1]: Running in initrd.
Mar 17 17:29:42.916334 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:29:42.916342 systemd[1]: Hostname set to .
Mar 17 17:29:42.916350 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:29:42.916358 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:29:42.916367 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:29:42.916375 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:29:42.916383 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:29:42.916393 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:29:42.916401 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:29:42.916410 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:29:42.916419 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:29:42.916428 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:29:42.916436 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:29:42.916444 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:29:42.916453 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:29:42.916461 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:29:42.916470 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:29:42.916478 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:29:42.916486 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:29:42.916494 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:29:42.916502 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:29:42.916510 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:29:42.916520 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:29:42.916528 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:29:42.916536 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:29:42.916633 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:29:42.916641 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:29:42.916650 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:29:42.916658 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:29:42.916666 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:29:42.916675 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:29:42.916686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:29:42.916694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:29:42.916703 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:29:42.916711 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:29:42.916719 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:29:42.916728 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:29:42.916738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:29:42.916768 systemd-journald[238]: Collecting audit messages is disabled.
Mar 17 17:29:42.916790 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:29:42.916799 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:29:42.916808 systemd-journald[238]: Journal started
Mar 17 17:29:42.916827 systemd-journald[238]: Runtime Journal (/run/log/journal/71326a39fc584f8d8e2a1f4c3e63e8bd) is 5.9M, max 47.3M, 41.4M free.
Mar 17 17:29:42.911291 systemd-modules-load[239]: Inserted module 'overlay'
Mar 17 17:29:42.921722 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:29:42.924297 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:29:42.926631 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:29:42.927837 kernel: Bridge firewalling registered
Mar 17 17:29:42.926856 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:29:42.927210 systemd-modules-load[239]: Inserted module 'br_netfilter'
Mar 17 17:29:42.928868 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:29:42.932687 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:29:42.937336 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:29:42.938868 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:29:42.943768 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:29:42.959863 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:29:42.960991 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:29:42.964428 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:29:42.972114 dracut-cmdline[272]: dracut-dracut-053
Mar 17 17:29:42.974488 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:29:42.991273 systemd-resolved[275]: Positive Trust Anchors:
Mar 17 17:29:42.991344 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:29:42.991373 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:29:42.995942 systemd-resolved[275]: Defaulting to hostname 'linux'.
Mar 17 17:29:42.996874 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:29:43.000797 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:29:43.045568 kernel: SCSI subsystem initialized
Mar 17 17:29:43.049559 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:29:43.057572 kernel: iscsi: registered transport (tcp)
Mar 17 17:29:43.069566 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:29:43.069586 kernel: QLogic iSCSI HBA Driver
Mar 17 17:29:43.112234 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:29:43.124765 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:29:43.142554 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:29:43.142623 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:29:43.142641 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:29:43.190586 kernel: raid6: neonx8 gen() 15776 MB/s
Mar 17 17:29:43.207566 kernel: raid6: neonx4 gen() 15613 MB/s
Mar 17 17:29:43.224574 kernel: raid6: neonx2 gen() 13331 MB/s
Mar 17 17:29:43.241582 kernel: raid6: neonx1 gen() 10472 MB/s
Mar 17 17:29:43.258561 kernel: raid6: int64x8 gen() 6956 MB/s
Mar 17 17:29:43.275573 kernel: raid6: int64x4 gen() 7346 MB/s
Mar 17 17:29:43.292566 kernel: raid6: int64x2 gen() 6128 MB/s
Mar 17 17:29:43.309678 kernel: raid6: int64x1 gen() 5056 MB/s
Mar 17 17:29:43.309707 kernel: raid6: using algorithm neonx8 gen() 15776 MB/s
Mar 17 17:29:43.327667 kernel: raid6: .... xor() 11927 MB/s, rmw enabled
Mar 17 17:29:43.327708 kernel: raid6: using neon recovery algorithm
Mar 17 17:29:43.332565 kernel: xor: measuring software checksum speed
Mar 17 17:29:43.332593 kernel: 8regs : 17605 MB/sec
Mar 17 17:29:43.333723 kernel: 32regs : 19052 MB/sec
Mar 17 17:29:43.335002 kernel: arm64_neon : 26963 MB/sec
Mar 17 17:29:43.335030 kernel: xor: using function: arm64_neon (26963 MB/sec)
Mar 17 17:29:43.386573 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:29:43.397219 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:29:43.414700 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:29:43.426725 systemd-udevd[458]: Using default interface naming scheme 'v255'.
Mar 17 17:29:43.429870 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:29:43.444809 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:29:43.457054 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Mar 17 17:29:43.484032 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:29:43.492766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:29:43.530798 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:29:43.540949 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:29:43.553759 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:29:43.555403 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:29:43.557390 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:29:43.559637 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:29:43.570701 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:29:43.575559 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Mar 17 17:29:43.583666 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 17:29:43.583779 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:29:43.583791 kernel: GPT:9289727 != 19775487
Mar 17 17:29:43.583801 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:29:43.583810 kernel: GPT:9289727 != 19775487
Mar 17 17:29:43.583823 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:29:43.583832 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:29:43.583460 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:29:43.583602 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:29:43.588583 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:29:43.589980 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:29:43.590137 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:29:43.592780 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:29:43.604563 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513)
Mar 17 17:29:43.604601 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (515)
Mar 17 17:29:43.604843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:29:43.607631 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:29:43.614914 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:29:43.624854 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:29:43.634488 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:29:43.639206 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:29:43.643149 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:29:43.644434 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:29:43.659726 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:29:43.661618 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:29:43.668388 disk-uuid[548]: Primary Header is updated.
Mar 17 17:29:43.668388 disk-uuid[548]: Secondary Entries is updated.
Mar 17 17:29:43.668388 disk-uuid[548]: Secondary Header is updated.
Mar 17 17:29:43.678564 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:29:43.684250 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:29:44.690564 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:29:44.692532 disk-uuid[551]: The operation has completed successfully.
Mar 17 17:29:44.711416 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:29:44.711513 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:29:44.736746 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:29:44.739534 sh[571]: Success
Mar 17 17:29:44.753632 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:29:44.783862 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:29:44.794934 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:29:44.796965 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:29:44.808186 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7
Mar 17 17:29:44.808236 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:29:44.808247 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:29:44.810161 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:29:44.810196 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:29:44.818492 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:29:44.820028 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:29:44.828706 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:29:44.830311 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:29:44.839768 kernel: BTRFS info (device vda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:29:44.839821 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:29:44.839840 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:29:44.843777 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:29:44.851876 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:29:44.854569 kernel: BTRFS info (device vda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:29:44.860190 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:29:44.866719 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:29:44.934143 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:29:44.943734 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:29:44.959726 ignition[664]: Ignition 2.20.0
Mar 17 17:29:44.959736 ignition[664]: Stage: fetch-offline
Mar 17 17:29:44.959783 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:29:44.959792 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:29:44.960000 ignition[664]: parsed url from cmdline: ""
Mar 17 17:29:44.960003 ignition[664]: no config URL provided
Mar 17 17:29:44.960008 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:29:44.960015 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:29:44.960042 ignition[664]: op(1): [started] loading QEMU firmware config module
Mar 17 17:29:44.960046 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 17:29:44.970807 systemd-networkd[768]: lo: Link UP
Mar 17 17:29:44.968622 ignition[664]: op(1): [finished] loading QEMU firmware config module
Mar 17 17:29:44.970811 systemd-networkd[768]: lo: Gained carrier
Mar 17 17:29:44.971729 systemd-networkd[768]: Enumeration completed
Mar 17 17:29:44.971966 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:29:44.972192 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:29:44.972195 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:29:44.973123 systemd-networkd[768]: eth0: Link UP
Mar 17 17:29:44.973128 systemd-networkd[768]: eth0: Gained carrier
Mar 17 17:29:44.973135 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:29:44.973926 systemd[1]: Reached target network.target - Network.
Mar 17 17:29:45.000602 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:29:45.024548 ignition[664]: parsing config with SHA512: 558db8e4da7649bf05775a65a3b3cbbfc9d7f9231697583485ca0e0a2e964fe3cea6922c641f50ed3e57f2b28994941ab75c365d954d2c1faf7711ace80cfed9
Mar 17 17:29:45.030092 unknown[664]: fetched base config from "system"
Mar 17 17:29:45.030105 unknown[664]: fetched user config from "qemu"
Mar 17 17:29:45.030777 ignition[664]: fetch-offline: fetch-offline passed
Mar 17 17:29:45.030907 ignition[664]: Ignition finished successfully
Mar 17 17:29:45.034602 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:29:45.035948 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 17:29:45.050718 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:29:45.064312 ignition[774]: Ignition 2.20.0
Mar 17 17:29:45.064325 ignition[774]: Stage: kargs
Mar 17 17:29:45.065147 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:29:45.065162 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:29:45.066699 ignition[774]: kargs: kargs passed
Mar 17 17:29:45.069150 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:29:45.066791 ignition[774]: Ignition finished successfully
Mar 17 17:29:45.082728 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:29:45.092725 ignition[782]: Ignition 2.20.0
Mar 17 17:29:45.092736 ignition[782]: Stage: disks
Mar 17 17:29:45.092890 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:29:45.092900 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:29:45.093787 ignition[782]: disks: disks passed
Mar 17 17:29:45.093832 ignition[782]: Ignition finished successfully
Mar 17 17:29:45.097630 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:29:45.099971 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:29:45.101685 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:29:45.104221 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:29:45.106272 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:29:45.108147 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:29:45.123776 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:29:45.136283 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:29:45.140463 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:29:45.152712 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:29:45.205564 kernel: EXT4-fs (vda9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none.
Mar 17 17:29:45.205880 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:29:45.207281 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:29:45.220664 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:29:45.222653 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:29:45.223644 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:29:45.223687 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:29:45.223710 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:29:45.230673 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:29:45.233590 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (799)
Mar 17 17:29:45.233765 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:29:45.238779 kernel: BTRFS info (device vda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:29:45.238807 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:29:45.238818 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:29:45.240554 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:29:45.241519 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:29:45.288728 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:29:45.293039 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:29:45.296625 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:29:45.300518 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:29:45.366362 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:29:45.377705 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:29:45.379835 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:29:45.383579 kernel: BTRFS info (device vda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:29:45.402634 ignition[913]: INFO : Ignition 2.20.0
Mar 17 17:29:45.403571 ignition[913]: INFO : Stage: mount
Mar 17 17:29:45.403571 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:29:45.403571 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:29:45.402835 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:29:45.410655 ignition[913]: INFO : mount: mount passed
Mar 17 17:29:45.410655 ignition[913]: INFO : Ignition finished successfully
Mar 17 17:29:45.406965 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:29:45.418658 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:29:45.806270 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:29:45.818711 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:29:45.825468 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (928)
Mar 17 17:29:45.825515 kernel: BTRFS info (device vda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:29:45.825526 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:29:45.827100 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:29:45.829562 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:29:45.830361 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:29:45.853128 ignition[945]: INFO : Ignition 2.20.0
Mar 17 17:29:45.853128 ignition[945]: INFO : Stage: files
Mar 17 17:29:45.854814 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:29:45.854814 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:29:45.854814 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:29:45.858268 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:29:45.858268 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:29:45.861640 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:29:45.861640 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:29:45.861640 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:29:45.860574 unknown[945]: wrote ssh authorized keys file for user: core
Mar 17 17:29:45.867187 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:29:45.867187 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 17:29:45.909387 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:29:46.185681 systemd-networkd[768]: eth0: Gained IPv6LL
Mar 17 17:29:46.193062 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:29:46.194955 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:29:46.194955 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 17:29:46.376880 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:29:46.571600 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:29:46.571600 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:29:46.575032 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:29:46.575032 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:29:46.575032 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:29:46.575032 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:29:46.575032 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:29:46.575032 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:29:46.575032 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:29:46.575032 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:29:46.575032 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:29:46.575032 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:29:46.575032 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:29:46.575032 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:29:46.575032 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Mar 17 17:29:46.813836 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:29:47.024312 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:29:47.024312 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:29:47.028427 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:29:47.028427 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:29:47.028427 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:29:47.028427 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 17 17:29:47.028427 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:29:47.028427 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:29:47.028427 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 17 17:29:47.028427 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:29:47.055242 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:29:47.059190 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:29:47.061603 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:29:47.061603 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:29:47.061603 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:29:47.061603 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:29:47.061603 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:29:47.061603 ignition[945]: INFO : files: files passed
Mar 17 17:29:47.061603 ignition[945]: INFO : Ignition finished successfully
Mar 17 17:29:47.062768 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:29:47.075773 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:29:47.079577 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:29:47.080893 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:29:47.080997 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:29:47.088177 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 17 17:29:47.092232 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:29:47.092232 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:29:47.095965 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:29:47.097241 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:29:47.099083 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:29:47.108758 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:29:47.139859 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:29:47.141054 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:29:47.142691 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:29:47.144553 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:29:47.146567 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:29:47.147452 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:29:47.164234 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:29:47.177784 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:29:47.186240 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:29:47.187552 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:29:47.189747 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:29:47.191655 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:29:47.191779 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:29:47.194514 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:29:47.196704 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:29:47.198496 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:29:47.200295 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:29:47.202323 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:29:47.204402 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:29:47.206397 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:29:47.208528 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:29:47.210592 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:29:47.212483 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:29:47.214060 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:29:47.214217 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:29:47.216526 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:29:47.218503 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:29:47.220513 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:29:47.222577 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:29:47.223880 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:29:47.223999 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:29:47.226970 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:29:47.227095 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:29:47.229207 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:29:47.230812 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:29:47.230925 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:29:47.232990 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:29:47.236669 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:29:47.238530 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:29:47.238636 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:29:47.240770 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:29:47.240852 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:29:47.242481 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:29:47.242602 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:29:47.244369 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:29:47.244478 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:29:47.252810 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:29:47.254728 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:29:47.256899 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:29:47.257046 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:29:47.259288 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:29:47.259383 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:29:47.266632 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:29:47.267116 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Mar 17 17:29:47.271643 ignition[999]: INFO : Ignition 2.20.0 Mar 17 17:29:47.271643 ignition[999]: INFO : Stage: umount Mar 17 17:29:47.271643 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:29:47.271643 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:29:47.271643 ignition[999]: INFO : umount: umount passed Mar 17 17:29:47.271643 ignition[999]: INFO : Ignition finished successfully Mar 17 17:29:47.267206 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:29:47.271528 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:29:47.271658 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:29:47.273325 systemd[1]: Stopped target network.target - Network. Mar 17 17:29:47.274511 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:29:47.274639 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:29:47.276572 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:29:47.276620 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:29:47.278366 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:29:47.278411 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:29:47.280409 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:29:47.280457 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:29:47.282301 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:29:47.284424 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:29:47.289574 systemd-networkd[768]: eth0: DHCPv6 lease lost Mar 17 17:29:47.291757 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:29:47.292007 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:29:47.293486 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:29:47.293636 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:29:47.296335 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:29:47.296381 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:29:47.311710 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:29:47.312721 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:29:47.312798 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:29:47.314820 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:29:47.314869 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:29:47.316744 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:29:47.316790 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:29:47.318882 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:29:47.318928 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:29:47.321029 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:29:47.329194 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:29:47.329291 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:29:47.331521 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Mar 17 17:29:47.331610 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:29:47.346715 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:29:47.346847 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:29:47.349075 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:29:47.349127 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:29:47.350929 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:29:47.350964 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:29:47.352752 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:29:47.352800 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:29:47.355449 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:29:47.355495 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:29:47.358323 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:29:47.358372 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:29:47.372732 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:29:47.373816 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:29:47.373879 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:29:47.376074 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 17:29:47.376130 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:29:47.378080 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:29:47.378139 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:29:47.380310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:29:47.380353 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:29:47.382597 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:29:47.383625 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:29:47.385741 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:29:47.385826 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:29:47.388337 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:29:47.390350 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:29:47.400062 systemd[1]: Switching root. Mar 17 17:29:47.429731 systemd-journald[238]: Journal stopped Mar 17 17:29:48.153244 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
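
[Annotation] With the umount-stage messages above, the log has now named every Ignition stage of this boot. The initrd units torn down here correspond to the order in which those stages ran earlier (a summary of what the log itself names, not an exhaustive list):

    fetch-offline -> kargs -> disks -> mount -> files -> umount
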
Mar 17 17:29:48.153296 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:29:48.153308 kernel: SELinux: policy capability open_perms=1 Mar 17 17:29:48.153320 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:29:48.153329 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:29:48.153338 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:29:48.153348 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:29:48.153358 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:29:48.153367 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:29:48.153380 kernel: audit: type=1403 audit(1742232587.585:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:29:48.153390 systemd[1]: Successfully loaded SELinux policy in 32.317ms. Mar 17 17:29:48.153410 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.163ms. Mar 17 17:29:48.153424 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:29:48.153435 systemd[1]: Detected virtualization kvm. Mar 17 17:29:48.153447 systemd[1]: Detected architecture arm64. Mar 17 17:29:48.153457 systemd[1]: Detected first boot. Mar 17 17:29:48.153467 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:29:48.153478 zram_generator::config[1044]: No configuration found. Mar 17 17:29:48.153488 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:29:48.153499 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:29:48.153512 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:29:48.153523 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:29:48.153534 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:29:48.153601 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:29:48.153615 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:29:48.153625 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:29:48.153636 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:29:48.153646 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:29:48.153657 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:29:48.154375 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:29:48.154402 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:29:48.154414 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:29:48.154424 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:29:48.154435 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:29:48.154446 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Mar 17 17:29:48.154457 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:29:48.154468 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 17 17:29:48.154483 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:29:48.154493 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:29:48.154503 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:29:48.154515 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:29:48.154525 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:29:48.154535 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:29:48.154703 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:29:48.154777 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:29:48.154803 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:29:48.154815 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:29:48.154860 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:29:48.154876 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:29:48.154887 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:29:48.154897 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:29:48.154908 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:29:48.154918 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:29:48.154963 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:29:48.154980 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:29:48.154991 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:29:48.155001 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:29:48.155012 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:29:48.155023 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:29:48.155034 systemd[1]: Reached target machines.target - Containers. Mar 17 17:29:48.155044 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:29:48.155054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:29:48.155065 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:29:48.155077 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:29:48.155094 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:29:48.155106 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:29:48.155117 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:29:48.155127 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:29:48.155137 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 17 17:29:48.155148 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:29:48.155158 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:29:48.155171 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:29:48.155181 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:29:48.155192 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:29:48.155202 kernel: fuse: init (API version 7.39) Mar 17 17:29:48.155212 kernel: ACPI: bus type drm_connector registered Mar 17 17:29:48.155221 kernel: loop: module loaded Mar 17 17:29:48.155231 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:29:48.155241 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:29:48.155255 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:29:48.155267 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:29:48.155307 systemd-journald[1118]: Collecting audit messages is disabled. Mar 17 17:29:48.155333 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:29:48.155349 systemd-journald[1118]: Journal started Mar 17 17:29:48.155370 systemd-journald[1118]: Runtime Journal (/run/log/journal/71326a39fc584f8d8e2a1f4c3e63e8bd) is 5.9M, max 47.3M, 41.4M free. Mar 17 17:29:47.942922 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:29:48.155893 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:29:47.957961 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:29:47.958339 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:29:48.156554 systemd[1]: Stopped verity-setup.service. Mar 17 17:29:48.161294 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:29:48.161985 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:29:48.163227 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:29:48.164588 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:29:48.165674 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:29:48.166894 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:29:48.168134 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:29:48.169350 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:29:48.170840 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:29:48.173869 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:29:48.174027 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:29:48.175528 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:29:48.175717 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:29:48.177061 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:29:48.177216 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:29:48.178688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:29:48.178816 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Mar 17 17:29:48.180316 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:29:48.180466 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:29:48.181826 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:29:48.181964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:29:48.183508 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:29:48.184945 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:29:48.186619 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:29:48.199220 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:29:48.207693 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:29:48.209921 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:29:48.211053 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:29:48.211102 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:29:48.213189 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 17 17:29:48.215471 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:29:48.217735 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:29:48.218937 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:29:48.220526 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:29:48.222625 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:29:48.223881 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:29:48.227734 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:29:48.229299 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:29:48.230809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:29:48.233168 systemd-journald[1118]: Time spent on flushing to /var/log/journal/71326a39fc584f8d8e2a1f4c3e63e8bd is 19.249ms for 859 entries. Mar 17 17:29:48.233168 systemd-journald[1118]: System Journal (/var/log/journal/71326a39fc584f8d8e2a1f4c3e63e8bd) is 8.0M, max 195.6M, 187.6M free. Mar 17 17:29:48.260394 systemd-journald[1118]: Received client request to flush runtime journal. Mar 17 17:29:48.260430 kernel: loop0: detected capacity change from 0 to 113536 Mar 17 17:29:48.235734 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:29:48.239755 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:29:48.242279 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:29:48.243900 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:29:48.245203 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
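
[Annotation] The modprobe@dm_mod/drm/efi_pstore/fuse/loop units finished above are instances of systemd's modprobe@.service template: the part after "@" becomes the module name via the %i specifier. A simplified sketch of such a template (the unit systemd actually ships adds further ordering and dependency directives):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/modprobe -abq %i

Starting modprobe@fuse.service therefore runs "modprobe -abq fuse", which matches the "fuse: init (API version 7.39)" kernel line above.
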
Mar 17 17:29:48.246797 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:29:48.248422 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:29:48.254813 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:29:48.259726 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 17 17:29:48.262417 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:29:48.266120 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:29:48.273828 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:29:48.280770 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:29:48.290181 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:29:48.290918 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 17 17:29:48.293428 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Mar 17 17:29:48.293447 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Mar 17 17:29:48.294902 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 17:29:48.297973 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:29:48.304683 kernel: loop1: detected capacity change from 0 to 189592 Mar 17 17:29:48.308763 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:29:48.339977 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:29:48.342812 kernel: loop2: detected capacity change from 0 to 116808 Mar 17 17:29:48.349729 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:29:48.363024 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Mar 17 17:29:48.363046 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Mar 17 17:29:48.367004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:29:48.378598 kernel: loop3: detected capacity change from 0 to 113536 Mar 17 17:29:48.383817 kernel: loop4: detected capacity change from 0 to 189592 Mar 17 17:29:48.390570 kernel: loop5: detected capacity change from 0 to 116808 Mar 17 17:29:48.399890 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 17 17:29:48.400325 (sd-merge)[1185]: Merged extensions into '/usr'. Mar 17 17:29:48.405698 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:29:48.405712 systemd[1]: Reloading... Mar 17 17:29:48.480505 zram_generator::config[1214]: No configuration found. Mar 17 17:29:48.508960 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:29:48.561959 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:29:48.597162 systemd[1]: Reloading finished in 191 ms. Mar 17 17:29:48.632225 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
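
[Annotation] The (sd-merge) lines record systemd-sysext overlaying the three extension images onto /usr; each "loopN: detected capacity change" message above is one image being attached. Roughly, a sysext image such as the kubernetes one is a squashfs containing only a /usr subtree plus release metadata (the payload path below is illustrative, not read from the log):

    kubernetes-v1.31.0-arm64.raw       squashfs image, attached via a loop device
      usr/bin/kubelet                  example payload binary (assumed)
      usr/lib/extension-release.d/extension-release.kubernetes
                                       ID/VERSION_ID fields matched against os-release
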
Mar 17 17:29:48.633740 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:29:48.651742 systemd[1]: Starting ensure-sysext.service... Mar 17 17:29:48.653879 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:29:48.667897 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:29:48.667918 systemd[1]: Reloading... Mar 17 17:29:48.679211 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:29:48.679461 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:29:48.680091 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:29:48.680319 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Mar 17 17:29:48.680370 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Mar 17 17:29:48.682462 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:29:48.682476 systemd-tmpfiles[1246]: Skipping /boot Mar 17 17:29:48.689655 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:29:48.689664 systemd-tmpfiles[1246]: Skipping /boot Mar 17 17:29:48.718566 zram_generator::config[1273]: No configuration found. Mar 17 17:29:48.793393 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:29:48.828572 systemd[1]: Reloading finished in 160 ms. Mar 17 17:29:48.843606 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:29:48.855968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:29:48.864096 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:29:48.866671 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:29:48.868979 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:29:48.872822 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:29:48.875885 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:29:48.880801 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:29:48.886468 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:29:48.887640 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:29:48.890807 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:29:48.893859 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:29:48.895198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:29:48.898990 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:29:48.902552 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:29:48.905245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Mar 17 17:29:48.905399 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:29:48.907160 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:29:48.908286 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:29:48.916130 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:29:48.916983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:29:48.921493 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:29:48.924686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:29:48.940973 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:29:48.944051 systemd-udevd[1314]: Using default interface naming scheme 'v255'. Mar 17 17:29:48.946140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:29:48.950949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:29:48.952104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:29:48.955877 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:29:48.958323 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:29:48.959905 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:29:48.962874 augenrules[1351]: No rules Mar 17 17:29:48.963768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:29:48.964921 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:29:48.966951 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:29:48.967150 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:29:48.968892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:29:48.969031 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:29:48.971009 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:29:48.971194 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:29:48.972968 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:29:48.976140 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:29:48.977827 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:29:48.995599 systemd[1]: Finished ensure-sysext.service. Mar 17 17:29:49.012042 systemd-resolved[1313]: Positive Trust Anchors: Mar 17 17:29:49.012732 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:29:49.013808 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:29:49.014381 systemd-resolved[1313]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:29:49.014461 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:29:49.014858 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:29:49.021141 systemd-resolved[1313]: Defaulting to hostname 'linux'. Mar 17 17:29:49.021355 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:29:49.023352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:29:49.027739 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:29:49.029025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:29:49.033744 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:29:49.039059 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 17:29:49.040268 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:29:49.040617 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:29:49.042342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:29:49.042503 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:29:49.043947 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:29:49.044087 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:29:49.045964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:29:49.046102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:29:49.047022 augenrules[1383]: /sbin/augenrules: No change Mar 17 17:29:49.048003 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:29:49.048146 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:29:49.054971 augenrules[1414]: No rules Mar 17 17:29:49.055052 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 17 17:29:49.055106 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:29:49.056972 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:29:49.057033 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:29:49.060861 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:29:49.062860 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
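
[Annotation] The positive trust anchor logged above is the DNSSEC root key in DS-record form. Its fields decode as:

    . IN DS 20326 8 2 e06d44b8...
      key tag 20326  -> the 2017 root key-signing key ("KSK-2017")
      algorithm 8    -> RSA/SHA-256
      digest type 2  -> SHA-256 (digest of the KSK's DNSKEY record)

systemd-resolved uses this anchor to validate DNSSEC chains; the negative anchors that follow exempt private and special-use zones (home.arpa, RFC 1918 reverse zones, .local, and so on) from validation.
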
Mar 17 17:29:49.069671 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1373) Mar 17 17:29:49.110790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:29:49.119358 systemd-networkd[1397]: lo: Link UP Mar 17 17:29:49.119371 systemd-networkd[1397]: lo: Gained carrier Mar 17 17:29:49.120635 systemd-networkd[1397]: Enumeration completed Mar 17 17:29:49.120855 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:29:49.122728 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:29:49.124267 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:29:49.125858 systemd[1]: Reached target network.target - Network. Mar 17 17:29:49.126788 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:29:49.127950 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:29:49.127959 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:29:49.129339 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:29:49.131647 systemd-networkd[1397]: eth0: Link UP Mar 17 17:29:49.131659 systemd-networkd[1397]: eth0: Gained carrier Mar 17 17:29:49.131675 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:29:49.145198 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:29:49.150668 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:29:49.151630 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Mar 17 17:29:49.152236 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 17:29:49.152281 systemd-timesyncd[1402]: Initial clock synchronization to Mon 2025-03-17 17:29:49.047550 UTC. Mar 17 17:29:49.159879 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:29:49.168886 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:29:49.171894 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:29:49.205910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:29:49.214255 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:29:49.251255 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:29:49.252794 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:29:49.253935 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:29:49.255112 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:29:49.256378 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:29:49.257823 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:29:49.259042 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
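
[Annotation] eth0 was matched by Flatcar's catch-all zz-default.network and then acquired 10.0.0.80/16 over DHCPv4. As a hedged sketch, such a lowest-priority catch-all unit boils down to the following (the file Flatcar actually ships may set additional DHCP options):

    [Match]
    Name=*

    [Network]
    DHCP=yes

The "potentially unpredictable interface name" warning appears to refer to the match being made against the kernel-assigned name eth0, which networkd does not consider a stable, predictable name.
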
Mar 17 17:29:49.260453 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:29:49.261739 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:29:49.261776 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:29:49.262762 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:29:49.264633 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:29:49.267061 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:29:49.277589 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:29:49.279841 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:29:49.281443 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:29:49.282710 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:29:49.283757 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:29:49.284775 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:29:49.284817 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:29:49.285774 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:29:49.287843 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:29:49.287922 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:29:49.291261 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:29:49.294373 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:29:49.297041 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:29:49.298769 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:29:49.299587 jq[1444]: false Mar 17 17:29:49.302813 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:29:49.304952 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:29:49.310836 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:29:49.312990 extend-filesystems[1445]: Found loop3 Mar 17 17:29:49.312990 extend-filesystems[1445]: Found loop4 Mar 17 17:29:49.312990 extend-filesystems[1445]: Found loop5 Mar 17 17:29:49.312990 extend-filesystems[1445]: Found vda Mar 17 17:29:49.312990 extend-filesystems[1445]: Found vda1 Mar 17 17:29:49.312990 extend-filesystems[1445]: Found vda2 Mar 17 17:29:49.321919 extend-filesystems[1445]: Found vda3 Mar 17 17:29:49.321919 extend-filesystems[1445]: Found usr Mar 17 17:29:49.321919 extend-filesystems[1445]: Found vda4 Mar 17 17:29:49.321919 extend-filesystems[1445]: Found vda6 Mar 17 17:29:49.321919 extend-filesystems[1445]: Found vda7 Mar 17 17:29:49.321919 extend-filesystems[1445]: Found vda9 Mar 17 17:29:49.321919 extend-filesystems[1445]: Checking size of /dev/vda9 Mar 17 17:29:49.318322 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 17 17:29:49.314627 dbus-daemon[1443]: [system] SELinux support is enabled Mar 17 17:29:49.325121 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:29:49.325679 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:29:49.326374 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:29:49.328980 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:29:49.330727 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:29:49.333797 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:29:49.335892 extend-filesystems[1445]: Resized partition /dev/vda9 Mar 17 17:29:49.338156 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:29:49.339097 jq[1461]: true Mar 17 17:29:49.339321 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:29:49.339641 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:29:49.339782 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:29:49.340361 extend-filesystems[1466]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:29:49.343201 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:29:49.344649 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:29:49.353350 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 17:29:49.363463 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1364) Mar 17 17:29:49.363117 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:29:49.363165 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:29:49.364966 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:29:49.364986 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:29:49.368432 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:29:49.376807 jq[1469]: true Mar 17 17:29:49.383102 tar[1468]: linux-arm64/helm Mar 17 17:29:49.390877 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 17:29:49.393822 systemd-logind[1454]: New seat seat0. Mar 17 17:29:49.395951 update_engine[1460]: I20250317 17:29:49.395792 1460 main.cc:92] Flatcar Update Engine starting Mar 17 17:29:49.396301 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:29:49.401786 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:29:49.402764 update_engine[1460]: I20250317 17:29:49.402328 1460 update_check_scheduler.cc:74] Next update check in 8m49s Mar 17 17:29:49.406566 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 17:29:49.413960 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
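
[Annotation] The EXT4-fs lines above record an online grow of the root filesystem from 553472 to 1864699 4 KiB blocks, i.e. from roughly 2.1 GiB to 7.1 GiB, performed while / stayed mounted; ext4 supports growing in place. The resize2fs 1.47.1 banner shows the tool extend-filesystems used; the equivalent manual invocation would be (device name taken from the log):

    resize2fs /dev/vda9    # with no size argument, grow the filesystem to fill the device
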
Mar 17 17:29:49.419622 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:29:49.419622 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:29:49.419622 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 17:29:49.429388 extend-filesystems[1445]: Resized filesystem in /dev/vda9 Mar 17 17:29:49.419822 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:29:49.419985 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:29:49.450876 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:29:49.452489 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:29:49.456480 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:29:49.465685 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:29:49.583009 containerd[1470]: time="2025-03-17T17:29:49.582864160Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:29:49.613172 containerd[1470]: time="2025-03-17T17:29:49.613110640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:29:49.614669 containerd[1470]: time="2025-03-17T17:29:49.614632040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:29:49.614669 containerd[1470]: time="2025-03-17T17:29:49.614665320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:29:49.614748 containerd[1470]: time="2025-03-17T17:29:49.614682440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:29:49.614867 containerd[1470]: time="2025-03-17T17:29:49.614844160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:29:49.614901 containerd[1470]: time="2025-03-17T17:29:49.614870280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:29:49.614951 containerd[1470]: time="2025-03-17T17:29:49.614931200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:29:49.614951 containerd[1470]: time="2025-03-17T17:29:49.614947720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:29:49.615139 containerd[1470]: time="2025-03-17T17:29:49.615117520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:29:49.615139 containerd[1470]: time="2025-03-17T17:29:49.615137400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 17 17:29:49.615190 containerd[1470]: time="2025-03-17T17:29:49.615151200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:29:49.615190 containerd[1470]: time="2025-03-17T17:29:49.615161120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:29:49.615260 containerd[1470]: time="2025-03-17T17:29:49.615241800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:29:49.615453 containerd[1470]: time="2025-03-17T17:29:49.615432840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:29:49.615572 containerd[1470]: time="2025-03-17T17:29:49.615534960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:29:49.615601 containerd[1470]: time="2025-03-17T17:29:49.615570960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:29:49.615667 containerd[1470]: time="2025-03-17T17:29:49.615648080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:29:49.615712 containerd[1470]: time="2025-03-17T17:29:49.615696560Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:29:49.619035 containerd[1470]: time="2025-03-17T17:29:49.619004360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:29:49.619122 containerd[1470]: time="2025-03-17T17:29:49.619056440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:29:49.619122 containerd[1470]: time="2025-03-17T17:29:49.619078840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:29:49.619122 containerd[1470]: time="2025-03-17T17:29:49.619096360Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:29:49.619122 containerd[1470]: time="2025-03-17T17:29:49.619120240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619253960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619485920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619606760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619624040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619637720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619651320Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619664040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619675880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619689160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619702960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619715240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619727520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619738320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:29:49.619785 containerd[1470]: time="2025-03-17T17:29:49.619758960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619773080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619785040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619797120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619814680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619827600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619838760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619850480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619863720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619878640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619890800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619903800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619915800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619930960Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619951360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620084 containerd[1470]: time="2025-03-17T17:29:49.619963520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620451 containerd[1470]: time="2025-03-17T17:29:49.619974280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:29:49.620451 containerd[1470]: time="2025-03-17T17:29:49.620166560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:29:49.620451 containerd[1470]: time="2025-03-17T17:29:49.620185120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:29:49.620451 containerd[1470]: time="2025-03-17T17:29:49.620196280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:29:49.620451 containerd[1470]: time="2025-03-17T17:29:49.620208280Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:29:49.620451 containerd[1470]: time="2025-03-17T17:29:49.620217720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:29:49.620451 containerd[1470]: time="2025-03-17T17:29:49.620229240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:29:49.620451 containerd[1470]: time="2025-03-17T17:29:49.620239400Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:29:49.620451 containerd[1470]: time="2025-03-17T17:29:49.620249920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 17:29:49.622150 containerd[1470]: time="2025-03-17T17:29:49.620927200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:29:49.622150 containerd[1470]: time="2025-03-17T17:29:49.620994320Z" level=info msg="Connect containerd service" Mar 17 17:29:49.622150 containerd[1470]: time="2025-03-17T17:29:49.621178080Z" level=info msg="using legacy CRI server" Mar 17 17:29:49.622150 containerd[1470]: time="2025-03-17T17:29:49.621206960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:29:49.622150 containerd[1470]: time="2025-03-17T17:29:49.621484880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:29:49.623712 containerd[1470]: time="2025-03-17T17:29:49.623674080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:29:49.624192 
containerd[1470]: time="2025-03-17T17:29:49.623983280Z" level=info msg="Start subscribing containerd event" Mar 17 17:29:49.624192 containerd[1470]: time="2025-03-17T17:29:49.624036600Z" level=info msg="Start recovering state" Mar 17 17:29:49.624192 containerd[1470]: time="2025-03-17T17:29:49.624110480Z" level=info msg="Start event monitor" Mar 17 17:29:49.624192 containerd[1470]: time="2025-03-17T17:29:49.624123480Z" level=info msg="Start snapshots syncer" Mar 17 17:29:49.624314 containerd[1470]: time="2025-03-17T17:29:49.624133320Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:29:49.624749 containerd[1470]: time="2025-03-17T17:29:49.624402360Z" level=info msg="Start streaming server" Mar 17 17:29:49.625767 containerd[1470]: time="2025-03-17T17:29:49.625739480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:29:49.627514 containerd[1470]: time="2025-03-17T17:29:49.625789160Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:29:49.627514 containerd[1470]: time="2025-03-17T17:29:49.625836760Z" level=info msg="containerd successfully booted in 0.043849s" Mar 17 17:29:49.625947 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:29:49.753078 tar[1468]: linux-arm64/LICENSE Mar 17 17:29:49.753078 tar[1468]: linux-arm64/README.md Mar 17 17:29:49.765967 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:29:49.864874 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:29:49.882959 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:29:49.897811 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:29:49.902667 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:29:49.902853 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:29:49.905856 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:29:49.919116 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:29:49.930813 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:29:49.932938 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 17 17:29:49.934226 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:29:50.281668 systemd-networkd[1397]: eth0: Gained IPv6LL Mar 17 17:29:50.284074 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:29:50.286985 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:29:50.296843 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:29:50.299419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:29:50.301744 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:29:50.320462 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:29:50.320682 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:29:50.322633 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:29:50.325441 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:29:50.819320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:29:50.821041 systemd[1]: Reached target multi-user.target - Multi-User System. 
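[Editor's note] The CNI error a few entries up ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected on a fresh node: the CRI plugin watches NetworkPluginConfDir (/etc/cni/net.d per the config dump above) and keeps retrying until a network add-on drops a config there. A minimal sketch of the kind of file that satisfies the loader; the network name, bridge name, and subnet below are assumed placeholder values, and the bridge/host-local/portmap binaries must exist in /opt/cni/bin (the NetworkPluginBinDir from the same dump). A real cluster would get this file from its CNI add-on instead.

    import json, pathlib

    # Assumed/illustrative values throughout; writing to /etc requires root.
    conf = {
        "cniVersion": "1.0.0",
        "name": "examplenet",
        "plugins": [
            {"type": "bridge", "bridge": "cni0", "isGateway": True, "ipMasq": True,
             "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }
    path = pathlib.Path("/etc/cni/net.d/10-examplenet.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))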
Mar 17 17:29:50.823288 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:29:50.827175 systemd[1]: Startup finished in 544ms (kernel) + 4.883s (initrd) + 3.277s (userspace) = 8.705s. Mar 17 17:29:51.249631 kubelet[1556]: E0317 17:29:51.249463 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:29:51.251120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:29:51.251252 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:29:55.285224 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:29:55.286308 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:51848.service - OpenSSH per-connection server daemon (10.0.0.1:51848). Mar 17 17:29:55.345936 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 51848 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:55.347313 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:55.358049 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:29:55.367767 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:29:55.369365 systemd-logind[1454]: New session 1 of user core. Mar 17 17:29:55.376316 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:29:55.378412 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:29:55.384134 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:29:55.467155 systemd[1573]: Queued start job for default target default.target. Mar 17 17:29:55.479512 systemd[1573]: Created slice app.slice - User Application Slice. Mar 17 17:29:55.479573 systemd[1573]: Reached target paths.target - Paths. Mar 17 17:29:55.479586 systemd[1573]: Reached target timers.target - Timers. Mar 17 17:29:55.480729 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:29:55.489998 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:29:55.490055 systemd[1573]: Reached target sockets.target - Sockets. Mar 17 17:29:55.490067 systemd[1573]: Reached target basic.target - Basic System. Mar 17 17:29:55.490100 systemd[1573]: Reached target default.target - Main User Target. Mar 17 17:29:55.490124 systemd[1573]: Startup finished in 101ms. Mar 17 17:29:55.490400 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:29:55.491642 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:29:55.554713 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:51864.service - OpenSSH per-connection server daemon (10.0.0.1:51864). Mar 17 17:29:55.596395 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 51864 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:55.597647 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:55.602616 systemd-logind[1454]: New session 2 of user core. Mar 17 17:29:55.615761 systemd[1]: Started session-2.scope - Session 2 of User core. 
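[Editor's note] The kubelet exit above (open /var/lib/kubelet/config.yaml: no such file or directory) is the normal first-boot failure mode on a kubeadm-style node: the unit starts before the config file exists, exits 1, and systemd keeps restarting it until kubeadm init/join writes the file. A sketch of a minimal KubeletConfiguration for illustration only; these field values are assumptions, not what kubeadm would generate here, though cgroupDriver: systemd does match the SystemdCgroup:true runc option in the containerd config dump earlier.

    import pathlib
    import textwrap

    # Illustrative minimal config; kubeadm normally writes this file itself.
    config_yaml = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
        staticPodPath: /etc/kubernetes/manifests
        """)
    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(config_yaml)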
Mar 17 17:29:55.667025 sshd[1586]: Connection closed by 10.0.0.1 port 51864 Mar 17 17:29:55.667394 sshd-session[1584]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:55.681129 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:51864.service: Deactivated successfully. Mar 17 17:29:55.682663 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:29:55.683960 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:29:55.685165 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:51870.service - OpenSSH per-connection server daemon (10.0.0.1:51870). Mar 17 17:29:55.686028 systemd-logind[1454]: Removed session 2. Mar 17 17:29:55.725079 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 51870 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:55.726181 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:55.729612 systemd-logind[1454]: New session 3 of user core. Mar 17 17:29:55.738688 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:29:55.785258 sshd[1593]: Connection closed by 10.0.0.1 port 51870 Mar 17 17:29:55.785138 sshd-session[1591]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:55.799992 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:51870.service: Deactivated successfully. Mar 17 17:29:55.801469 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:29:55.804696 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:29:55.812801 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:51886.service - OpenSSH per-connection server daemon (10.0.0.1:51886). Mar 17 17:29:55.815068 systemd-logind[1454]: Removed session 3. Mar 17 17:29:55.850021 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 51886 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:55.851150 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:55.854541 systemd-logind[1454]: New session 4 of user core. Mar 17 17:29:55.869696 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:29:55.919895 sshd[1600]: Connection closed by 10.0.0.1 port 51886 Mar 17 17:29:55.920640 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:55.931882 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:51886.service: Deactivated successfully. Mar 17 17:29:55.933253 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:29:55.934450 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:29:55.935556 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:51894.service - OpenSSH per-connection server daemon (10.0.0.1:51894). Mar 17 17:29:55.936293 systemd-logind[1454]: Removed session 4. Mar 17 17:29:55.974103 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 51894 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:55.975189 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:55.978592 systemd-logind[1454]: New session 5 of user core. Mar 17 17:29:55.988688 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 17 17:29:56.049515 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:29:56.049831 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:29:56.067391 sudo[1608]: pam_unix(sudo:session): session closed for user root Mar 17 17:29:56.070594 sshd[1607]: Connection closed by 10.0.0.1 port 51894 Mar 17 17:29:56.071118 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:56.084171 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:51894.service: Deactivated successfully. Mar 17 17:29:56.086913 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:29:56.088402 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:29:56.089586 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:51902.service - OpenSSH per-connection server daemon (10.0.0.1:51902). Mar 17 17:29:56.090209 systemd-logind[1454]: Removed session 5. Mar 17 17:29:56.130213 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 51902 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:56.131364 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:56.134851 systemd-logind[1454]: New session 6 of user core. Mar 17 17:29:56.143685 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:29:56.192895 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:29:56.193167 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:29:56.196079 sudo[1617]: pam_unix(sudo:session): session closed for user root Mar 17 17:29:56.200325 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:29:56.200610 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:29:56.217822 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:29:56.239367 augenrules[1639]: No rules Mar 17 17:29:56.239957 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:29:56.240120 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:29:56.241352 sudo[1616]: pam_unix(sudo:session): session closed for user root Mar 17 17:29:56.242610 sshd[1615]: Connection closed by 10.0.0.1 port 51902 Mar 17 17:29:56.242776 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:56.253719 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:51902.service: Deactivated successfully. Mar 17 17:29:56.254978 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:29:56.256252 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:29:56.257364 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:51912.service - OpenSSH per-connection server daemon (10.0.0.1:51912). Mar 17 17:29:56.258159 systemd-logind[1454]: Removed session 6. Mar 17 17:29:56.296383 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 51912 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:56.297397 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:56.301125 systemd-logind[1454]: New session 7 of user core. Mar 17 17:29:56.313688 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 17 17:29:56.362700 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:29:56.363952 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:29:56.673772 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:29:56.673932 (dockerd)[1670]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:29:56.929513 dockerd[1670]: time="2025-03-17T17:29:56.929390784Z" level=info msg="Starting up" Mar 17 17:29:57.085978 dockerd[1670]: time="2025-03-17T17:29:57.085928462Z" level=info msg="Loading containers: start." Mar 17 17:29:57.249570 kernel: Initializing XFRM netlink socket Mar 17 17:29:57.325391 systemd-networkd[1397]: docker0: Link UP Mar 17 17:29:57.365895 dockerd[1670]: time="2025-03-17T17:29:57.365853620Z" level=info msg="Loading containers: done." Mar 17 17:29:57.390076 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2425646724-merged.mount: Deactivated successfully. Mar 17 17:29:57.392269 dockerd[1670]: time="2025-03-17T17:29:57.392219352Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:29:57.392356 dockerd[1670]: time="2025-03-17T17:29:57.392323875Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:29:57.392452 dockerd[1670]: time="2025-03-17T17:29:57.392425929Z" level=info msg="Daemon has completed initialization" Mar 17 17:29:57.423429 dockerd[1670]: time="2025-03-17T17:29:57.423310144Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:29:57.423689 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:29:58.199554 containerd[1470]: time="2025-03-17T17:29:58.198015727Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 17 17:29:58.785998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3827535933.mount: Deactivated successfully. 
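[Editor's note] dockerd comes up alongside containerd here, and the overlay2 message is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled the daemon skips the native diff driver, nothing is broken. A quick sketch using the Docker SDK for Python (pip install docker; assuming the SDK is available on the box) to confirm the daemon answering on the local socket reports the same version and storage driver as the log:

    import docker

    client = docker.from_env()   # honours DOCKER_HOST, defaults to the local socket
    info = client.info()
    # Expect values matching the log above, e.g. "27.2.1 overlay2".
    print(info["ServerVersion"], info["Driver"])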
Mar 17 17:29:59.804724 containerd[1470]: time="2025-03-17T17:29:59.804671065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:29:59.806460 containerd[1470]: time="2025-03-17T17:29:59.806377921Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552768" Mar 17 17:29:59.807651 containerd[1470]: time="2025-03-17T17:29:59.807621995Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:29:59.813797 containerd[1470]: time="2025-03-17T17:29:59.812768747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:29:59.813797 containerd[1470]: time="2025-03-17T17:29:59.813628472Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 1.614331719s" Mar 17 17:29:59.813797 containerd[1470]: time="2025-03-17T17:29:59.813655891Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\"" Mar 17 17:29:59.814330 containerd[1470]: time="2025-03-17T17:29:59.814305427Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 17 17:30:01.051566 containerd[1470]: time="2025-03-17T17:30:01.051510778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:01.052548 containerd[1470]: time="2025-03-17T17:30:01.052255816Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458980" Mar 17 17:30:01.053682 containerd[1470]: time="2025-03-17T17:30:01.053634331Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:01.057586 containerd[1470]: time="2025-03-17T17:30:01.057100117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:01.057872 containerd[1470]: time="2025-03-17T17:30:01.057836503Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 1.243493648s" Mar 17 17:30:01.057926 containerd[1470]: time="2025-03-17T17:30:01.057873344Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\"" Mar 17 17:30:01.058439 
containerd[1470]: time="2025-03-17T17:30:01.058406426Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 17 17:30:01.501530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:30:01.516750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:30:01.608808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:30:01.612312 (kubelet)[1933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:30:01.656814 kubelet[1933]: E0317 17:30:01.656714 1933 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:30:01.659789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:30:01.659928 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:30:02.232278 containerd[1470]: time="2025-03-17T17:30:02.231223045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:02.232278 containerd[1470]: time="2025-03-17T17:30:02.231922451Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125831" Mar 17 17:30:02.232768 containerd[1470]: time="2025-03-17T17:30:02.232736870Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:02.235693 containerd[1470]: time="2025-03-17T17:30:02.235664064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:02.236414 containerd[1470]: time="2025-03-17T17:30:02.236380300Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.177912584s" Mar 17 17:30:02.236466 containerd[1470]: time="2025-03-17T17:30:02.236415912Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\"" Mar 17 17:30:02.237215 containerd[1470]: time="2025-03-17T17:30:02.237183034Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 17:30:03.154915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3611553207.mount: Deactivated successfully. 
Mar 17 17:30:03.479632 containerd[1470]: time="2025-03-17T17:30:03.479471922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:03.479953 containerd[1470]: time="2025-03-17T17:30:03.479908167Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871917" Mar 17 17:30:03.481396 containerd[1470]: time="2025-03-17T17:30:03.481347091Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:03.484336 containerd[1470]: time="2025-03-17T17:30:03.484298251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:03.484949 containerd[1470]: time="2025-03-17T17:30:03.484914745Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.247698649s" Mar 17 17:30:03.485020 containerd[1470]: time="2025-03-17T17:30:03.484955669Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 17 17:30:03.485396 containerd[1470]: time="2025-03-17T17:30:03.485373845Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:30:03.992789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1179568313.mount: Deactivated successfully. 
Mar 17 17:30:04.610298 containerd[1470]: time="2025-03-17T17:30:04.610230820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:04.610669 containerd[1470]: time="2025-03-17T17:30:04.610594853Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Mar 17 17:30:04.613083 containerd[1470]: time="2025-03-17T17:30:04.613027355Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:04.615937 containerd[1470]: time="2025-03-17T17:30:04.615879622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:04.617141 containerd[1470]: time="2025-03-17T17:30:04.617107282Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.131698854s" Mar 17 17:30:04.617141 containerd[1470]: time="2025-03-17T17:30:04.617137043Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 17:30:04.617573 containerd[1470]: time="2025-03-17T17:30:04.617552381Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 17:30:05.085232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1250675276.mount: Deactivated successfully. 
Mar 17 17:30:05.090988 containerd[1470]: time="2025-03-17T17:30:05.090940407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:05.092612 containerd[1470]: time="2025-03-17T17:30:05.092555946Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Mar 17 17:30:05.093888 containerd[1470]: time="2025-03-17T17:30:05.093847412Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:05.095855 containerd[1470]: time="2025-03-17T17:30:05.095808093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:05.096674 containerd[1470]: time="2025-03-17T17:30:05.096631005Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 479.051973ms" Mar 17 17:30:05.096674 containerd[1470]: time="2025-03-17T17:30:05.096671424Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 17 17:30:05.097469 containerd[1470]: time="2025-03-17T17:30:05.097431373Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 17 17:30:05.639834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155530222.mount: Deactivated successfully. Mar 17 17:30:07.310621 containerd[1470]: time="2025-03-17T17:30:07.310561889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:07.311577 containerd[1470]: time="2025-03-17T17:30:07.311319552Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Mar 17 17:30:07.312354 containerd[1470]: time="2025-03-17T17:30:07.312305596Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:07.315568 containerd[1470]: time="2025-03-17T17:30:07.315305197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:07.317011 containerd[1470]: time="2025-03-17T17:30:07.316655245Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.219152846s" Mar 17 17:30:07.317011 containerd[1470]: time="2025-03-17T17:30:07.316687414Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Mar 17 17:30:11.706841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
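[Editor's note] Each pull in this run logs both a blob size and a wall-clock duration, so the effective registry throughput can be read straight off. A small sketch, with sizes (bytes) and durations (seconds) copied verbatim from the containerd messages above:

    # Back-of-envelope pull rates from the logged size/duration pairs.
    pulls = {
        "kube-apiserver:v1.31.7":          (25549566, 1.614331719),
        "kube-controller-manager:v1.31.7": (23899774, 1.243493648),
        "kube-scheduler:v1.31.7":          (18566643, 1.177912584),
        "kube-proxy:v1.31.7":              (26870934, 1.247698649),
        "coredns:v1.11.1":                 (16482581, 1.131698854),
        "pause:3.10":                      (267933,   0.479051973),
        "etcd:3.5.15-0":                   (66535646, 2.219152846),
    }
    for image, (size, secs) in pulls.items():
        print(f"{image:35s} {size / secs / 1e6:6.1f} MB/s")

The large images all land in the 13-30 MB/s range; the tiny pause image is dominated by round-trip latency rather than bandwidth, which is why its rate looks so much lower.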
Mar 17 17:30:11.718872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:30:11.729173 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:30:11.729273 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:30:11.729567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:30:11.732705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:30:11.755757 systemd[1]: Reloading requested from client PID 2091 ('systemctl') (unit session-7.scope)... Mar 17 17:30:11.755772 systemd[1]: Reloading... Mar 17 17:30:11.835575 zram_generator::config[2130]: No configuration found. Mar 17 17:30:11.956040 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:30:12.008840 systemd[1]: Reloading finished in 252 ms. Mar 17 17:30:12.053532 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:30:12.056801 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:30:12.057140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:30:12.058720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:30:12.153496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:30:12.157777 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:30:12.196028 kubelet[2177]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:30:12.196028 kubelet[2177]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:30:12.196028 kubelet[2177]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
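[Editor's note] The three deprecation warnings at the end of this restart are the kubelet nudging flags into the config file. A sketch of the mapping; the field names are from kubelet.config.k8s.io/v1beta1, and the --volume-plugin-dir entry in particular should be treated as an assumption rather than a verified reference:

    # Deprecated flags from the log and their assumed KubeletConfiguration homes.
    # --pod-infra-container-image has no field: per the warning above, the image
    # garbage collector will take the sandbox image from CRI instead.
    flag_to_field = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir":          "volumePluginDir",
        "--pod-infra-container-image":  None,
    }
    for flag, field in flag_to_field.items():
        print(f"{flag:31} -> {field or 'no field; provided via CRI'}")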
Mar 17 17:30:12.196376 kubelet[2177]: I0317 17:30:12.196203 2177 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:30:13.291271 kubelet[2177]: I0317 17:30:13.291218 2177 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:30:13.291271 kubelet[2177]: I0317 17:30:13.291260 2177 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:30:13.291649 kubelet[2177]: I0317 17:30:13.291509 2177 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:30:13.328913 kubelet[2177]: E0317 17:30:13.328853 2177 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:30:13.330564 kubelet[2177]: I0317 17:30:13.330231 2177 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:30:13.338696 kubelet[2177]: E0317 17:30:13.338640 2177 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:30:13.338696 kubelet[2177]: I0317 17:30:13.338683 2177 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:30:13.344103 kubelet[2177]: I0317 17:30:13.344061 2177 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:30:13.345274 kubelet[2177]: I0317 17:30:13.345243 2177 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:30:13.345444 kubelet[2177]: I0317 17:30:13.345404 2177 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:30:13.345660 kubelet[2177]: I0317 17:30:13.345439 2177 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:30:13.345753 kubelet[2177]: I0317 17:30:13.345668 2177 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:30:13.345753 kubelet[2177]: I0317 17:30:13.345678 2177 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:30:13.345950 kubelet[2177]: I0317 17:30:13.345927 2177 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:30:13.347831 kubelet[2177]: I0317 17:30:13.347803 2177 kubelet.go:408] "Attempting to sync node with API server" Mar 17 17:30:13.347831 kubelet[2177]: I0317 17:30:13.347838 2177 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:30:13.348131 kubelet[2177]: I0317 17:30:13.347931 2177 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:30:13.348131 kubelet[2177]: I0317 17:30:13.347944 2177 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:30:13.350092 kubelet[2177]: I0317 17:30:13.350069 2177 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:30:13.351792 kubelet[2177]: I0317 17:30:13.351763 2177 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:30:13.352053 kubelet[2177]: W0317 17:30:13.352010 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.80:6443: connect: connection refused Mar 17 17:30:13.352173 kubelet[2177]: E0317 17:30:13.352144 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:30:13.352568 kubelet[2177]: W0317 17:30:13.352550 2177 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:30:13.353234 kubelet[2177]: W0317 17:30:13.353164 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Mar 17 17:30:13.353234 kubelet[2177]: E0317 17:30:13.353211 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:30:13.353330 kubelet[2177]: I0317 17:30:13.353293 2177 server.go:1269] "Started kubelet" Mar 17 17:30:13.354574 kubelet[2177]: I0317 17:30:13.354376 2177 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:30:13.355835 kubelet[2177]: I0317 17:30:13.355765 2177 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:30:13.357577 kubelet[2177]: I0317 17:30:13.356357 2177 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:30:13.358956 kubelet[2177]: I0317 17:30:13.358921 2177 server.go:460] "Adding debug handlers to kubelet server" Mar 17 17:30:13.360571 kubelet[2177]: E0317 17:30:13.360552 2177 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:30:13.361444 kubelet[2177]: I0317 17:30:13.361429 2177 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:30:13.362208 kubelet[2177]: I0317 17:30:13.362182 2177 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:30:13.362596 kubelet[2177]: I0317 17:30:13.362582 2177 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:30:13.365562 kubelet[2177]: I0317 17:30:13.364886 2177 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:30:13.365562 kubelet[2177]: I0317 17:30:13.365198 2177 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:30:13.365562 kubelet[2177]: E0317 17:30:13.365208 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:30:13.365562 kubelet[2177]: I0317 17:30:13.365277 2177 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:30:13.365562 kubelet[2177]: I0317 17:30:13.365286 2177 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:30:13.365562 kubelet[2177]: E0317 17:30:13.359589 2177 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da754c3059789 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:30:13.353265033 +0000 UTC m=+1.192009242,LastTimestamp:2025-03-17 17:30:13.353265033 +0000 UTC m=+1.192009242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:30:13.365776 kubelet[2177]: E0317 17:30:13.365737 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="200ms" Mar 17 17:30:13.365868 kubelet[2177]: W0317 17:30:13.365813 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Mar 17 17:30:13.365899 kubelet[2177]: E0317 17:30:13.365872 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:30:13.366658 kubelet[2177]: I0317 17:30:13.366642 2177 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:30:13.378760 kubelet[2177]: I0317 17:30:13.378735 2177 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 
17:30:13.378760 kubelet[2177]: I0317 17:30:13.378754 2177 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:30:13.378883 kubelet[2177]: I0317 17:30:13.378772 2177 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:30:13.379205 kubelet[2177]: I0317 17:30:13.379172 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:30:13.380662 kubelet[2177]: I0317 17:30:13.380636 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:30:13.380662 kubelet[2177]: I0317 17:30:13.380664 2177 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:30:13.380855 kubelet[2177]: I0317 17:30:13.380683 2177 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:30:13.380855 kubelet[2177]: E0317 17:30:13.380729 2177 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:30:13.388058 kubelet[2177]: W0317 17:30:13.387985 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Mar 17 17:30:13.388161 kubelet[2177]: E0317 17:30:13.388056 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:30:13.402198 kubelet[2177]: I0317 17:30:13.402169 2177 policy_none.go:49] "None policy: Start" Mar 17 17:30:13.402944 kubelet[2177]: I0317 17:30:13.402929 2177 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:30:13.402995 kubelet[2177]: I0317 17:30:13.402955 2177 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:30:13.465391 kubelet[2177]: E0317 17:30:13.465360 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:30:13.468063 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:30:13.478609 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:30:13.481021 kubelet[2177]: E0317 17:30:13.480997 2177 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:30:13.481913 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
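[Editor's note] Every reflector error in this stretch (Node, Service, CSIDriver, RuntimeClass) reduces to one condition: nothing is listening on 10.0.0.80:6443 yet, because the kubelet itself has to start the static kube-apiserver pod it is trying to talk to. A sketch of the same probe client-go keeps failing at, with the address taken from the log:

    import socket

    try:
        socket.create_connection(("10.0.0.80", 6443), timeout=2).close()
        print("apiserver reachable")
    except OSError as exc:
        print(f"apiserver not up yet: {exc}")  # [Errno 111] Connection refused, as logged

Once the static pod manifests below are synced and the apiserver container is running, these errors stop on their own; no intervention is implied by them during bootstrap.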
Mar 17 17:30:13.489499 kubelet[2177]: I0317 17:30:13.489360 2177 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:30:13.489623 kubelet[2177]: I0317 17:30:13.489588 2177 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:30:13.489670 kubelet[2177]: I0317 17:30:13.489614 2177 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:30:13.490624 kubelet[2177]: I0317 17:30:13.489853 2177 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:30:13.491336 kubelet[2177]: E0317 17:30:13.491283 2177 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 17:30:13.566766 kubelet[2177]: E0317 17:30:13.566660 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="400ms" Mar 17 17:30:13.591889 kubelet[2177]: I0317 17:30:13.591850 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:30:13.592266 kubelet[2177]: E0317 17:30:13.592243 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Mar 17 17:30:13.689945 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. Mar 17 17:30:13.704019 systemd[1]: Created slice kubepods-burstable-pode8d37129d548b6d9527b48040015281b.slice - libcontainer container kubepods-burstable-pode8d37129d548b6d9527b48040015281b.slice. Mar 17 17:30:13.707901 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. 
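[Editor's note] The three slices created at the end of this entry are the static control-plane pods. With the systemd cgroup driver (CgroupDriver:systemd in the NodeConfig dump above), each pod lands in a kubepods-<qos>-pod<uid>.slice unit. A sketch of the naming, checked against one UID from the log; the dash-to-underscore escaping is how the kubelet's systemd driver handles UIDs generally, though these static-pod UIDs happen to contain no dashes:

    def pod_slice(uid: str, qos: str = "burstable") -> str:
        # systemd unit names reserve '-' as a hierarchy separator, so the
        # kubelet replaces dashes in the pod UID with underscores.
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("60762308083b5ef6c837b1be48ec53d6"))
    # -> kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice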
Mar 17 17:30:13.766559 kubelet[2177]: I0317 17:30:13.766510 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:30:13.766559 kubelet[2177]: I0317 17:30:13.766566 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8d37129d548b6d9527b48040015281b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e8d37129d548b6d9527b48040015281b\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:30:13.766739 kubelet[2177]: I0317 17:30:13.766594 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8d37129d548b6d9527b48040015281b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e8d37129d548b6d9527b48040015281b\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:30:13.766739 kubelet[2177]: I0317 17:30:13.766611 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:30:13.766739 kubelet[2177]: I0317 17:30:13.766641 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:30:13.766739 kubelet[2177]: I0317 17:30:13.766655 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:30:13.766739 kubelet[2177]: I0317 17:30:13.766670 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:30:13.766864 kubelet[2177]: I0317 17:30:13.766688 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:30:13.766864 kubelet[2177]: I0317 17:30:13.766746 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8d37129d548b6d9527b48040015281b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e8d37129d548b6d9527b48040015281b\") " 
pod="kube-system/kube-apiserver-localhost" Mar 17 17:30:13.793949 kubelet[2177]: I0317 17:30:13.793892 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:30:13.794203 kubelet[2177]: E0317 17:30:13.794183 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Mar 17 17:30:13.967559 kubelet[2177]: E0317 17:30:13.967431 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="800ms" Mar 17 17:30:14.001920 kubelet[2177]: E0317 17:30:14.001880 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:14.002748 containerd[1470]: time="2025-03-17T17:30:14.002709767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 17 17:30:14.007312 kubelet[2177]: E0317 17:30:14.007280 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:14.007718 containerd[1470]: time="2025-03-17T17:30:14.007680175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e8d37129d548b6d9527b48040015281b,Namespace:kube-system,Attempt:0,}" Mar 17 17:30:14.010211 kubelet[2177]: E0317 17:30:14.010137 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:14.010560 containerd[1470]: time="2025-03-17T17:30:14.010516790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 17 17:30:14.195224 kubelet[2177]: I0317 17:30:14.195175 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:30:14.195506 kubelet[2177]: E0317 17:30:14.195483 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Mar 17 17:30:14.474120 kubelet[2177]: W0317 17:30:14.474065 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Mar 17 17:30:14.474425 kubelet[2177]: E0317 17:30:14.474130 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:30:14.514195 kubelet[2177]: W0317 17:30:14.514151 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Mar 17 17:30:14.514195 kubelet[2177]: E0317 17:30:14.514198 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:30:14.536105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3251244417.mount: Deactivated successfully. Mar 17 17:30:14.539980 containerd[1470]: time="2025-03-17T17:30:14.539937790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:30:14.542779 containerd[1470]: time="2025-03-17T17:30:14.542742370Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Mar 17 17:30:14.546582 containerd[1470]: time="2025-03-17T17:30:14.545070852Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:30:14.546822 containerd[1470]: time="2025-03-17T17:30:14.546787424Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:30:14.547162 containerd[1470]: time="2025-03-17T17:30:14.547081016Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:30:14.547849 containerd[1470]: time="2025-03-17T17:30:14.547611558Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:30:14.548306 containerd[1470]: time="2025-03-17T17:30:14.548258139Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:30:14.550764 containerd[1470]: time="2025-03-17T17:30:14.550735654Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 540.135579ms" Mar 17 17:30:14.551161 containerd[1470]: time="2025-03-17T17:30:14.551139172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:30:14.552069 containerd[1470]: time="2025-03-17T17:30:14.552046910Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.255017ms" Mar 17 17:30:14.555829 containerd[1470]: 
time="2025-03-17T17:30:14.555693918Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.941443ms" Mar 17 17:30:14.635370 kubelet[2177]: W0317 17:30:14.635328 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Mar 17 17:30:14.635481 kubelet[2177]: E0317 17:30:14.635381 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:30:14.663756 kubelet[2177]: W0317 17:30:14.663675 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Mar 17 17:30:14.663756 kubelet[2177]: E0317 17:30:14.663722 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:30:14.701052 containerd[1470]: time="2025-03-17T17:30:14.700763286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:30:14.701220 containerd[1470]: time="2025-03-17T17:30:14.701155221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:30:14.701220 containerd[1470]: time="2025-03-17T17:30:14.701038903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:30:14.701220 containerd[1470]: time="2025-03-17T17:30:14.701202276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:30:14.701315 containerd[1470]: time="2025-03-17T17:30:14.701226921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:14.701533 containerd[1470]: time="2025-03-17T17:30:14.701391892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:30:14.701533 containerd[1470]: time="2025-03-17T17:30:14.701440504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:30:14.701533 containerd[1470]: time="2025-03-17T17:30:14.701455923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:14.701533 containerd[1470]: time="2025-03-17T17:30:14.701203474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:14.701747 containerd[1470]: time="2025-03-17T17:30:14.701688240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:14.702383 containerd[1470]: time="2025-03-17T17:30:14.702118841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:14.702383 containerd[1470]: time="2025-03-17T17:30:14.702288725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:14.723731 systemd[1]: Started cri-containerd-07ca6b72a95fd3ed5c390e7681980d2257518d440394c22bf59f61d5fee7e158.scope - libcontainer container 07ca6b72a95fd3ed5c390e7681980d2257518d440394c22bf59f61d5fee7e158. Mar 17 17:30:14.728403 systemd[1]: Started cri-containerd-11c7ad7d6b8aaaf330a00bfebfe5428be56861f7718e057130a7dc69da9191f8.scope - libcontainer container 11c7ad7d6b8aaaf330a00bfebfe5428be56861f7718e057130a7dc69da9191f8. Mar 17 17:30:14.729448 systemd[1]: Started cri-containerd-2050795a028155cb92ed97667e14b98438b6de40dfc34182adfbf76c5c1bb424.scope - libcontainer container 2050795a028155cb92ed97667e14b98438b6de40dfc34182adfbf76c5c1bb424. Mar 17 17:30:14.763478 containerd[1470]: time="2025-03-17T17:30:14.763418670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"07ca6b72a95fd3ed5c390e7681980d2257518d440394c22bf59f61d5fee7e158\"" Mar 17 17:30:14.764936 kubelet[2177]: E0317 17:30:14.764907 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:14.768110 kubelet[2177]: E0317 17:30:14.767881 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="1.6s" Mar 17 17:30:14.768454 containerd[1470]: time="2025-03-17T17:30:14.768400062Z" level=info msg="CreateContainer within sandbox \"07ca6b72a95fd3ed5c390e7681980d2257518d440394c22bf59f61d5fee7e158\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:30:14.769825 containerd[1470]: time="2025-03-17T17:30:14.769789370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"11c7ad7d6b8aaaf330a00bfebfe5428be56861f7718e057130a7dc69da9191f8\"" Mar 17 17:30:14.770525 kubelet[2177]: E0317 17:30:14.770506 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:14.773794 containerd[1470]: time="2025-03-17T17:30:14.773664101Z" level=info msg="CreateContainer within sandbox \"11c7ad7d6b8aaaf330a00bfebfe5428be56861f7718e057130a7dc69da9191f8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:30:14.776189 
containerd[1470]: time="2025-03-17T17:30:14.776164104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e8d37129d548b6d9527b48040015281b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2050795a028155cb92ed97667e14b98438b6de40dfc34182adfbf76c5c1bb424\"" Mar 17 17:30:14.777026 kubelet[2177]: E0317 17:30:14.777008 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:14.778635 containerd[1470]: time="2025-03-17T17:30:14.778608625Z" level=info msg="CreateContainer within sandbox \"2050795a028155cb92ed97667e14b98438b6de40dfc34182adfbf76c5c1bb424\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:30:14.786203 containerd[1470]: time="2025-03-17T17:30:14.786161041Z" level=info msg="CreateContainer within sandbox \"07ca6b72a95fd3ed5c390e7681980d2257518d440394c22bf59f61d5fee7e158\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fc118180a5b7d1358986e620ed51bf8ea8329d106e8bf1e04c0be3d88e228a9d\"" Mar 17 17:30:14.786735 containerd[1470]: time="2025-03-17T17:30:14.786707881Z" level=info msg="StartContainer for \"fc118180a5b7d1358986e620ed51bf8ea8329d106e8bf1e04c0be3d88e228a9d\"" Mar 17 17:30:14.789890 containerd[1470]: time="2025-03-17T17:30:14.789720731Z" level=info msg="CreateContainer within sandbox \"11c7ad7d6b8aaaf330a00bfebfe5428be56861f7718e057130a7dc69da9191f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"326f22c1f31e4e3ca9c6a5be89350c9b7eb653d77807a06c56c3e70ace4f9b2c\"" Mar 17 17:30:14.790228 containerd[1470]: time="2025-03-17T17:30:14.790185205Z" level=info msg="StartContainer for \"326f22c1f31e4e3ca9c6a5be89350c9b7eb653d77807a06c56c3e70ace4f9b2c\"" Mar 17 17:30:14.792951 containerd[1470]: time="2025-03-17T17:30:14.792907299Z" level=info msg="CreateContainer within sandbox \"2050795a028155cb92ed97667e14b98438b6de40dfc34182adfbf76c5c1bb424\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c89a5780d16fa396aed18bde341b39ea28747e9b9bd745f7e6046e2bd571715c\"" Mar 17 17:30:14.793459 containerd[1470]: time="2025-03-17T17:30:14.793438920Z" level=info msg="StartContainer for \"c89a5780d16fa396aed18bde341b39ea28747e9b9bd745f7e6046e2bd571715c\"" Mar 17 17:30:14.817752 systemd[1]: Started cri-containerd-fc118180a5b7d1358986e620ed51bf8ea8329d106e8bf1e04c0be3d88e228a9d.scope - libcontainer container fc118180a5b7d1358986e620ed51bf8ea8329d106e8bf1e04c0be3d88e228a9d. Mar 17 17:30:14.821510 systemd[1]: Started cri-containerd-326f22c1f31e4e3ca9c6a5be89350c9b7eb653d77807a06c56c3e70ace4f9b2c.scope - libcontainer container 326f22c1f31e4e3ca9c6a5be89350c9b7eb653d77807a06c56c3e70ace4f9b2c. Mar 17 17:30:14.822845 systemd[1]: Started cri-containerd-c89a5780d16fa396aed18bde341b39ea28747e9b9bd745f7e6046e2bd571715c.scope - libcontainer container c89a5780d16fa396aed18bde341b39ea28747e9b9bd745f7e6046e2bd571715c. 
Mar 17 17:30:14.852820 containerd[1470]: time="2025-03-17T17:30:14.852769527Z" level=info msg="StartContainer for \"fc118180a5b7d1358986e620ed51bf8ea8329d106e8bf1e04c0be3d88e228a9d\" returns successfully" Mar 17 17:30:14.874585 containerd[1470]: time="2025-03-17T17:30:14.874520757Z" level=info msg="StartContainer for \"326f22c1f31e4e3ca9c6a5be89350c9b7eb653d77807a06c56c3e70ace4f9b2c\" returns successfully" Mar 17 17:30:14.874807 containerd[1470]: time="2025-03-17T17:30:14.874557985Z" level=info msg="StartContainer for \"c89a5780d16fa396aed18bde341b39ea28747e9b9bd745f7e6046e2bd571715c\" returns successfully" Mar 17 17:30:14.997395 kubelet[2177]: I0317 17:30:14.997173 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:30:14.998074 kubelet[2177]: E0317 17:30:14.997982 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Mar 17 17:30:15.388321 kubelet[2177]: E0317 17:30:15.388167 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:15.391289 kubelet[2177]: E0317 17:30:15.390970 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:15.393117 kubelet[2177]: E0317 17:30:15.393075 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:16.394285 kubelet[2177]: E0317 17:30:16.394229 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:16.567393 kubelet[2177]: E0317 17:30:16.567162 2177 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 17 17:30:16.600379 kubelet[2177]: I0317 17:30:16.600093 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:30:16.633895 kubelet[2177]: I0317 17:30:16.633748 2177 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 17 17:30:16.633895 kubelet[2177]: E0317 17:30:16.633820 2177 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 17 17:30:16.654808 kubelet[2177]: E0317 17:30:16.654649 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:30:16.755513 kubelet[2177]: E0317 17:30:16.755447 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:30:16.856759 kubelet[2177]: E0317 17:30:16.856719 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:30:16.957623 kubelet[2177]: E0317 17:30:16.957492 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:30:17.350336 kubelet[2177]: I0317 17:30:17.349969 2177 apiserver.go:52] "Watching apiserver" Mar 17 17:30:17.365276 kubelet[2177]: I0317 17:30:17.365188 2177 desired_state_of_world_populator.go:154] "Finished 
populating initial desired state of world" Mar 17 17:30:18.609614 systemd[1]: Reloading requested from client PID 2455 ('systemctl') (unit session-7.scope)... Mar 17 17:30:18.609630 systemd[1]: Reloading... Mar 17 17:30:18.667569 zram_generator::config[2493]: No configuration found. Mar 17 17:30:18.753231 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:30:18.816609 systemd[1]: Reloading finished in 206 ms. Mar 17 17:30:18.849490 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:30:18.866666 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:30:18.866861 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:30:18.866911 systemd[1]: kubelet.service: Consumed 1.526s CPU time, 120.9M memory peak, 0B memory swap peak. Mar 17 17:30:18.877861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:30:18.965371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:30:18.969372 (kubelet)[2536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:30:19.001150 kubelet[2536]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:30:19.001150 kubelet[2536]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:30:19.001150 kubelet[2536]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:30:19.001474 kubelet[2536]: I0317 17:30:19.001142 2536 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:30:19.009973 kubelet[2536]: I0317 17:30:19.009409 2536 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:30:19.009973 kubelet[2536]: I0317 17:30:19.009433 2536 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:30:19.009973 kubelet[2536]: I0317 17:30:19.009641 2536 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:30:19.012435 kubelet[2536]: I0317 17:30:19.012402 2536 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:30:19.014797 kubelet[2536]: I0317 17:30:19.014306 2536 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:30:19.017366 kubelet[2536]: E0317 17:30:19.017339 2536 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:30:19.017441 kubelet[2536]: I0317 17:30:19.017369 2536 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Mar 17 17:30:19.019645 kubelet[2536]: I0317 17:30:19.019623 2536 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:30:19.019803 kubelet[2536]: I0317 17:30:19.019761 2536 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:30:19.020320 kubelet[2536]: I0317 17:30:19.019848 2536 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:30:19.021457 kubelet[2536]: I0317 17:30:19.020286 2536 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:30:19.021457 kubelet[2536]: I0317 17:30:19.021353 2536 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:30:19.021457 kubelet[2536]: I0317 17:30:19.021363 2536 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:30:19.021844 kubelet[2536]: I0317 17:30:19.021819 2536 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:30:19.022084 kubelet[2536]: I0317 17:30:19.022029 2536 kubelet.go:408] "Attempting to sync node with API server" Mar 17 17:30:19.022084 kubelet[2536]: I0317 17:30:19.022048 2536 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:30:19.022177 kubelet[2536]: I0317 17:30:19.022069 2536 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:30:19.022227 kubelet[2536]: I0317 17:30:19.022219 2536 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:30:19.023447 kubelet[2536]: I0317 17:30:19.023429 2536 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:30:19.025159 kubelet[2536]: I0317 17:30:19.025135 2536 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:30:19.025705 kubelet[2536]: I0317 17:30:19.025653 2536 server.go:1269] "Started kubelet" Mar 17 17:30:19.027107 
kubelet[2536]: I0317 17:30:19.026915 2536 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:30:19.027427 kubelet[2536]: I0317 17:30:19.027398 2536 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:30:19.027713 kubelet[2536]: I0317 17:30:19.027631 2536 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:30:19.028035 kubelet[2536]: I0317 17:30:19.027917 2536 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:30:19.029053 kubelet[2536]: I0317 17:30:19.029028 2536 server.go:460] "Adding debug handlers to kubelet server" Mar 17 17:30:19.035501 kubelet[2536]: I0317 17:30:19.035330 2536 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:30:19.037467 kubelet[2536]: I0317 17:30:19.037450 2536 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:30:19.038078 kubelet[2536]: I0317 17:30:19.037947 2536 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:30:19.038821 kubelet[2536]: I0317 17:30:19.038805 2536 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:30:19.039155 kubelet[2536]: I0317 17:30:19.038890 2536 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:30:19.039155 kubelet[2536]: I0317 17:30:19.038920 2536 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:30:19.039155 kubelet[2536]: E0317 17:30:19.038957 2536 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:30:19.039841 kubelet[2536]: E0317 17:30:19.039817 2536 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:30:19.040304 kubelet[2536]: I0317 17:30:19.040287 2536 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:30:19.040895 kubelet[2536]: I0317 17:30:19.040877 2536 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:30:19.044810 kubelet[2536]: E0317 17:30:19.044781 2536 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:30:19.052349 kubelet[2536]: I0317 17:30:19.051178 2536 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:30:19.055062 kubelet[2536]: I0317 17:30:19.055040 2536 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:30:19.057565 kubelet[2536]: I0317 17:30:19.057344 2536 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:30:19.083556 kubelet[2536]: I0317 17:30:19.083517 2536 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:30:19.083556 kubelet[2536]: I0317 17:30:19.083546 2536 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:30:19.083556 kubelet[2536]: I0317 17:30:19.083565 2536 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:30:19.083723 kubelet[2536]: I0317 17:30:19.083708 2536 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:30:19.083750 kubelet[2536]: I0317 17:30:19.083724 2536 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:30:19.083750 kubelet[2536]: I0317 17:30:19.083741 2536 policy_none.go:49] "None policy: Start" Mar 17 17:30:19.084323 kubelet[2536]: I0317 17:30:19.084291 2536 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:30:19.084323 kubelet[2536]: I0317 17:30:19.084317 2536 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:30:19.084481 kubelet[2536]: I0317 17:30:19.084462 2536 state_mem.go:75] "Updated machine memory state" Mar 17 17:30:19.087952 kubelet[2536]: I0317 17:30:19.087835 2536 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:30:19.088070 kubelet[2536]: I0317 17:30:19.088009 2536 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:30:19.088070 kubelet[2536]: I0317 17:30:19.088021 2536 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:30:19.088193 kubelet[2536]: I0317 17:30:19.088176 2536 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:30:19.141764 kubelet[2536]: I0317 17:30:19.141661 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8d37129d548b6d9527b48040015281b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e8d37129d548b6d9527b48040015281b\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:30:19.141764 kubelet[2536]: I0317 17:30:19.141697 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:30:19.141764 kubelet[2536]: I0317 17:30:19.141718 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 
17:30:19.141764 kubelet[2536]: I0317 17:30:19.141734 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:30:19.141764 kubelet[2536]: I0317 17:30:19.141752 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8d37129d548b6d9527b48040015281b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e8d37129d548b6d9527b48040015281b\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:30:19.141969 kubelet[2536]: I0317 17:30:19.141767 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8d37129d548b6d9527b48040015281b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e8d37129d548b6d9527b48040015281b\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:30:19.141969 kubelet[2536]: I0317 17:30:19.141781 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:30:19.141969 kubelet[2536]: I0317 17:30:19.141795 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:30:19.141969 kubelet[2536]: I0317 17:30:19.141811 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:30:19.191545 kubelet[2536]: I0317 17:30:19.191502 2536 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:30:19.197645 kubelet[2536]: I0317 17:30:19.197607 2536 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Mar 17 17:30:19.197733 kubelet[2536]: I0317 17:30:19.197681 2536 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 17 17:30:19.447850 kubelet[2536]: E0317 17:30:19.447746 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:19.448567 kubelet[2536]: E0317 17:30:19.448122 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:19.448567 kubelet[2536]: E0317 17:30:19.448295 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:19.611776 sudo[2574]: root : PWD=/home/core 
; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:30:19.612062 sudo[2574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:30:20.023144 kubelet[2536]: I0317 17:30:20.022733 2536 apiserver.go:52] "Watching apiserver" Mar 17 17:30:20.040669 kubelet[2536]: I0317 17:30:20.040621 2536 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:30:20.045738 sudo[2574]: pam_unix(sudo:session): session closed for user root Mar 17 17:30:20.067990 kubelet[2536]: E0317 17:30:20.065863 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:20.067990 kubelet[2536]: E0317 17:30:20.066309 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:20.072670 kubelet[2536]: E0317 17:30:20.072638 2536 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 17:30:20.072862 kubelet[2536]: E0317 17:30:20.072773 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:20.092974 kubelet[2536]: I0317 17:30:20.092866 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.092848938 podStartE2EDuration="1.092848938s" podCreationTimestamp="2025-03-17 17:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:30:20.085706839 +0000 UTC m=+1.112532299" watchObservedRunningTime="2025-03-17 17:30:20.092848938 +0000 UTC m=+1.119674398" Mar 17 17:30:20.103051 kubelet[2536]: I0317 17:30:20.103002 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.102978219 podStartE2EDuration="1.102978219s" podCreationTimestamp="2025-03-17 17:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:30:20.102792314 +0000 UTC m=+1.129617774" watchObservedRunningTime="2025-03-17 17:30:20.102978219 +0000 UTC m=+1.129803639" Mar 17 17:30:20.103186 kubelet[2536]: I0317 17:30:20.103102 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.103096227 podStartE2EDuration="1.103096227s" podCreationTimestamp="2025-03-17 17:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:30:20.093072687 +0000 UTC m=+1.119898107" watchObservedRunningTime="2025-03-17 17:30:20.103096227 +0000 UTC m=+1.129921687" Mar 17 17:30:21.066324 kubelet[2536]: E0317 17:30:21.066295 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:22.385325 sudo[1650]: pam_unix(sudo:session): session closed for user root Mar 17 17:30:22.386567 sshd[1649]: Connection closed by 10.0.0.1 port 51912 Mar 17 
17:30:22.387321 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:22.392164 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:51912.service: Deactivated successfully. Mar 17 17:30:22.394202 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:30:22.394414 systemd[1]: session-7.scope: Consumed 7.154s CPU time, 155.1M memory peak, 0B memory swap peak. Mar 17 17:30:22.395000 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:30:22.396023 systemd-logind[1454]: Removed session 7. Mar 17 17:30:24.579329 kubelet[2536]: E0317 17:30:24.579290 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:24.643399 kubelet[2536]: I0317 17:30:24.643371 2536 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:30:24.643933 containerd[1470]: time="2025-03-17T17:30:24.643840670Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:30:24.644342 kubelet[2536]: I0317 17:30:24.644026 2536 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:30:25.073273 kubelet[2536]: E0317 17:30:25.073156 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:25.371942 systemd[1]: Created slice kubepods-burstable-podc9b858bb_8376_486b_86d0_106d3671368c.slice - libcontainer container kubepods-burstable-podc9b858bb_8376_486b_86d0_106d3671368c.slice. Mar 17 17:30:25.377399 systemd[1]: Created slice kubepods-besteffort-podbe7173c8_f86c_4605_9450_e74cd0e3bc76.slice - libcontainer container kubepods-besteffort-podbe7173c8_f86c_4605_9450_e74cd0e3bc76.slice. 
Mar 17 17:30:25.389435 kubelet[2536]: I0317 17:30:25.389398 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c9b858bb-8376-486b-86d0-106d3671368c-clustermesh-secrets\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.389842 kubelet[2536]: I0317 17:30:25.389678 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-cilium-cgroup\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.389842 kubelet[2536]: I0317 17:30:25.389709 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnhtp\" (UniqueName: \"kubernetes.io/projected/c9b858bb-8376-486b-86d0-106d3671368c-kube-api-access-jnhtp\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.389842 kubelet[2536]: I0317 17:30:25.389761 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be7173c8-f86c-4605-9450-e74cd0e3bc76-kube-proxy\") pod \"kube-proxy-bftd6\" (UID: \"be7173c8-f86c-4605-9450-e74cd0e3bc76\") " pod="kube-system/kube-proxy-bftd6" Mar 17 17:30:25.389842 kubelet[2536]: I0317 17:30:25.389779 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-cni-path\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.389842 kubelet[2536]: I0317 17:30:25.389794 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-etc-cni-netd\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.390678 kubelet[2536]: I0317 17:30:25.390388 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-host-proc-sys-kernel\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.390678 kubelet[2536]: I0317 17:30:25.390425 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-lib-modules\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.390678 kubelet[2536]: I0317 17:30:25.390445 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-hostproc\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.390678 kubelet[2536]: I0317 17:30:25.390463 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/c9b858bb-8376-486b-86d0-106d3671368c-cilium-config-path\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.390678 kubelet[2536]: I0317 17:30:25.390479 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be7173c8-f86c-4605-9450-e74cd0e3bc76-xtables-lock\") pod \"kube-proxy-bftd6\" (UID: \"be7173c8-f86c-4605-9450-e74cd0e3bc76\") " pod="kube-system/kube-proxy-bftd6" Mar 17 17:30:25.390678 kubelet[2536]: I0317 17:30:25.390493 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-xtables-lock\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.391309 kubelet[2536]: I0317 17:30:25.390509 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-host-proc-sys-net\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.391309 kubelet[2536]: I0317 17:30:25.390534 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be7173c8-f86c-4605-9450-e74cd0e3bc76-lib-modules\") pod \"kube-proxy-bftd6\" (UID: \"be7173c8-f86c-4605-9450-e74cd0e3bc76\") " pod="kube-system/kube-proxy-bftd6" Mar 17 17:30:25.391309 kubelet[2536]: I0317 17:30:25.390583 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-cilium-run\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.391309 kubelet[2536]: I0317 17:30:25.390600 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-bpf-maps\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.391309 kubelet[2536]: I0317 17:30:25.390614 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c9b858bb-8376-486b-86d0-106d3671368c-hubble-tls\") pod \"cilium-h7sgk\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " pod="kube-system/cilium-h7sgk" Mar 17 17:30:25.391309 kubelet[2536]: I0317 17:30:25.390628 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhhfp\" (UniqueName: \"kubernetes.io/projected/be7173c8-f86c-4605-9450-e74cd0e3bc76-kube-api-access-qhhfp\") pod \"kube-proxy-bftd6\" (UID: \"be7173c8-f86c-4605-9450-e74cd0e3bc76\") " pod="kube-system/kube-proxy-bftd6" Mar 17 17:30:25.532007 systemd[1]: Created slice kubepods-besteffort-podb217fce8_6cdf_4c9f_8548_8c66257ee38e.slice - libcontainer container kubepods-besteffort-podb217fce8_6cdf_4c9f_8548_8c66257ee38e.slice. 
Mar 17 17:30:25.592586 kubelet[2536]: I0317 17:30:25.592471 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b217fce8-6cdf-4c9f-8548-8c66257ee38e-cilium-config-path\") pod \"cilium-operator-5d85765b45-kqmd6\" (UID: \"b217fce8-6cdf-4c9f-8548-8c66257ee38e\") " pod="kube-system/cilium-operator-5d85765b45-kqmd6" Mar 17 17:30:25.592586 kubelet[2536]: I0317 17:30:25.592521 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmh8t\" (UniqueName: \"kubernetes.io/projected/b217fce8-6cdf-4c9f-8548-8c66257ee38e-kube-api-access-lmh8t\") pod \"cilium-operator-5d85765b45-kqmd6\" (UID: \"b217fce8-6cdf-4c9f-8548-8c66257ee38e\") " pod="kube-system/cilium-operator-5d85765b45-kqmd6" Mar 17 17:30:25.675726 kubelet[2536]: E0317 17:30:25.675624 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:25.676898 containerd[1470]: time="2025-03-17T17:30:25.676767758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h7sgk,Uid:c9b858bb-8376-486b-86d0-106d3671368c,Namespace:kube-system,Attempt:0,}" Mar 17 17:30:25.688597 kubelet[2536]: E0317 17:30:25.688566 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:25.689371 containerd[1470]: time="2025-03-17T17:30:25.689072070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bftd6,Uid:be7173c8-f86c-4605-9450-e74cd0e3bc76,Namespace:kube-system,Attempt:0,}" Mar 17 17:30:25.696563 containerd[1470]: time="2025-03-17T17:30:25.696073046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:30:25.696563 containerd[1470]: time="2025-03-17T17:30:25.696377398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:30:25.696563 containerd[1470]: time="2025-03-17T17:30:25.696394946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:25.697562 containerd[1470]: time="2025-03-17T17:30:25.696560033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:25.713978 containerd[1470]: time="2025-03-17T17:30:25.713162129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:30:25.713978 containerd[1470]: time="2025-03-17T17:30:25.713216132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:30:25.713978 containerd[1470]: time="2025-03-17T17:30:25.713230922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:25.713978 containerd[1470]: time="2025-03-17T17:30:25.713311547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:25.720700 systemd[1]: Started cri-containerd-7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105.scope - libcontainer container 7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105. Mar 17 17:30:25.726279 systemd[1]: Started cri-containerd-3a316875d51c4285d534dfc596cd1da6cfd72d2810ccdbf72af5b44511e1b109.scope - libcontainer container 3a316875d51c4285d534dfc596cd1da6cfd72d2810ccdbf72af5b44511e1b109. Mar 17 17:30:25.747948 containerd[1470]: time="2025-03-17T17:30:25.747904669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h7sgk,Uid:c9b858bb-8376-486b-86d0-106d3671368c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\"" Mar 17 17:30:25.748890 kubelet[2536]: E0317 17:30:25.748797 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:25.750336 containerd[1470]: time="2025-03-17T17:30:25.750308426Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:30:25.762308 containerd[1470]: time="2025-03-17T17:30:25.762270573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bftd6,Uid:be7173c8-f86c-4605-9450-e74cd0e3bc76,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a316875d51c4285d534dfc596cd1da6cfd72d2810ccdbf72af5b44511e1b109\"" Mar 17 17:30:25.763524 kubelet[2536]: E0317 17:30:25.763302 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:25.765799 containerd[1470]: time="2025-03-17T17:30:25.765763666Z" level=info msg="CreateContainer within sandbox \"3a316875d51c4285d534dfc596cd1da6cfd72d2810ccdbf72af5b44511e1b109\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:30:25.805912 containerd[1470]: time="2025-03-17T17:30:25.805854591Z" level=info msg="CreateContainer within sandbox \"3a316875d51c4285d534dfc596cd1da6cfd72d2810ccdbf72af5b44511e1b109\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3dde9692b73c7ff68562e32b7b2d6775492e951cf70a10656236f0d23371533a\"" Mar 17 17:30:25.807302 containerd[1470]: time="2025-03-17T17:30:25.806702052Z" level=info msg="StartContainer for \"3dde9692b73c7ff68562e32b7b2d6775492e951cf70a10656236f0d23371533a\"" Mar 17 17:30:25.832713 systemd[1]: Started cri-containerd-3dde9692b73c7ff68562e32b7b2d6775492e951cf70a10656236f0d23371533a.scope - libcontainer container 3dde9692b73c7ff68562e32b7b2d6775492e951cf70a10656236f0d23371533a. Mar 17 17:30:25.834160 kubelet[2536]: E0317 17:30:25.833919 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:25.834405 containerd[1470]: time="2025-03-17T17:30:25.834356955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kqmd6,Uid:b217fce8-6cdf-4c9f-8548-8c66257ee38e,Namespace:kube-system,Attempt:0,}" Mar 17 17:30:25.864622 containerd[1470]: time="2025-03-17T17:30:25.864390913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:30:25.864622 containerd[1470]: time="2025-03-17T17:30:25.864450472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:30:25.864622 containerd[1470]: time="2025-03-17T17:30:25.864465981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:25.864622 containerd[1470]: time="2025-03-17T17:30:25.864548125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:25.866234 containerd[1470]: time="2025-03-17T17:30:25.866197838Z" level=info msg="StartContainer for \"3dde9692b73c7ff68562e32b7b2d6775492e951cf70a10656236f0d23371533a\" returns successfully" Mar 17 17:30:25.891735 systemd[1]: Started cri-containerd-e95ea00d0755b71e2249e86bccf33ae2703d5d950d37cd4dd65ad76150643b4c.scope - libcontainer container e95ea00d0755b71e2249e86bccf33ae2703d5d950d37cd4dd65ad76150643b4c. Mar 17 17:30:25.925256 containerd[1470]: time="2025-03-17T17:30:25.925218388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kqmd6,Uid:b217fce8-6cdf-4c9f-8548-8c66257ee38e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e95ea00d0755b71e2249e86bccf33ae2703d5d950d37cd4dd65ad76150643b4c\"" Mar 17 17:30:25.926130 kubelet[2536]: E0317 17:30:25.926049 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:26.076212 kubelet[2536]: E0317 17:30:26.076182 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:26.793817 kubelet[2536]: E0317 17:30:26.793780 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:26.809576 kubelet[2536]: I0317 17:30:26.809381 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bftd6" podStartSLOduration=1.809365685 podStartE2EDuration="1.809365685s" podCreationTimestamp="2025-03-17 17:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:30:26.085620659 +0000 UTC m=+7.112446199" watchObservedRunningTime="2025-03-17 17:30:26.809365685 +0000 UTC m=+7.836191105" Mar 17 17:30:27.081428 kubelet[2536]: E0317 17:30:27.081320 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:28.438477 kubelet[2536]: E0317 17:30:28.438369 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:29.084581 kubelet[2536]: E0317 17:30:29.084313 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:35.122713 update_engine[1460]: I20250317 17:30:35.122619 1460 update_attempter.cc:509] Updating boot flags... 
Mar 17 17:30:35.163979 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2918) Mar 17 17:30:35.473693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927503005.mount: Deactivated successfully. Mar 17 17:30:36.826711 containerd[1470]: time="2025-03-17T17:30:36.826653465Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:36.827268 containerd[1470]: time="2025-03-17T17:30:36.827218515Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 17 17:30:36.828224 containerd[1470]: time="2025-03-17T17:30:36.828196867Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:36.829926 containerd[1470]: time="2025-03-17T17:30:36.829897016Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.079552291s" Mar 17 17:30:36.829993 containerd[1470]: time="2025-03-17T17:30:36.829932884Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 17:30:36.841783 containerd[1470]: time="2025-03-17T17:30:36.841742917Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:30:36.854323 containerd[1470]: time="2025-03-17T17:30:36.854246758Z" level=info msg="CreateContainer within sandbox \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:30:36.923530 containerd[1470]: time="2025-03-17T17:30:36.923476187Z" level=info msg="CreateContainer within sandbox \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545\"" Mar 17 17:30:36.924141 containerd[1470]: time="2025-03-17T17:30:36.923949228Z" level=info msg="StartContainer for \"18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545\"" Mar 17 17:30:36.950741 systemd[1]: Started cri-containerd-18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545.scope - libcontainer container 18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545. Mar 17 17:30:36.977164 containerd[1470]: time="2025-03-17T17:30:36.972256004Z" level=info msg="StartContainer for \"18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545\" returns successfully" Mar 17 17:30:37.038997 systemd[1]: cri-containerd-18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545.scope: Deactivated successfully. 
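The pull duration containerd reports here is internally consistent: the PullImage request for the cilium image was logged at 17:30:25.750308426 and the Pulled event at 17:30:36.829897016, an interval of about 11.0796 s against the reported 11.079552291 s (the small residual is just the gap between containerd measuring and logging). A quick check using the timestamps copied from the entries above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the PullImage and Pulled entries.
	start, _ := time.Parse(time.RFC3339Nano, "2025-03-17T17:30:25.750308426Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-03-17T17:30:36.829897016Z")
	fmt.Println(done.Sub(start)) // 11.07958859s vs. the reported 11.079552291s
}
```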
Mar 17 17:30:37.202499 kubelet[2536]: E0317 17:30:37.201965 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:37.246191 containerd[1470]: time="2025-03-17T17:30:37.240495433Z" level=info msg="shim disconnected" id=18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545 namespace=k8s.io Mar 17 17:30:37.246191 containerd[1470]: time="2025-03-17T17:30:37.246187320Z" level=warning msg="cleaning up after shim disconnected" id=18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545 namespace=k8s.io Mar 17 17:30:37.246191 containerd[1470]: time="2025-03-17T17:30:37.246202396Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:30:37.917491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545-rootfs.mount: Deactivated successfully. Mar 17 17:30:38.112771 kubelet[2536]: E0317 17:30:38.112479 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:38.116129 containerd[1470]: time="2025-03-17T17:30:38.115954280Z" level=info msg="CreateContainer within sandbox \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:30:38.137091 containerd[1470]: time="2025-03-17T17:30:38.136965958Z" level=info msg="CreateContainer within sandbox \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68\"" Mar 17 17:30:38.138634 containerd[1470]: time="2025-03-17T17:30:38.138590118Z" level=info msg="StartContainer for \"1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68\"" Mar 17 17:30:38.166743 systemd[1]: Started cri-containerd-1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68.scope - libcontainer container 1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68. Mar 17 17:30:38.193651 containerd[1470]: time="2025-03-17T17:30:38.193153813Z" level=info msg="StartContainer for \"1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68\" returns successfully" Mar 17 17:30:38.208289 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:30:38.208522 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:30:38.209018 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:30:38.215849 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:30:38.216043 systemd[1]: cri-containerd-1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68.scope: Deactivated successfully. 
Mar 17 17:30:38.235134 containerd[1470]: time="2025-03-17T17:30:38.235077358Z" level=info msg="shim disconnected" id=1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68 namespace=k8s.io Mar 17 17:30:38.235432 containerd[1470]: time="2025-03-17T17:30:38.235412659Z" level=warning msg="cleaning up after shim disconnected" id=1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68 namespace=k8s.io Mar 17 17:30:38.235493 containerd[1470]: time="2025-03-17T17:30:38.235480599Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:30:38.241177 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:30:38.915426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68-rootfs.mount: Deactivated successfully. Mar 17 17:30:39.119815 kubelet[2536]: E0317 17:30:39.119638 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:39.127446 containerd[1470]: time="2025-03-17T17:30:39.127176017Z" level=info msg="CreateContainer within sandbox \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:30:39.141447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1958160215.mount: Deactivated successfully. Mar 17 17:30:39.144037 containerd[1470]: time="2025-03-17T17:30:39.143979847Z" level=info msg="CreateContainer within sandbox \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa\"" Mar 17 17:30:39.145959 containerd[1470]: time="2025-03-17T17:30:39.144654340Z" level=info msg="StartContainer for \"072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa\"" Mar 17 17:30:39.170811 systemd[1]: Started cri-containerd-072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa.scope - libcontainer container 072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa. Mar 17 17:30:39.210611 containerd[1470]: time="2025-03-17T17:30:39.210535989Z" level=info msg="StartContainer for \"072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa\" returns successfully" Mar 17 17:30:39.221856 systemd[1]: cri-containerd-072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa.scope: Deactivated successfully. Mar 17 17:30:39.246192 containerd[1470]: time="2025-03-17T17:30:39.246127940Z" level=info msg="shim disconnected" id=072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa namespace=k8s.io Mar 17 17:30:39.246192 containerd[1470]: time="2025-03-17T17:30:39.246179606Z" level=warning msg="cleaning up after shim disconnected" id=072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa namespace=k8s.io Mar 17 17:30:39.246192 containerd[1470]: time="2025-03-17T17:30:39.246188484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:30:39.915500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa-rootfs.mount: Deactivated successfully. 
Mar 17 17:30:40.122978 kubelet[2536]: E0317 17:30:40.122931 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:40.127002 containerd[1470]: time="2025-03-17T17:30:40.126956624Z" level=info msg="CreateContainer within sandbox \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:30:40.137157 containerd[1470]: time="2025-03-17T17:30:40.137112230Z" level=info msg="CreateContainer within sandbox \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95\"" Mar 17 17:30:40.138826 containerd[1470]: time="2025-03-17T17:30:40.138797152Z" level=info msg="StartContainer for \"f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95\"" Mar 17 17:30:40.172700 systemd[1]: Started cri-containerd-f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95.scope - libcontainer container f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95. Mar 17 17:30:40.191505 systemd[1]: cri-containerd-f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95.scope: Deactivated successfully. Mar 17 17:30:40.193497 containerd[1470]: time="2025-03-17T17:30:40.193452614Z" level=info msg="StartContainer for \"f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95\" returns successfully" Mar 17 17:30:40.215654 containerd[1470]: time="2025-03-17T17:30:40.215586552Z" level=info msg="shim disconnected" id=f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95 namespace=k8s.io Mar 17 17:30:40.215654 containerd[1470]: time="2025-03-17T17:30:40.215641657Z" level=warning msg="cleaning up after shim disconnected" id=f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95 namespace=k8s.io Mar 17 17:30:40.215654 containerd[1470]: time="2025-03-17T17:30:40.215649895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:30:40.915562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95-rootfs.mount: Deactivated successfully. Mar 17 17:30:40.999642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983737026.mount: Deactivated successfully. 
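At this point each cilium init container (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) has followed the same lifecycle: CreateContainer, StartContainer, scope deactivation, shim disconnect, then cleanup of the matching rootfs and tmpmounts mount units. The mount-unit names systemd prints are just escaped filesystem paths: "/" becomes "-" and reserved characters become \xNN, which is why the literal "-" in containerd-mount2983737026 appears as \x2d. A simplified sketch of that escaping (systemd-escape --path does the real thing, with a slightly larger allowed-character set):

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath approximates systemd's path escaping for unit names:
// trim slashes at the ends, turn "/" into "-", keep [A-Za-z0-9_]
// and non-leading ".", and hex-escape everything else as \xNN.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i, r := range p {
		switch {
		case r == '/':
			b.WriteByte('-')
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z',
			r >= '0' && r <= '9', r == '_', r == '.' && i != 0:
			b.WriteRune(r)
		default:
			fmt.Fprintf(&b, `\x%02x`, r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount2983737026") + ".mount")
	// var-lib-containerd-tmpmounts-containerd\x2dmount2983737026.mount
}
```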
Mar 17 17:30:41.162271 kubelet[2536]: E0317 17:30:41.162234 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:41.170471 containerd[1470]: time="2025-03-17T17:30:41.169838249Z" level=info msg="CreateContainer within sandbox \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:30:41.227284 containerd[1470]: time="2025-03-17T17:30:41.227236889Z" level=info msg="CreateContainer within sandbox \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\"" Mar 17 17:30:41.227965 containerd[1470]: time="2025-03-17T17:30:41.227943198Z" level=info msg="StartContainer for \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\"" Mar 17 17:30:41.254718 systemd[1]: Started cri-containerd-1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7.scope - libcontainer container 1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7. Mar 17 17:30:41.300339 containerd[1470]: time="2025-03-17T17:30:41.300291962Z" level=info msg="StartContainer for \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\" returns successfully" Mar 17 17:30:41.479473 kubelet[2536]: I0317 17:30:41.479177 2536 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 17:30:41.555633 systemd[1]: Created slice kubepods-burstable-pod527b1255_108e_4a7c_a0e9_eecbc925cda8.slice - libcontainer container kubepods-burstable-pod527b1255_108e_4a7c_a0e9_eecbc925cda8.slice. Mar 17 17:30:41.562829 systemd[1]: Created slice kubepods-burstable-pod16ed53b1_4c77_48a1_b301_1bc4900f0fec.slice - libcontainer container kubepods-burstable-pod16ed53b1_4c77_48a1_b301_1bc4900f0fec.slice. 
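The kubepods-burstable-pod…slice units created here follow kubelet's systemd cgroup naming: "-" is the slice hierarchy separator, so the pod UID's dashes are rewritten to underscores and the unit is nested under the QoS class. A small sketch of the mapping (illustrative; kubelet's container-manager code does the real conversion):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the systemd slice unit for a pod, as seen above:
// since "-" separates slice hierarchy levels, dashes inside the pod
// UID are replaced with underscores.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "527b1255-108e-4a7c-a0e9-eecbc925cda8"))
	// kubepods-burstable-pod527b1255_108e_4a7c_a0e9_eecbc925cda8.slice
}
```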
Mar 17 17:30:41.614579 containerd[1470]: time="2025-03-17T17:30:41.614512983Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:41.615032 containerd[1470]: time="2025-03-17T17:30:41.614958554Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 17 17:30:41.615919 containerd[1470]: time="2025-03-17T17:30:41.615892807Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:30:41.617713 containerd[1470]: time="2025-03-17T17:30:41.617681492Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.775892189s" Mar 17 17:30:41.617838 containerd[1470]: time="2025-03-17T17:30:41.617818899Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 17:30:41.620391 containerd[1470]: time="2025-03-17T17:30:41.620360081Z" level=info msg="CreateContainer within sandbox \"e95ea00d0755b71e2249e86bccf33ae2703d5d950d37cd4dd65ad76150643b4c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:30:41.632319 containerd[1470]: time="2025-03-17T17:30:41.632270184Z" level=info msg="CreateContainer within sandbox \"e95ea00d0755b71e2249e86bccf33ae2703d5d950d37cd4dd65ad76150643b4c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\"" Mar 17 17:30:41.632891 containerd[1470]: time="2025-03-17T17:30:41.632863640Z" level=info msg="StartContainer for \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\"" Mar 17 17:30:41.657572 kubelet[2536]: I0317 17:30:41.657447 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w99fz\" (UniqueName: \"kubernetes.io/projected/527b1255-108e-4a7c-a0e9-eecbc925cda8-kube-api-access-w99fz\") pod \"coredns-6f6b679f8f-sb9cc\" (UID: \"527b1255-108e-4a7c-a0e9-eecbc925cda8\") " pod="kube-system/coredns-6f6b679f8f-sb9cc" Mar 17 17:30:41.657572 kubelet[2536]: I0317 17:30:41.657507 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8nw8\" (UniqueName: \"kubernetes.io/projected/16ed53b1-4c77-48a1-b301-1bc4900f0fec-kube-api-access-c8nw8\") pod \"coredns-6f6b679f8f-xxd2c\" (UID: \"16ed53b1-4c77-48a1-b301-1bc4900f0fec\") " pod="kube-system/coredns-6f6b679f8f-xxd2c" Mar 17 17:30:41.657572 kubelet[2536]: I0317 17:30:41.657533 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16ed53b1-4c77-48a1-b301-1bc4900f0fec-config-volume\") pod \"coredns-6f6b679f8f-xxd2c\" (UID: 
\"16ed53b1-4c77-48a1-b301-1bc4900f0fec\") " pod="kube-system/coredns-6f6b679f8f-xxd2c" Mar 17 17:30:41.657572 kubelet[2536]: I0317 17:30:41.657570 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/527b1255-108e-4a7c-a0e9-eecbc925cda8-config-volume\") pod \"coredns-6f6b679f8f-sb9cc\" (UID: \"527b1255-108e-4a7c-a0e9-eecbc925cda8\") " pod="kube-system/coredns-6f6b679f8f-sb9cc" Mar 17 17:30:41.677814 systemd[1]: Started cri-containerd-1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45.scope - libcontainer container 1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45. Mar 17 17:30:41.707976 containerd[1470]: time="2025-03-17T17:30:41.707903590Z" level=info msg="StartContainer for \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\" returns successfully" Mar 17 17:30:41.863427 kubelet[2536]: E0317 17:30:41.859224 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:41.863579 containerd[1470]: time="2025-03-17T17:30:41.859923138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sb9cc,Uid:527b1255-108e-4a7c-a0e9-eecbc925cda8,Namespace:kube-system,Attempt:0,}" Mar 17 17:30:41.877104 kubelet[2536]: E0317 17:30:41.876794 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:41.881240 containerd[1470]: time="2025-03-17T17:30:41.877321307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xxd2c,Uid:16ed53b1-4c77-48a1-b301-1bc4900f0fec,Namespace:kube-system,Attempt:0,}" Mar 17 17:30:42.165369 kubelet[2536]: E0317 17:30:42.165257 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:42.169350 kubelet[2536]: E0317 17:30:42.168557 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:42.198857 kubelet[2536]: I0317 17:30:42.198458 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h7sgk" podStartSLOduration=6.106842007 podStartE2EDuration="17.198439764s" podCreationTimestamp="2025-03-17 17:30:25 +0000 UTC" firstStartedPulling="2025-03-17 17:30:25.749851619 +0000 UTC m=+6.776677039" lastFinishedPulling="2025-03-17 17:30:36.841449336 +0000 UTC m=+17.868274796" observedRunningTime="2025-03-17 17:30:42.194280832 +0000 UTC m=+23.221106292" watchObservedRunningTime="2025-03-17 17:30:42.198439764 +0000 UTC m=+23.225265224" Mar 17 17:30:42.199056 kubelet[2536]: I0317 17:30:42.199025 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-kqmd6" podStartSLOduration=1.5067919060000001 podStartE2EDuration="17.199015193s" podCreationTimestamp="2025-03-17 17:30:25 +0000 UTC" firstStartedPulling="2025-03-17 17:30:25.926865783 +0000 UTC m=+6.953691243" lastFinishedPulling="2025-03-17 17:30:41.61908911 +0000 UTC m=+22.645914530" observedRunningTime="2025-03-17 17:30:42.17867503 +0000 UTC m=+23.205500490" watchObservedRunningTime="2025-03-17 17:30:42.199015193 +0000 UTC 
m=+23.225840613" Mar 17 17:30:43.169453 kubelet[2536]: E0317 17:30:43.169405 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:43.170061 kubelet[2536]: E0317 17:30:43.169695 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:44.172574 kubelet[2536]: E0317 17:30:44.172128 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:45.174151 kubelet[2536]: E0317 17:30:45.174105 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:45.688169 systemd-networkd[1397]: cilium_host: Link UP Mar 17 17:30:45.688309 systemd-networkd[1397]: cilium_net: Link UP Mar 17 17:30:45.688312 systemd-networkd[1397]: cilium_net: Gained carrier Mar 17 17:30:45.688445 systemd-networkd[1397]: cilium_host: Gained carrier Mar 17 17:30:45.747697 systemd-networkd[1397]: cilium_net: Gained IPv6LL Mar 17 17:30:45.789063 systemd-networkd[1397]: cilium_vxlan: Link UP Mar 17 17:30:45.789072 systemd-networkd[1397]: cilium_vxlan: Gained carrier Mar 17 17:30:46.122579 kernel: NET: Registered PF_ALG protocol family Mar 17 17:30:46.666188 systemd-networkd[1397]: cilium_host: Gained IPv6LL Mar 17 17:30:46.781511 systemd-networkd[1397]: lxc_health: Link UP Mar 17 17:30:46.790622 systemd-networkd[1397]: lxc_health: Gained carrier Mar 17 17:30:47.062978 systemd-networkd[1397]: lxcfffa66877eb2: Link UP Mar 17 17:30:47.066655 kernel: eth0: renamed from tmp0797f Mar 17 17:30:47.069975 systemd-networkd[1397]: lxc0d3a54e4d0ec: Link UP Mar 17 17:30:47.080864 systemd-networkd[1397]: lxcfffa66877eb2: Gained carrier Mar 17 17:30:47.081568 kernel: eth0: renamed from tmpd2f8a Mar 17 17:30:47.091377 systemd-networkd[1397]: lxc0d3a54e4d0ec: Gained carrier Mar 17 17:30:47.397341 systemd[1]: Started sshd@7-10.0.0.80:22-10.0.0.1:47882.service - OpenSSH per-connection server daemon (10.0.0.1:47882). Mar 17 17:30:47.444824 sshd[3771]: Accepted publickey for core from 10.0.0.1 port 47882 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:30:47.446248 sshd-session[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:47.453599 systemd-logind[1454]: New session 8 of user core. Mar 17 17:30:47.461719 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:30:47.614575 sshd[3774]: Connection closed by 10.0.0.1 port 47882 Mar 17 17:30:47.613742 sshd-session[3771]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:47.616389 systemd[1]: sshd@7-10.0.0.80:22-10.0.0.1:47882.service: Deactivated successfully. Mar 17 17:30:47.619309 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:30:47.620881 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:30:47.622107 systemd-logind[1454]: Removed session 8. 
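The systemd-networkd entries above track cilium bringing up its datapath devices: cilium_host and cilium_net gain carrier together because they are the two ends of one veth pair, cilium_vxlan is the overlay device, and each lxc* interface is the host side of a per-endpoint veth whose peer is renamed to eth0 inside the pod (the tmp0797f in the rename message matches the ID prefix of the coredns sandbox created below). A rough netlink equivalent of the first two devices, sketched with github.com/vishvananda/netlink rather than cilium's own datapath code:

```go
package main

import "github.com/vishvananda/netlink"

func main() {
	// cilium_host <-> cilium_net: one veth pair, which is why the log
	// shows both links coming up in the same instant.
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "cilium_host"},
		PeerName:  "cilium_net",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		panic(err)
	}

	// cilium_vxlan: the tunnel device (cilium runs it in metadata/flow
	// mode; assumed here, not visible in the log itself).
	vx := &netlink.Vxlan{
		LinkAttrs: netlink.LinkAttrs{Name: "cilium_vxlan"},
		FlowBased: true,
	}
	if err := netlink.LinkAdd(vx); err != nil {
		panic(err)
	}
}
```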
Mar 17 17:30:47.626689 systemd-networkd[1397]: cilium_vxlan: Gained IPv6LL Mar 17 17:30:47.688465 kubelet[2536]: E0317 17:30:47.688167 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:48.137960 systemd-networkd[1397]: lxc_health: Gained IPv6LL Mar 17 17:30:48.179204 kubelet[2536]: E0317 17:30:48.179166 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:48.843032 systemd-networkd[1397]: lxcfffa66877eb2: Gained IPv6LL Mar 17 17:30:48.905766 systemd-networkd[1397]: lxc0d3a54e4d0ec: Gained IPv6LL Mar 17 17:30:50.874377 containerd[1470]: time="2025-03-17T17:30:50.874279887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:30:50.874854 containerd[1470]: time="2025-03-17T17:30:50.874684991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:30:50.874854 containerd[1470]: time="2025-03-17T17:30:50.874786258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:30:50.874854 containerd[1470]: time="2025-03-17T17:30:50.874821453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:50.875008 containerd[1470]: time="2025-03-17T17:30:50.874939597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:50.875008 containerd[1470]: time="2025-03-17T17:30:50.874354556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:30:50.875452 containerd[1470]: time="2025-03-17T17:30:50.875393975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:50.875918 containerd[1470]: time="2025-03-17T17:30:50.875862431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:30:50.890173 systemd[1]: run-containerd-runc-k8s.io-d2f8a273278368a031873c18ef001928d8f4ed082dde2d6c61da5813c9eeeb7a-runc.SyCkki.mount: Deactivated successfully. Mar 17 17:30:50.907752 systemd[1]: Started cri-containerd-0797f0a48154d0cd73ef18d0cd903f2ad23369973eb563f9356883f464aef2fc.scope - libcontainer container 0797f0a48154d0cd73ef18d0cd903f2ad23369973eb563f9356883f464aef2fc. Mar 17 17:30:50.909165 systemd[1]: Started cri-containerd-d2f8a273278368a031873c18ef001928d8f4ed082dde2d6c61da5813c9eeeb7a.scope - libcontainer container d2f8a273278368a031873c18ef001928d8f4ed082dde2d6c61da5813c9eeeb7a. 
Mar 17 17:30:50.919278 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:30:50.923472 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:30:50.936447 containerd[1470]: time="2025-03-17T17:30:50.936407075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xxd2c,Uid:16ed53b1-4c77-48a1-b301-1bc4900f0fec,Namespace:kube-system,Attempt:0,} returns sandbox id \"0797f0a48154d0cd73ef18d0cd903f2ad23369973eb563f9356883f464aef2fc\"" Mar 17 17:30:50.939010 kubelet[2536]: E0317 17:30:50.937943 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:50.941698 containerd[1470]: time="2025-03-17T17:30:50.941652321Z" level=info msg="CreateContainer within sandbox \"0797f0a48154d0cd73ef18d0cd903f2ad23369973eb563f9356883f464aef2fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:30:50.947798 containerd[1470]: time="2025-03-17T17:30:50.947764410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sb9cc,Uid:527b1255-108e-4a7c-a0e9-eecbc925cda8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2f8a273278368a031873c18ef001928d8f4ed082dde2d6c61da5813c9eeeb7a\"" Mar 17 17:30:50.948572 kubelet[2536]: E0317 17:30:50.948410 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:50.951137 containerd[1470]: time="2025-03-17T17:30:50.951082598Z" level=info msg="CreateContainer within sandbox \"d2f8a273278368a031873c18ef001928d8f4ed082dde2d6c61da5813c9eeeb7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:30:50.963290 containerd[1470]: time="2025-03-17T17:30:50.963244224Z" level=info msg="CreateContainer within sandbox \"0797f0a48154d0cd73ef18d0cd903f2ad23369973eb563f9356883f464aef2fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e8c934283703a6ea6e536a600d019f58108a682e483d622d2ffdb0f0a82f8b32\"" Mar 17 17:30:50.964803 containerd[1470]: time="2025-03-17T17:30:50.963961966Z" level=info msg="StartContainer for \"e8c934283703a6ea6e536a600d019f58108a682e483d622d2ffdb0f0a82f8b32\"" Mar 17 17:30:50.969612 containerd[1470]: time="2025-03-17T17:30:50.969516091Z" level=info msg="CreateContainer within sandbox \"d2f8a273278368a031873c18ef001928d8f4ed082dde2d6c61da5813c9eeeb7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8da4f9a0773452500cbad00f06bd3544fcfa2d7de914b454600bbd1c9b272af3\"" Mar 17 17:30:50.971326 containerd[1470]: time="2025-03-17T17:30:50.970520034Z" level=info msg="StartContainer for \"8da4f9a0773452500cbad00f06bd3544fcfa2d7de914b454600bbd1c9b272af3\"" Mar 17 17:30:50.992748 systemd[1]: Started cri-containerd-e8c934283703a6ea6e536a600d019f58108a682e483d622d2ffdb0f0a82f8b32.scope - libcontainer container e8c934283703a6ea6e536a600d019f58108a682e483d622d2ffdb0f0a82f8b32. Mar 17 17:30:50.995565 systemd[1]: Started cri-containerd-8da4f9a0773452500cbad00f06bd3544fcfa2d7de914b454600bbd1c9b272af3.scope - libcontainer container 8da4f9a0773452500cbad00f06bd3544fcfa2d7de914b454600bbd1c9b272af3. 
Mar 17 17:30:51.040105 containerd[1470]: time="2025-03-17T17:30:51.040053271Z" level=info msg="StartContainer for \"e8c934283703a6ea6e536a600d019f58108a682e483d622d2ffdb0f0a82f8b32\" returns successfully" Mar 17 17:30:51.040251 containerd[1470]: time="2025-03-17T17:30:51.040057910Z" level=info msg="StartContainer for \"8da4f9a0773452500cbad00f06bd3544fcfa2d7de914b454600bbd1c9b272af3\" returns successfully" Mar 17 17:30:51.189342 kubelet[2536]: E0317 17:30:51.188271 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:51.190617 kubelet[2536]: E0317 17:30:51.190291 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:51.201157 kubelet[2536]: I0317 17:30:51.201100 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-sb9cc" podStartSLOduration=26.201085054 podStartE2EDuration="26.201085054s" podCreationTimestamp="2025-03-17 17:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:30:51.200152893 +0000 UTC m=+32.226978353" watchObservedRunningTime="2025-03-17 17:30:51.201085054 +0000 UTC m=+32.227910514" Mar 17 17:30:51.227205 kubelet[2536]: I0317 17:30:51.226436 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xxd2c" podStartSLOduration=26.226420302 podStartE2EDuration="26.226420302s" podCreationTimestamp="2025-03-17 17:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:30:51.218341693 +0000 UTC m=+32.245167153" watchObservedRunningTime="2025-03-17 17:30:51.226420302 +0000 UTC m=+32.253245722" Mar 17 17:30:52.192060 kubelet[2536]: E0317 17:30:52.191984 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:52.192060 kubelet[2536]: E0317 17:30:52.192052 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:52.624255 systemd[1]: Started sshd@8-10.0.0.80:22-10.0.0.1:59160.service - OpenSSH per-connection server daemon (10.0.0.1:59160). Mar 17 17:30:52.669158 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 59160 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:30:52.670868 sshd-session[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:52.675393 systemd-logind[1454]: New session 9 of user core. Mar 17 17:30:52.697762 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:30:52.812635 sshd[3972]: Connection closed by 10.0.0.1 port 59160 Mar 17 17:30:52.812998 sshd-session[3970]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:52.816570 systemd[1]: sshd@8-10.0.0.80:22-10.0.0.1:59160.service: Deactivated successfully. Mar 17 17:30:52.818292 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:30:52.818940 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. 
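The pod_startup_latency_tracker numbers scattered through this log are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (26.201085054 s for coredns-6f6b679f8f-sb9cc above), and podStartSLOduration additionally subtracts time spent pulling images, which is why cilium-h7sgk earlier reported 6.106842007 s against a 17.198439764 s E2E. Re-deriving the cilium numbers from the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the cilium-h7sgk startup-latency entry.
	created := mustParse("2025-03-17 17:30:25 +0000 UTC")
	running := mustParse("2025-03-17 17:30:42.198439764 +0000 UTC")
	pullStart := mustParse("2025-03-17 17:30:25.749851619 +0000 UTC")
	pullEnd := mustParse("2025-03-17 17:30:36.841449336 +0000 UTC")

	e2e := running.Sub(created)         // 17.198439764s, as logged
	slo := e2e - pullEnd.Sub(pullStart) // ~6.106842047s vs. logged 6.106842007s
	fmt.Println(e2e, slo)               // the ~40ns residual is measurement skew
}
```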
Mar 17 17:30:52.819755 systemd-logind[1454]: Removed session 9. Mar 17 17:30:53.193223 kubelet[2536]: E0317 17:30:53.193133 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:53.193223 kubelet[2536]: E0317 17:30:53.193221 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:30:57.828112 systemd[1]: Started sshd@9-10.0.0.80:22-10.0.0.1:59174.service - OpenSSH per-connection server daemon (10.0.0.1:59174). Mar 17 17:30:57.870037 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 59174 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:30:57.871389 sshd-session[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:57.874865 systemd-logind[1454]: New session 10 of user core. Mar 17 17:30:57.883693 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:30:57.995445 sshd[3992]: Connection closed by 10.0.0.1 port 59174 Mar 17 17:30:57.996030 sshd-session[3990]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:57.999899 systemd[1]: sshd@9-10.0.0.80:22-10.0.0.1:59174.service: Deactivated successfully. Mar 17 17:30:58.002400 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:30:58.003249 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:30:58.004443 systemd-logind[1454]: Removed session 10. Mar 17 17:31:03.006490 systemd[1]: Started sshd@10-10.0.0.80:22-10.0.0.1:59176.service - OpenSSH per-connection server daemon (10.0.0.1:59176). Mar 17 17:31:03.050143 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 59176 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:03.051567 sshd-session[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:03.055901 systemd-logind[1454]: New session 11 of user core. Mar 17 17:31:03.071743 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:31:03.198246 sshd[4009]: Connection closed by 10.0.0.1 port 59176 Mar 17 17:31:03.198641 sshd-session[4007]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:03.203077 systemd[1]: sshd@10-10.0.0.80:22-10.0.0.1:59176.service: Deactivated successfully. Mar 17 17:31:03.204806 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:31:03.206732 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:31:03.208762 systemd-logind[1454]: Removed session 11. Mar 17 17:31:08.212198 systemd[1]: Started sshd@11-10.0.0.80:22-10.0.0.1:59188.service - OpenSSH per-connection server daemon (10.0.0.1:59188). Mar 17 17:31:08.255784 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 59188 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:08.257246 sshd-session[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:08.261260 systemd-logind[1454]: New session 12 of user core. Mar 17 17:31:08.271776 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 17 17:31:08.392471 sshd[4024]: Connection closed by 10.0.0.1 port 59188 Mar 17 17:31:08.392951 sshd-session[4022]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:08.404516 systemd[1]: sshd@11-10.0.0.80:22-10.0.0.1:59188.service: Deactivated successfully. Mar 17 17:31:08.406377 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:31:08.407817 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:31:08.420177 systemd[1]: Started sshd@12-10.0.0.80:22-10.0.0.1:59200.service - OpenSSH per-connection server daemon (10.0.0.1:59200). Mar 17 17:31:08.421205 systemd-logind[1454]: Removed session 12. Mar 17 17:31:08.465619 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 59200 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:08.466929 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:08.471116 systemd-logind[1454]: New session 13 of user core. Mar 17 17:31:08.481739 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:31:08.648571 sshd[4039]: Connection closed by 10.0.0.1 port 59200 Mar 17 17:31:08.649082 sshd-session[4037]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:08.662061 systemd[1]: sshd@12-10.0.0.80:22-10.0.0.1:59200.service: Deactivated successfully. Mar 17 17:31:08.665332 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:31:08.668129 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:31:08.676578 systemd[1]: Started sshd@13-10.0.0.80:22-10.0.0.1:59210.service - OpenSSH per-connection server daemon (10.0.0.1:59210). Mar 17 17:31:08.678845 systemd-logind[1454]: Removed session 13. Mar 17 17:31:08.723171 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 59210 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:08.724666 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:08.729169 systemd-logind[1454]: New session 14 of user core. Mar 17 17:31:08.738772 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:31:08.873201 sshd[4051]: Connection closed by 10.0.0.1 port 59210 Mar 17 17:31:08.873601 sshd-session[4049]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:08.877222 systemd[1]: sshd@13-10.0.0.80:22-10.0.0.1:59210.service: Deactivated successfully. Mar 17 17:31:08.880474 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:31:08.881201 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:31:08.882316 systemd-logind[1454]: Removed session 14. Mar 17 17:31:13.888379 systemd[1]: Started sshd@14-10.0.0.80:22-10.0.0.1:41424.service - OpenSSH per-connection server daemon (10.0.0.1:41424). Mar 17 17:31:13.933485 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 41424 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:13.934765 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:13.938773 systemd-logind[1454]: New session 15 of user core. Mar 17 17:31:13.952755 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 17 17:31:14.067255 sshd[4067]: Connection closed by 10.0.0.1 port 41424 Mar 17 17:31:14.067650 sshd-session[4065]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:14.070925 systemd[1]: sshd@14-10.0.0.80:22-10.0.0.1:41424.service: Deactivated successfully. Mar 17 17:31:14.072692 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:31:14.074737 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:31:14.075879 systemd-logind[1454]: Removed session 15. Mar 17 17:31:19.083706 systemd[1]: Started sshd@15-10.0.0.80:22-10.0.0.1:41438.service - OpenSSH per-connection server daemon (10.0.0.1:41438). Mar 17 17:31:19.129980 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 41438 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:19.131297 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:19.136849 systemd-logind[1454]: New session 16 of user core. Mar 17 17:31:19.145775 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:31:19.273912 sshd[4084]: Connection closed by 10.0.0.1 port 41438 Mar 17 17:31:19.275319 sshd-session[4082]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:19.281123 systemd[1]: sshd@15-10.0.0.80:22-10.0.0.1:41438.service: Deactivated successfully. Mar 17 17:31:19.283760 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:31:19.287836 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:31:19.292900 systemd[1]: Started sshd@16-10.0.0.80:22-10.0.0.1:41450.service - OpenSSH per-connection server daemon (10.0.0.1:41450). Mar 17 17:31:19.294832 systemd-logind[1454]: Removed session 16. Mar 17 17:31:19.334216 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 41450 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:19.335925 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:19.340312 systemd-logind[1454]: New session 17 of user core. Mar 17 17:31:19.358754 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:31:19.590106 sshd[4099]: Connection closed by 10.0.0.1 port 41450 Mar 17 17:31:19.592391 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:19.601367 systemd[1]: sshd@16-10.0.0.80:22-10.0.0.1:41450.service: Deactivated successfully. Mar 17 17:31:19.603205 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:31:19.605153 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:31:19.606873 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:41462.service - OpenSSH per-connection server daemon (10.0.0.1:41462). Mar 17 17:31:19.607709 systemd-logind[1454]: Removed session 17. Mar 17 17:31:19.655154 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 41462 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:19.656452 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:19.660616 systemd-logind[1454]: New session 18 of user core. Mar 17 17:31:19.669761 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 17 17:31:21.047704 sshd[4111]: Connection closed by 10.0.0.1 port 41462 Mar 17 17:31:21.048811 sshd-session[4109]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:21.057589 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:41462.service: Deactivated successfully. Mar 17 17:31:21.061813 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:31:21.065630 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:31:21.077782 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:41464.service - OpenSSH per-connection server daemon (10.0.0.1:41464). Mar 17 17:31:21.079485 systemd-logind[1454]: Removed session 18. Mar 17 17:31:21.120527 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 41464 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:21.121093 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:21.125890 systemd-logind[1454]: New session 19 of user core. Mar 17 17:31:21.132733 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:31:21.374573 sshd[4139]: Connection closed by 10.0.0.1 port 41464 Mar 17 17:31:21.375406 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:21.385260 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:41464.service: Deactivated successfully. Mar 17 17:31:21.386901 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:31:21.388702 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:31:21.400976 systemd[1]: Started sshd@19-10.0.0.80:22-10.0.0.1:41466.service - OpenSSH per-connection server daemon (10.0.0.1:41466). Mar 17 17:31:21.401915 systemd-logind[1454]: Removed session 19. Mar 17 17:31:21.439582 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 41466 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:21.441027 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:21.445134 systemd-logind[1454]: New session 20 of user core. Mar 17 17:31:21.458737 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 17:31:21.574434 sshd[4152]: Connection closed by 10.0.0.1 port 41466 Mar 17 17:31:21.574834 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:21.578187 systemd[1]: sshd@19-10.0.0.80:22-10.0.0.1:41466.service: Deactivated successfully. Mar 17 17:31:21.580903 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:31:21.581798 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:31:21.582595 systemd-logind[1454]: Removed session 20. Mar 17 17:31:26.590066 systemd[1]: Started sshd@20-10.0.0.80:22-10.0.0.1:60718.service - OpenSSH per-connection server daemon (10.0.0.1:60718). Mar 17 17:31:26.630599 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 60718 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:26.631787 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:26.635602 systemd-logind[1454]: New session 21 of user core. Mar 17 17:31:26.644715 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 17 17:31:26.760488 sshd[4171]: Connection closed by 10.0.0.1 port 60718 Mar 17 17:31:26.761133 sshd-session[4169]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:26.764599 systemd[1]: sshd@20-10.0.0.80:22-10.0.0.1:60718.service: Deactivated successfully. Mar 17 17:31:26.768045 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:31:26.768759 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:31:26.769794 systemd-logind[1454]: Removed session 21. Mar 17 17:31:30.039506 kubelet[2536]: E0317 17:31:30.039459 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:31.771183 systemd[1]: Started sshd@21-10.0.0.80:22-10.0.0.1:60732.service - OpenSSH per-connection server daemon (10.0.0.1:60732). Mar 17 17:31:31.814876 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 60732 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:31.816106 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:31.820360 systemd-logind[1454]: New session 22 of user core. Mar 17 17:31:31.826759 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 17:31:31.933930 sshd[4185]: Connection closed by 10.0.0.1 port 60732 Mar 17 17:31:31.934472 sshd-session[4183]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:31.939281 systemd[1]: sshd@21-10.0.0.80:22-10.0.0.1:60732.service: Deactivated successfully. Mar 17 17:31:31.940835 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:31:31.941428 systemd-logind[1454]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:31:31.942385 systemd-logind[1454]: Removed session 22. Mar 17 17:31:36.039445 kubelet[2536]: E0317 17:31:36.039399 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:36.948246 systemd[1]: Started sshd@22-10.0.0.80:22-10.0.0.1:51986.service - OpenSSH per-connection server daemon (10.0.0.1:51986). Mar 17 17:31:36.989829 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 51986 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:36.991224 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:36.994969 systemd-logind[1454]: New session 23 of user core. Mar 17 17:31:37.004738 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:31:37.115249 sshd[4199]: Connection closed by 10.0.0.1 port 51986 Mar 17 17:31:37.115612 sshd-session[4197]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:37.129181 systemd[1]: sshd@22-10.0.0.80:22-10.0.0.1:51986.service: Deactivated successfully. Mar 17 17:31:37.130709 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:31:37.133766 systemd-logind[1454]: Session 23 logged out. Waiting for processes to exit. Mar 17 17:31:37.147898 systemd[1]: Started sshd@23-10.0.0.80:22-10.0.0.1:52000.service - OpenSSH per-connection server daemon (10.0.0.1:52000). Mar 17 17:31:37.148820 systemd-logind[1454]: Removed session 23. 
Mar 17 17:31:37.188278 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 52000 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:37.189698 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:37.194124 systemd-logind[1454]: New session 24 of user core. Mar 17 17:31:37.203712 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 17 17:31:39.382706 containerd[1470]: time="2025-03-17T17:31:39.382406235Z" level=info msg="StopContainer for \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\" with timeout 30 (s)" Mar 17 17:31:39.383240 containerd[1470]: time="2025-03-17T17:31:39.383181707Z" level=info msg="Stop container \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\" with signal terminated" Mar 17 17:31:39.394164 systemd[1]: run-containerd-runc-k8s.io-1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7-runc.c2mg3t.mount: Deactivated successfully. Mar 17 17:31:39.397753 systemd[1]: cri-containerd-1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45.scope: Deactivated successfully. Mar 17 17:31:39.414406 containerd[1470]: time="2025-03-17T17:31:39.414308344Z" level=info msg="StopContainer for \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\" with timeout 2 (s)" Mar 17 17:31:39.415364 containerd[1470]: time="2025-03-17T17:31:39.415338813Z" level=info msg="Stop container \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\" with signal terminated" Mar 17 17:31:39.415602 containerd[1470]: time="2025-03-17T17:31:39.415424172Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:31:39.422589 systemd-networkd[1397]: lxc_health: Link DOWN Mar 17 17:31:39.422599 systemd-networkd[1397]: lxc_health: Lost carrier Mar 17 17:31:39.432046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45-rootfs.mount: Deactivated successfully. Mar 17 17:31:39.440953 containerd[1470]: time="2025-03-17T17:31:39.440825028Z" level=info msg="shim disconnected" id=1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45 namespace=k8s.io Mar 17 17:31:39.440953 containerd[1470]: time="2025-03-17T17:31:39.440879027Z" level=warning msg="cleaning up after shim disconnected" id=1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45 namespace=k8s.io Mar 17 17:31:39.440953 containerd[1470]: time="2025-03-17T17:31:39.440887507Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:31:39.446374 systemd[1]: cri-containerd-1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7.scope: Deactivated successfully. Mar 17 17:31:39.446887 systemd[1]: cri-containerd-1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7.scope: Consumed 6.881s CPU time. Mar 17 17:31:39.462493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7-rootfs.mount: Deactivated successfully. 
Mar 17 17:31:39.466652 containerd[1470]: time="2025-03-17T17:31:39.466557880Z" level=info msg="shim disconnected" id=1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7 namespace=k8s.io Mar 17 17:31:39.466652 containerd[1470]: time="2025-03-17T17:31:39.466645599Z" level=warning msg="cleaning up after shim disconnected" id=1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7 namespace=k8s.io Mar 17 17:31:39.466652 containerd[1470]: time="2025-03-17T17:31:39.466654559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:31:39.488516 containerd[1470]: time="2025-03-17T17:31:39.488472772Z" level=info msg="StopContainer for \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\" returns successfully" Mar 17 17:31:39.489335 containerd[1470]: time="2025-03-17T17:31:39.489243404Z" level=info msg="StopContainer for \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\" returns successfully" Mar 17 17:31:39.490526 containerd[1470]: time="2025-03-17T17:31:39.490498711Z" level=info msg="StopPodSandbox for \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\"" Mar 17 17:31:39.490638 containerd[1470]: time="2025-03-17T17:31:39.490537631Z" level=info msg="Container to stop \"18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:31:39.490638 containerd[1470]: time="2025-03-17T17:31:39.490628790Z" level=info msg="Container to stop \"1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:31:39.490638 containerd[1470]: time="2025-03-17T17:31:39.490637070Z" level=info msg="Container to stop \"072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:31:39.490714 containerd[1470]: time="2025-03-17T17:31:39.490648549Z" level=info msg="Container to stop \"f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:31:39.490714 containerd[1470]: time="2025-03-17T17:31:39.490656789Z" level=info msg="Container to stop \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:31:39.491433 containerd[1470]: time="2025-03-17T17:31:39.491360822Z" level=info msg="StopPodSandbox for \"e95ea00d0755b71e2249e86bccf33ae2703d5d950d37cd4dd65ad76150643b4c\"" Mar 17 17:31:39.491433 containerd[1470]: time="2025-03-17T17:31:39.491406902Z" level=info msg="Container to stop \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:31:39.493228 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e95ea00d0755b71e2249e86bccf33ae2703d5d950d37cd4dd65ad76150643b4c-shm.mount: Deactivated successfully. Mar 17 17:31:39.493461 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105-shm.mount: Deactivated successfully. Mar 17 17:31:39.495840 systemd[1]: cri-containerd-7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105.scope: Deactivated successfully. Mar 17 17:31:39.500336 systemd[1]: cri-containerd-e95ea00d0755b71e2249e86bccf33ae2703d5d950d37cd4dd65ad76150643b4c.scope: Deactivated successfully. 
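The teardown above is the CRI flow for deleting the two cilium pods: StopContainer carries the grace period (30 s for cilium-operator, 2 s for the agent), the runtime sends SIGTERM and escalates to SIGKILL at the deadline, and StopPodSandbox then enumerates every container in the sandbox, noting that already-exited ones ("current state CONTAINER_EXITED") need no further action. A sketch of the same StopContainer call made directly against containerd's CRI socket with k8s.io/cri-api; illustrative only, not kubelet's actual call site:

```go
package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumes containerd's CRI socket at its conventional path.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// Timeout is the grace period in seconds, matching the
	// "with timeout 30 (s)" wording in the log above.
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	_, _ = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45",
		Timeout:     30,
	})
}
```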
Mar 17 17:31:39.537460 containerd[1470]: time="2025-03-17T17:31:39.537397543Z" level=info msg="shim disconnected" id=7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105 namespace=k8s.io Mar 17 17:31:39.537460 containerd[1470]: time="2025-03-17T17:31:39.537454143Z" level=warning msg="cleaning up after shim disconnected" id=7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105 namespace=k8s.io Mar 17 17:31:39.537460 containerd[1470]: time="2025-03-17T17:31:39.537461982Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:31:39.565100 containerd[1470]: time="2025-03-17T17:31:39.565050335Z" level=info msg="TearDown network for sandbox \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\" successfully" Mar 17 17:31:39.565100 containerd[1470]: time="2025-03-17T17:31:39.565093695Z" level=info msg="StopPodSandbox for \"7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105\" returns successfully" Mar 17 17:31:39.569658 containerd[1470]: time="2025-03-17T17:31:39.569603288Z" level=info msg="shim disconnected" id=e95ea00d0755b71e2249e86bccf33ae2703d5d950d37cd4dd65ad76150643b4c namespace=k8s.io Mar 17 17:31:39.569658 containerd[1470]: time="2025-03-17T17:31:39.569657328Z" level=warning msg="cleaning up after shim disconnected" id=e95ea00d0755b71e2249e86bccf33ae2703d5d950d37cd4dd65ad76150643b4c namespace=k8s.io Mar 17 17:31:39.569834 containerd[1470]: time="2025-03-17T17:31:39.569669367Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:31:39.581832 containerd[1470]: time="2025-03-17T17:31:39.581705842Z" level=info msg="TearDown network for sandbox \"e95ea00d0755b71e2249e86bccf33ae2703d5d950d37cd4dd65ad76150643b4c\" successfully" Mar 17 17:31:39.581832 containerd[1470]: time="2025-03-17T17:31:39.581742722Z" level=info msg="StopPodSandbox for \"e95ea00d0755b71e2249e86bccf33ae2703d5d950d37cd4dd65ad76150643b4c\" returns successfully" Mar 17 17:31:39.706182 kubelet[2536]: I0317 17:31:39.705291 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-hostproc\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.706182 kubelet[2536]: I0317 17:31:39.705338 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-bpf-maps\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.706182 kubelet[2536]: I0317 17:31:39.705363 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c9b858bb-8376-486b-86d0-106d3671368c-hubble-tls\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.706182 kubelet[2536]: I0317 17:31:39.705421 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-lib-modules\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.706182 kubelet[2536]: I0317 17:31:39.705438 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-host-proc-sys-net\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.706182 kubelet[2536]: I0317 17:31:39.705454 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-xtables-lock\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.706691 kubelet[2536]: I0317 17:31:39.705474 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b217fce8-6cdf-4c9f-8548-8c66257ee38e-cilium-config-path\") pod \"b217fce8-6cdf-4c9f-8548-8c66257ee38e\" (UID: \"b217fce8-6cdf-4c9f-8548-8c66257ee38e\") " Mar 17 17:31:39.706691 kubelet[2536]: I0317 17:31:39.705492 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c9b858bb-8376-486b-86d0-106d3671368c-cilium-config-path\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.706691 kubelet[2536]: I0317 17:31:39.705509 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmh8t\" (UniqueName: \"kubernetes.io/projected/b217fce8-6cdf-4c9f-8548-8c66257ee38e-kube-api-access-lmh8t\") pod \"b217fce8-6cdf-4c9f-8548-8c66257ee38e\" (UID: \"b217fce8-6cdf-4c9f-8548-8c66257ee38e\") " Mar 17 17:31:39.706691 kubelet[2536]: I0317 17:31:39.705525 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-cilium-cgroup\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.706691 kubelet[2536]: I0317 17:31:39.705568 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnhtp\" (UniqueName: \"kubernetes.io/projected/c9b858bb-8376-486b-86d0-106d3671368c-kube-api-access-jnhtp\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.706691 kubelet[2536]: I0317 17:31:39.705586 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-etc-cni-netd\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.706822 kubelet[2536]: I0317 17:31:39.705602 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-host-proc-sys-kernel\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.706822 kubelet[2536]: I0317 17:31:39.705623 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c9b858bb-8376-486b-86d0-106d3671368c-clustermesh-secrets\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.706822 kubelet[2536]: I0317 17:31:39.705637 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-cni-path\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.706822 kubelet[2536]: I0317 17:31:39.705652 2536 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-cilium-run\") pod \"c9b858bb-8376-486b-86d0-106d3671368c\" (UID: \"c9b858bb-8376-486b-86d0-106d3671368c\") " Mar 17 17:31:39.709644 kubelet[2536]: I0317 17:31:39.709309 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-hostproc" (OuterVolumeSpecName: "hostproc") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:31:39.709644 kubelet[2536]: I0317 17:31:39.709313 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:31:39.709644 kubelet[2536]: I0317 17:31:39.709386 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:31:39.709644 kubelet[2536]: I0317 17:31:39.709403 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:31:39.710197 kubelet[2536]: I0317 17:31:39.710161 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:31:39.710275 kubelet[2536]: I0317 17:31:39.710220 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:31:39.710713 kubelet[2536]: I0317 17:31:39.710677 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:31:39.710778 kubelet[2536]: I0317 17:31:39.710719 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:31:39.710778 kubelet[2536]: I0317 17:31:39.710739 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:31:39.711459 kubelet[2536]: I0317 17:31:39.711422 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b217fce8-6cdf-4c9f-8548-8c66257ee38e-kube-api-access-lmh8t" (OuterVolumeSpecName: "kube-api-access-lmh8t") pod "b217fce8-6cdf-4c9f-8548-8c66257ee38e" (UID: "b217fce8-6cdf-4c9f-8548-8c66257ee38e"). InnerVolumeSpecName "kube-api-access-lmh8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:31:39.711554 kubelet[2536]: I0317 17:31:39.711521 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b858bb-8376-486b-86d0-106d3671368c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:31:39.711594 kubelet[2536]: I0317 17:31:39.711585 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-cni-path" (OuterVolumeSpecName: "cni-path") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:31:39.713093 kubelet[2536]: I0317 17:31:39.713052 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b858bb-8376-486b-86d0-106d3671368c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:31:39.713303 kubelet[2536]: I0317 17:31:39.713261 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b858bb-8376-486b-86d0-106d3671368c-kube-api-access-jnhtp" (OuterVolumeSpecName: "kube-api-access-jnhtp") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "kube-api-access-jnhtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:31:39.713428 kubelet[2536]: I0317 17:31:39.713381 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b858bb-8376-486b-86d0-106d3671368c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c9b858bb-8376-486b-86d0-106d3671368c" (UID: "c9b858bb-8376-486b-86d0-106d3671368c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:31:39.714514 kubelet[2536]: I0317 17:31:39.714473 2536 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b217fce8-6cdf-4c9f-8548-8c66257ee38e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b217fce8-6cdf-4c9f-8548-8c66257ee38e" (UID: "b217fce8-6cdf-4c9f-8548-8c66257ee38e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:31:39.806634 kubelet[2536]: I0317 17:31:39.806586 2536 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jnhtp\" (UniqueName: \"kubernetes.io/projected/c9b858bb-8376-486b-86d0-106d3671368c-kube-api-access-jnhtp\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.806634 kubelet[2536]: I0317 17:31:39.806624 2536 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.806634 kubelet[2536]: I0317 17:31:39.806636 2536 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.806634 kubelet[2536]: I0317 17:31:39.806644 2536 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c9b858bb-8376-486b-86d0-106d3671368c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.806830 kubelet[2536]: I0317 17:31:39.806652 2536 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.806830 kubelet[2536]: I0317 17:31:39.806660 2536 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.806830 kubelet[2536]: I0317 17:31:39.806667 2536 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.806830 kubelet[2536]: I0317 17:31:39.806675 2536 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.806830 kubelet[2536]: I0317 17:31:39.806684 2536 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.806830 kubelet[2536]: I0317 17:31:39.806691 2536 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.806830 kubelet[2536]: I0317 17:31:39.806698 2536 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c9b858bb-8376-486b-86d0-106d3671368c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.806830 kubelet[2536]: I0317 17:31:39.806705 2536 
reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.807049 kubelet[2536]: I0317 17:31:39.806716 2536 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b217fce8-6cdf-4c9f-8548-8c66257ee38e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.807049 kubelet[2536]: I0317 17:31:39.806723 2536 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c9b858bb-8376-486b-86d0-106d3671368c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.807049 kubelet[2536]: I0317 17:31:39.806730 2536 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c9b858bb-8376-486b-86d0-106d3671368c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:39.807049 kubelet[2536]: I0317 17:31:39.806737 2536 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lmh8t\" (UniqueName: \"kubernetes.io/projected/b217fce8-6cdf-4c9f-8548-8c66257ee38e-kube-api-access-lmh8t\") on node \"localhost\" DevicePath \"\"" Mar 17 17:31:40.300516 kubelet[2536]: I0317 17:31:40.300388 2536 scope.go:117] "RemoveContainer" containerID="1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45" Mar 17 17:31:40.302692 containerd[1470]: time="2025-03-17T17:31:40.302372090Z" level=info msg="RemoveContainer for \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\"" Mar 17 17:31:40.320153 systemd[1]: Removed slice kubepods-besteffort-podb217fce8_6cdf_4c9f_8548_8c66257ee38e.slice - libcontainer container kubepods-besteffort-podb217fce8_6cdf_4c9f_8548_8c66257ee38e.slice. Mar 17 17:31:40.322192 systemd[1]: Removed slice kubepods-burstable-podc9b858bb_8376_486b_86d0_106d3671368c.slice - libcontainer container kubepods-burstable-podc9b858bb_8376_486b_86d0_106d3671368c.slice. Mar 17 17:31:40.322287 systemd[1]: kubepods-burstable-podc9b858bb_8376_486b_86d0_106d3671368c.slice: Consumed 7.045s CPU time. 
Mar 17 17:31:40.329082 containerd[1470]: time="2025-03-17T17:31:40.328916299Z" level=info msg="RemoveContainer for \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\" returns successfully" Mar 17 17:31:40.329265 kubelet[2536]: I0317 17:31:40.329236 2536 scope.go:117] "RemoveContainer" containerID="1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45" Mar 17 17:31:40.329575 containerd[1470]: time="2025-03-17T17:31:40.329476134Z" level=error msg="ContainerStatus for \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\": not found" Mar 17 17:31:40.333484 kubelet[2536]: E0317 17:31:40.333427 2536 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\": not found" containerID="1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45" Mar 17 17:31:40.333604 kubelet[2536]: I0317 17:31:40.333487 2536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45"} err="failed to get container status \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\": rpc error: code = NotFound desc = an error occurred when try to find container \"1610fcd61e75cd7902062b93745b84d7519436b4905120bda71ba6ae719a0e45\": not found" Mar 17 17:31:40.333604 kubelet[2536]: I0317 17:31:40.333590 2536 scope.go:117] "RemoveContainer" containerID="1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7" Mar 17 17:31:40.335692 containerd[1470]: time="2025-03-17T17:31:40.335096676Z" level=info msg="RemoveContainer for \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\"" Mar 17 17:31:40.346350 containerd[1470]: time="2025-03-17T17:31:40.346120724Z" level=info msg="RemoveContainer for \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\" returns successfully" Mar 17 17:31:40.346771 kubelet[2536]: I0317 17:31:40.346659 2536 scope.go:117] "RemoveContainer" containerID="f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95" Mar 17 17:31:40.348086 containerd[1470]: time="2025-03-17T17:31:40.348055024Z" level=info msg="RemoveContainer for \"f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95\"" Mar 17 17:31:40.353708 containerd[1470]: time="2025-03-17T17:31:40.353675767Z" level=info msg="RemoveContainer for \"f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95\" returns successfully" Mar 17 17:31:40.353939 kubelet[2536]: I0317 17:31:40.353896 2536 scope.go:117] "RemoveContainer" containerID="072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa" Mar 17 17:31:40.354987 containerd[1470]: time="2025-03-17T17:31:40.354942194Z" level=info msg="RemoveContainer for \"072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa\"" Mar 17 17:31:40.357708 containerd[1470]: time="2025-03-17T17:31:40.357670086Z" level=info msg="RemoveContainer for \"072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa\" returns successfully" Mar 17 17:31:40.357927 kubelet[2536]: I0317 17:31:40.357905 2536 scope.go:117] "RemoveContainer" containerID="1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68" Mar 17 17:31:40.358915 containerd[1470]: 
time="2025-03-17T17:31:40.358891394Z" level=info msg="RemoveContainer for \"1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68\"" Mar 17 17:31:40.361191 containerd[1470]: time="2025-03-17T17:31:40.361154371Z" level=info msg="RemoveContainer for \"1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68\" returns successfully" Mar 17 17:31:40.361350 kubelet[2536]: I0317 17:31:40.361318 2536 scope.go:117] "RemoveContainer" containerID="18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545" Mar 17 17:31:40.362391 containerd[1470]: time="2025-03-17T17:31:40.362365518Z" level=info msg="RemoveContainer for \"18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545\"" Mar 17 17:31:40.364997 containerd[1470]: time="2025-03-17T17:31:40.364962292Z" level=info msg="RemoveContainer for \"18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545\" returns successfully" Mar 17 17:31:40.365197 kubelet[2536]: I0317 17:31:40.365173 2536 scope.go:117] "RemoveContainer" containerID="1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7" Mar 17 17:31:40.365567 containerd[1470]: time="2025-03-17T17:31:40.365523886Z" level=error msg="ContainerStatus for \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\": not found" Mar 17 17:31:40.365695 kubelet[2536]: E0317 17:31:40.365670 2536 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\": not found" containerID="1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7" Mar 17 17:31:40.365729 kubelet[2536]: I0317 17:31:40.365703 2536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7"} err="failed to get container status \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b92cb949d2a1d90967992940d733dfbdd233f087cd9ce04a2cac218c4ee32a7\": not found" Mar 17 17:31:40.365729 kubelet[2536]: I0317 17:31:40.365727 2536 scope.go:117] "RemoveContainer" containerID="f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95" Mar 17 17:31:40.366018 containerd[1470]: time="2025-03-17T17:31:40.365905922Z" level=error msg="ContainerStatus for \"f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95\": not found" Mar 17 17:31:40.366079 kubelet[2536]: E0317 17:31:40.366031 2536 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95\": not found" containerID="f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95" Mar 17 17:31:40.366079 kubelet[2536]: I0317 17:31:40.366065 2536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95"} err="failed to get container status 
\"f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95\": rpc error: code = NotFound desc = an error occurred when try to find container \"f123408039dcbbacba398f758134b2362a60e5b56d112add43c38847b9e13c95\": not found" Mar 17 17:31:40.366134 kubelet[2536]: I0317 17:31:40.366083 2536 scope.go:117] "RemoveContainer" containerID="072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa" Mar 17 17:31:40.366290 containerd[1470]: time="2025-03-17T17:31:40.366247319Z" level=error msg="ContainerStatus for \"072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa\": not found" Mar 17 17:31:40.366569 kubelet[2536]: E0317 17:31:40.366421 2536 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa\": not found" containerID="072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa" Mar 17 17:31:40.366569 kubelet[2536]: I0317 17:31:40.366466 2536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa"} err="failed to get container status \"072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"072a6cd2c384a144d18b0f396566a185b72cea1e7ed2ecf9934c98b9149f9eaa\": not found" Mar 17 17:31:40.366569 kubelet[2536]: I0317 17:31:40.366483 2536 scope.go:117] "RemoveContainer" containerID="1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68" Mar 17 17:31:40.366903 containerd[1470]: time="2025-03-17T17:31:40.366670955Z" level=error msg="ContainerStatus for \"1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68\": not found" Mar 17 17:31:40.366961 kubelet[2536]: E0317 17:31:40.366797 2536 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68\": not found" containerID="1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68" Mar 17 17:31:40.366961 kubelet[2536]: I0317 17:31:40.366820 2536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68"} err="failed to get container status \"1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a0506dcbebcf21d506b66c893c5e0ec8bca070a97f5507e21fab398906dfc68\": not found" Mar 17 17:31:40.366961 kubelet[2536]: I0317 17:31:40.366842 2536 scope.go:117] "RemoveContainer" containerID="18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545" Mar 17 17:31:40.367038 containerd[1470]: time="2025-03-17T17:31:40.366978351Z" level=error msg="ContainerStatus for \"18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545\": not found" Mar 17 17:31:40.367153 kubelet[2536]: E0317 17:31:40.367101 2536 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545\": not found" containerID="18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545" Mar 17 17:31:40.367191 kubelet[2536]: I0317 17:31:40.367161 2536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545"} err="failed to get container status \"18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545\": rpc error: code = NotFound desc = an error occurred when try to find container \"18fc7cab4f303a629feb4ea8892ea6d4f65f3fa8068ec57b8f042d355a4d0545\": not found" Mar 17 17:31:40.390137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e95ea00d0755b71e2249e86bccf33ae2703d5d950d37cd4dd65ad76150643b4c-rootfs.mount: Deactivated successfully. Mar 17 17:31:40.390233 systemd[1]: var-lib-kubelet-pods-b217fce8\x2d6cdf\x2d4c9f\x2d8548\x2d8c66257ee38e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlmh8t.mount: Deactivated successfully. Mar 17 17:31:40.390289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b28b1b235f930e9f40b966d7230c915b6f8bb10dcacf6dbcaddd45ad48e6105-rootfs.mount: Deactivated successfully. Mar 17 17:31:40.390333 systemd[1]: var-lib-kubelet-pods-c9b858bb\x2d8376\x2d486b\x2d86d0\x2d106d3671368c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djnhtp.mount: Deactivated successfully. Mar 17 17:31:40.390388 systemd[1]: var-lib-kubelet-pods-c9b858bb\x2d8376\x2d486b\x2d86d0\x2d106d3671368c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:31:40.390437 systemd[1]: var-lib-kubelet-pods-c9b858bb\x2d8376\x2d486b\x2d86d0\x2d106d3671368c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 17:31:41.041852 kubelet[2536]: I0317 17:31:41.041798 2536 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b217fce8-6cdf-4c9f-8548-8c66257ee38e" path="/var/lib/kubelet/pods/b217fce8-6cdf-4c9f-8548-8c66257ee38e/volumes" Mar 17 17:31:41.042222 kubelet[2536]: I0317 17:31:41.042199 2536 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b858bb-8376-486b-86d0-106d3671368c" path="/var/lib/kubelet/pods/c9b858bb-8376-486b-86d0-106d3671368c/volumes" Mar 17 17:31:41.324513 sshd[4213]: Connection closed by 10.0.0.1 port 52000 Mar 17 17:31:41.325012 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:41.335200 systemd[1]: sshd@23-10.0.0.80:22-10.0.0.1:52000.service: Deactivated successfully. Mar 17 17:31:41.337280 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 17:31:41.337678 systemd[1]: session-24.scope: Consumed 1.505s CPU time. Mar 17 17:31:41.339082 systemd-logind[1454]: Session 24 logged out. Waiting for processes to exit. Mar 17 17:31:41.340586 systemd[1]: Started sshd@24-10.0.0.80:22-10.0.0.1:52002.service - OpenSSH per-connection server daemon (10.0.0.1:52002). Mar 17 17:31:41.341289 systemd-logind[1454]: Removed session 24. 
Mar 17 17:31:41.389504 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 52002 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:41.390943 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:41.395233 systemd-logind[1454]: New session 25 of user core. Mar 17 17:31:41.403740 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 17 17:31:42.432510 sshd[4374]: Connection closed by 10.0.0.1 port 52002 Mar 17 17:31:42.433041 sshd-session[4372]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:42.445114 systemd[1]: sshd@24-10.0.0.80:22-10.0.0.1:52002.service: Deactivated successfully. Mar 17 17:31:42.450125 kubelet[2536]: E0317 17:31:42.449744 2536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c9b858bb-8376-486b-86d0-106d3671368c" containerName="apply-sysctl-overwrites" Mar 17 17:31:42.450125 kubelet[2536]: E0317 17:31:42.449776 2536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c9b858bb-8376-486b-86d0-106d3671368c" containerName="mount-bpf-fs" Mar 17 17:31:42.450125 kubelet[2536]: E0317 17:31:42.449784 2536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c9b858bb-8376-486b-86d0-106d3671368c" containerName="clean-cilium-state" Mar 17 17:31:42.450125 kubelet[2536]: E0317 17:31:42.449790 2536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c9b858bb-8376-486b-86d0-106d3671368c" containerName="mount-cgroup" Mar 17 17:31:42.450125 kubelet[2536]: E0317 17:31:42.449796 2536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c9b858bb-8376-486b-86d0-106d3671368c" containerName="cilium-agent" Mar 17 17:31:42.450125 kubelet[2536]: E0317 17:31:42.449801 2536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b217fce8-6cdf-4c9f-8548-8c66257ee38e" containerName="cilium-operator" Mar 17 17:31:42.450125 kubelet[2536]: I0317 17:31:42.449825 2536 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b858bb-8376-486b-86d0-106d3671368c" containerName="cilium-agent" Mar 17 17:31:42.450125 kubelet[2536]: I0317 17:31:42.449832 2536 memory_manager.go:354] "RemoveStaleState removing state" podUID="b217fce8-6cdf-4c9f-8548-8c66257ee38e" containerName="cilium-operator" Mar 17 17:31:42.449962 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 17:31:42.453459 systemd-logind[1454]: Session 25 logged out. Waiting for processes to exit. Mar 17 17:31:42.464096 systemd[1]: Started sshd@25-10.0.0.80:22-10.0.0.1:52010.service - OpenSSH per-connection server daemon (10.0.0.1:52010). Mar 17 17:31:42.464497 systemd-logind[1454]: Removed session 25. Mar 17 17:31:42.479508 systemd[1]: Created slice kubepods-burstable-podba035a88_2214_40f1_9b91_9d2699cfe226.slice - libcontainer container kubepods-burstable-podba035a88_2214_40f1_9b91_9d2699cfe226.slice. 
Mar 17 17:31:42.516293 sshd[4385]: Accepted publickey for core from 10.0.0.1 port 52010 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:42.517693 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:42.521706 kubelet[2536]: I0317 17:31:42.521221 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ba035a88-2214-40f1-9b91-9d2699cfe226-host-proc-sys-kernel\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.521706 kubelet[2536]: I0317 17:31:42.521261 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba035a88-2214-40f1-9b91-9d2699cfe226-lib-modules\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.521706 kubelet[2536]: I0317 17:31:42.521282 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ba035a88-2214-40f1-9b91-9d2699cfe226-cilium-ipsec-secrets\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.521706 kubelet[2536]: I0317 17:31:42.521300 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ba035a88-2214-40f1-9b91-9d2699cfe226-cni-path\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.521706 kubelet[2536]: I0317 17:31:42.521316 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ba035a88-2214-40f1-9b91-9d2699cfe226-clustermesh-secrets\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.521897 kubelet[2536]: I0317 17:31:42.521330 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5vq7\" (UniqueName: \"kubernetes.io/projected/ba035a88-2214-40f1-9b91-9d2699cfe226-kube-api-access-l5vq7\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.521897 kubelet[2536]: I0317 17:31:42.521345 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ba035a88-2214-40f1-9b91-9d2699cfe226-cilium-cgroup\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.521897 kubelet[2536]: I0317 17:31:42.521364 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ba035a88-2214-40f1-9b91-9d2699cfe226-host-proc-sys-net\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.521897 kubelet[2536]: I0317 17:31:42.521391 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ba035a88-2214-40f1-9b91-9d2699cfe226-cilium-config-path\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.521897 kubelet[2536]: I0317 17:31:42.521407 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ba035a88-2214-40f1-9b91-9d2699cfe226-hubble-tls\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.521897 kubelet[2536]: I0317 17:31:42.521424 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba035a88-2214-40f1-9b91-9d2699cfe226-etc-cni-netd\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.522024 kubelet[2536]: I0317 17:31:42.521440 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ba035a88-2214-40f1-9b91-9d2699cfe226-bpf-maps\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.522024 kubelet[2536]: I0317 17:31:42.521456 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ba035a88-2214-40f1-9b91-9d2699cfe226-hostproc\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.522024 kubelet[2536]: I0317 17:31:42.521472 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba035a88-2214-40f1-9b91-9d2699cfe226-xtables-lock\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.522024 kubelet[2536]: I0317 17:31:42.521491 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ba035a88-2214-40f1-9b91-9d2699cfe226-cilium-run\") pod \"cilium-6cpjs\" (UID: \"ba035a88-2214-40f1-9b91-9d2699cfe226\") " pod="kube-system/cilium-6cpjs" Mar 17 17:31:42.522434 systemd-logind[1454]: New session 26 of user core. Mar 17 17:31:42.527761 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 17 17:31:42.579663 sshd[4387]: Connection closed by 10.0.0.1 port 52010 Mar 17 17:31:42.580360 sshd-session[4385]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:42.591135 systemd[1]: sshd@25-10.0.0.80:22-10.0.0.1:52010.service: Deactivated successfully. Mar 17 17:31:42.593994 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 17:31:42.595269 systemd-logind[1454]: Session 26 logged out. Waiting for processes to exit. Mar 17 17:31:42.606905 systemd[1]: Started sshd@26-10.0.0.80:22-10.0.0.1:38012.service - OpenSSH per-connection server daemon (10.0.0.1:38012). Mar 17 17:31:42.608001 systemd-logind[1454]: Removed session 26. 
Mar 17 17:31:42.653110 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 38012 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:31:42.654435 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:42.659482 systemd-logind[1454]: New session 27 of user core. Mar 17 17:31:42.670914 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 17 17:31:42.784117 kubelet[2536]: E0317 17:31:42.784030 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:42.784698 containerd[1470]: time="2025-03-17T17:31:42.784523285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6cpjs,Uid:ba035a88-2214-40f1-9b91-9d2699cfe226,Namespace:kube-system,Attempt:0,}" Mar 17 17:31:42.804919 containerd[1470]: time="2025-03-17T17:31:42.804781687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:31:42.804919 containerd[1470]: time="2025-03-17T17:31:42.804898406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:31:42.804919 containerd[1470]: time="2025-03-17T17:31:42.804922086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:31:42.805480 containerd[1470]: time="2025-03-17T17:31:42.805438921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:31:42.826764 systemd[1]: Started cri-containerd-647d25ae55180cb66c67cd9f5dd9d73354346a908aadd018d57fcc670af3276e.scope - libcontainer container 647d25ae55180cb66c67cd9f5dd9d73354346a908aadd018d57fcc670af3276e. Mar 17 17:31:42.849382 containerd[1470]: time="2025-03-17T17:31:42.849338652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6cpjs,Uid:ba035a88-2214-40f1-9b91-9d2699cfe226,Namespace:kube-system,Attempt:0,} returns sandbox id \"647d25ae55180cb66c67cd9f5dd9d73354346a908aadd018d57fcc670af3276e\"" Mar 17 17:31:42.850461 kubelet[2536]: E0317 17:31:42.850347 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:42.854498 containerd[1470]: time="2025-03-17T17:31:42.854460721Z" level=info msg="CreateContainer within sandbox \"647d25ae55180cb66c67cd9f5dd9d73354346a908aadd018d57fcc670af3276e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:31:42.867748 containerd[1470]: time="2025-03-17T17:31:42.867691272Z" level=info msg="CreateContainer within sandbox \"647d25ae55180cb66c67cd9f5dd9d73354346a908aadd018d57fcc670af3276e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e663a6d3c5c198f05ee2e382d11f5d30ff81c4f6ce61a2dab44eceffbc86d601\"" Mar 17 17:31:42.868335 containerd[1470]: time="2025-03-17T17:31:42.868204907Z" level=info msg="StartContainer for \"e663a6d3c5c198f05ee2e382d11f5d30ff81c4f6ce61a2dab44eceffbc86d601\"" Mar 17 17:31:42.892738 systemd[1]: Started cri-containerd-e663a6d3c5c198f05ee2e382d11f5d30ff81c4f6ce61a2dab44eceffbc86d601.scope - libcontainer container e663a6d3c5c198f05ee2e382d11f5d30ff81c4f6ce61a2dab44eceffbc86d601. 
Mar 17 17:31:42.918144 containerd[1470]: time="2025-03-17T17:31:42.918009100Z" level=info msg="StartContainer for \"e663a6d3c5c198f05ee2e382d11f5d30ff81c4f6ce61a2dab44eceffbc86d601\" returns successfully" Mar 17 17:31:42.940981 systemd[1]: cri-containerd-e663a6d3c5c198f05ee2e382d11f5d30ff81c4f6ce61a2dab44eceffbc86d601.scope: Deactivated successfully. Mar 17 17:31:42.969844 containerd[1470]: time="2025-03-17T17:31:42.969671355Z" level=info msg="shim disconnected" id=e663a6d3c5c198f05ee2e382d11f5d30ff81c4f6ce61a2dab44eceffbc86d601 namespace=k8s.io Mar 17 17:31:42.969844 containerd[1470]: time="2025-03-17T17:31:42.969740314Z" level=warning msg="cleaning up after shim disconnected" id=e663a6d3c5c198f05ee2e382d11f5d30ff81c4f6ce61a2dab44eceffbc86d601 namespace=k8s.io Mar 17 17:31:42.969844 containerd[1470]: time="2025-03-17T17:31:42.969748314Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:31:43.317003 kubelet[2536]: E0317 17:31:43.316666 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:43.321302 containerd[1470]: time="2025-03-17T17:31:43.321238138Z" level=info msg="CreateContainer within sandbox \"647d25ae55180cb66c67cd9f5dd9d73354346a908aadd018d57fcc670af3276e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:31:43.337032 containerd[1470]: time="2025-03-17T17:31:43.336962707Z" level=info msg="CreateContainer within sandbox \"647d25ae55180cb66c67cd9f5dd9d73354346a908aadd018d57fcc670af3276e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"be2217ea6a68d64ee0cbbf741794a5c1a9970da089ebb473ff674e90424168d5\"" Mar 17 17:31:43.338686 containerd[1470]: time="2025-03-17T17:31:43.337572141Z" level=info msg="StartContainer for \"be2217ea6a68d64ee0cbbf741794a5c1a9970da089ebb473ff674e90424168d5\"" Mar 17 17:31:43.361716 systemd[1]: Started cri-containerd-be2217ea6a68d64ee0cbbf741794a5c1a9970da089ebb473ff674e90424168d5.scope - libcontainer container be2217ea6a68d64ee0cbbf741794a5c1a9970da089ebb473ff674e90424168d5. Mar 17 17:31:43.385859 containerd[1470]: time="2025-03-17T17:31:43.385740240Z" level=info msg="StartContainer for \"be2217ea6a68d64ee0cbbf741794a5c1a9970da089ebb473ff674e90424168d5\" returns successfully" Mar 17 17:31:43.393374 systemd[1]: cri-containerd-be2217ea6a68d64ee0cbbf741794a5c1a9970da089ebb473ff674e90424168d5.scope: Deactivated successfully. 
Mar 17 17:31:43.422334 containerd[1470]: time="2025-03-17T17:31:43.422259129Z" level=info msg="shim disconnected" id=be2217ea6a68d64ee0cbbf741794a5c1a9970da089ebb473ff674e90424168d5 namespace=k8s.io Mar 17 17:31:43.422735 containerd[1470]: time="2025-03-17T17:31:43.422534207Z" level=warning msg="cleaning up after shim disconnected" id=be2217ea6a68d64ee0cbbf741794a5c1a9970da089ebb473ff674e90424168d5 namespace=k8s.io Mar 17 17:31:43.422735 containerd[1470]: time="2025-03-17T17:31:43.422577326Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:31:44.108671 kubelet[2536]: E0317 17:31:44.108621 2536 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:31:44.319373 kubelet[2536]: E0317 17:31:44.319259 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:44.324786 containerd[1470]: time="2025-03-17T17:31:44.322349281Z" level=info msg="CreateContainer within sandbox \"647d25ae55180cb66c67cd9f5dd9d73354346a908aadd018d57fcc670af3276e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:31:44.349925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444864411.mount: Deactivated successfully. Mar 17 17:31:44.352473 containerd[1470]: time="2025-03-17T17:31:44.352421038Z" level=info msg="CreateContainer within sandbox \"647d25ae55180cb66c67cd9f5dd9d73354346a908aadd018d57fcc670af3276e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a8b234a0b5e7ffc96f9225cfee51f8e764ab69ace1faaf5a3faed587137982ee\"" Mar 17 17:31:44.353075 containerd[1470]: time="2025-03-17T17:31:44.353041952Z" level=info msg="StartContainer for \"a8b234a0b5e7ffc96f9225cfee51f8e764ab69ace1faaf5a3faed587137982ee\"" Mar 17 17:31:44.384763 systemd[1]: Started cri-containerd-a8b234a0b5e7ffc96f9225cfee51f8e764ab69ace1faaf5a3faed587137982ee.scope - libcontainer container a8b234a0b5e7ffc96f9225cfee51f8e764ab69ace1faaf5a3faed587137982ee. Mar 17 17:31:44.410984 containerd[1470]: time="2025-03-17T17:31:44.410889729Z" level=info msg="StartContainer for \"a8b234a0b5e7ffc96f9225cfee51f8e764ab69ace1faaf5a3faed587137982ee\" returns successfully" Mar 17 17:31:44.411247 systemd[1]: cri-containerd-a8b234a0b5e7ffc96f9225cfee51f8e764ab69ace1faaf5a3faed587137982ee.scope: Deactivated successfully. Mar 17 17:31:44.434214 containerd[1470]: time="2025-03-17T17:31:44.434150670Z" level=info msg="shim disconnected" id=a8b234a0b5e7ffc96f9225cfee51f8e764ab69ace1faaf5a3faed587137982ee namespace=k8s.io Mar 17 17:31:44.434214 containerd[1470]: time="2025-03-17T17:31:44.434204389Z" level=warning msg="cleaning up after shim disconnected" id=a8b234a0b5e7ffc96f9225cfee51f8e764ab69ace1faaf5a3faed587137982ee namespace=k8s.io Mar 17 17:31:44.434214 containerd[1470]: time="2025-03-17T17:31:44.434213149Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:31:44.626651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8b234a0b5e7ffc96f9225cfee51f8e764ab69ace1faaf5a3faed587137982ee-rootfs.mount: Deactivated successfully. 
Mar 17 17:31:45.323781 kubelet[2536]: E0317 17:31:45.323732 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:45.327348 containerd[1470]: time="2025-03-17T17:31:45.327292335Z" level=info msg="CreateContainer within sandbox \"647d25ae55180cb66c67cd9f5dd9d73354346a908aadd018d57fcc670af3276e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:31:45.349257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1499647175.mount: Deactivated successfully. Mar 17 17:31:45.352374 containerd[1470]: time="2025-03-17T17:31:45.352253265Z" level=info msg="CreateContainer within sandbox \"647d25ae55180cb66c67cd9f5dd9d73354346a908aadd018d57fcc670af3276e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7594e99267b6bf922aec7a29a4f30c28151c557a19318aa6053e2bf98ebc53b3\"" Mar 17 17:31:45.353609 containerd[1470]: time="2025-03-17T17:31:45.352774500Z" level=info msg="StartContainer for \"7594e99267b6bf922aec7a29a4f30c28151c557a19318aa6053e2bf98ebc53b3\"" Mar 17 17:31:45.377716 systemd[1]: Started cri-containerd-7594e99267b6bf922aec7a29a4f30c28151c557a19318aa6053e2bf98ebc53b3.scope - libcontainer container 7594e99267b6bf922aec7a29a4f30c28151c557a19318aa6053e2bf98ebc53b3. Mar 17 17:31:45.397843 systemd[1]: cri-containerd-7594e99267b6bf922aec7a29a4f30c28151c557a19318aa6053e2bf98ebc53b3.scope: Deactivated successfully. Mar 17 17:31:45.402082 containerd[1470]: time="2025-03-17T17:31:45.402007126Z" level=info msg="StartContainer for \"7594e99267b6bf922aec7a29a4f30c28151c557a19318aa6053e2bf98ebc53b3\" returns successfully" Mar 17 17:31:45.421745 containerd[1470]: time="2025-03-17T17:31:45.421641105Z" level=info msg="shim disconnected" id=7594e99267b6bf922aec7a29a4f30c28151c557a19318aa6053e2bf98ebc53b3 namespace=k8s.io Mar 17 17:31:45.421745 containerd[1470]: time="2025-03-17T17:31:45.421706865Z" level=warning msg="cleaning up after shim disconnected" id=7594e99267b6bf922aec7a29a4f30c28151c557a19318aa6053e2bf98ebc53b3 namespace=k8s.io Mar 17 17:31:45.421745 containerd[1470]: time="2025-03-17T17:31:45.421715745Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:31:45.626678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7594e99267b6bf922aec7a29a4f30c28151c557a19318aa6053e2bf98ebc53b3-rootfs.mount: Deactivated successfully. 
Mar 17 17:31:46.326536 kubelet[2536]: E0317 17:31:46.326503 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:46.331082 containerd[1470]: time="2025-03-17T17:31:46.331029902Z" level=info msg="CreateContainer within sandbox \"647d25ae55180cb66c67cd9f5dd9d73354346a908aadd018d57fcc670af3276e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:31:46.344953 containerd[1470]: time="2025-03-17T17:31:46.344832858Z" level=info msg="CreateContainer within sandbox \"647d25ae55180cb66c67cd9f5dd9d73354346a908aadd018d57fcc670af3276e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c35f8745dbebca57f5c33688044b26c6c1fe1446293310fab3460662e48c0988\"" Mar 17 17:31:46.346995 containerd[1470]: time="2025-03-17T17:31:46.346147046Z" level=info msg="StartContainer for \"c35f8745dbebca57f5c33688044b26c6c1fe1446293310fab3460662e48c0988\"" Mar 17 17:31:46.389781 systemd[1]: Started cri-containerd-c35f8745dbebca57f5c33688044b26c6c1fe1446293310fab3460662e48c0988.scope - libcontainer container c35f8745dbebca57f5c33688044b26c6c1fe1446293310fab3460662e48c0988. Mar 17 17:31:46.416526 containerd[1470]: time="2025-03-17T17:31:46.416394491Z" level=info msg="StartContainer for \"c35f8745dbebca57f5c33688044b26c6c1fe1446293310fab3460662e48c0988\" returns successfully" Mar 17 17:31:46.688572 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Mar 17 17:31:47.332564 kubelet[2536]: E0317 17:31:47.331073 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:47.387862 kubelet[2536]: I0317 17:31:47.387784 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6cpjs" podStartSLOduration=5.387768295 podStartE2EDuration="5.387768295s" podCreationTimestamp="2025-03-17 17:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:31:47.386984742 +0000 UTC m=+88.413810202" watchObservedRunningTime="2025-03-17 17:31:47.387768295 +0000 UTC m=+88.414593755" Mar 17 17:31:48.786577 kubelet[2536]: E0317 17:31:48.785035 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:49.611639 systemd-networkd[1397]: lxc_health: Link UP Mar 17 17:31:49.619595 systemd-networkd[1397]: lxc_health: Gained carrier Mar 17 17:31:50.040331 kubelet[2536]: E0317 17:31:50.040276 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:50.787067 kubelet[2536]: E0317 17:31:50.786992 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:51.339051 kubelet[2536]: E0317 17:31:51.338825 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:51.497775 systemd-networkd[1397]: lxc_health: Gained IPv6LL Mar 17 17:31:52.040258 kubelet[2536]: E0317 17:31:52.040201 
2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:31:55.538673 sshd[4400]: Connection closed by 10.0.0.1 port 38012 Mar 17 17:31:55.539608 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:55.543733 systemd[1]: sshd@26-10.0.0.80:22-10.0.0.1:38012.service: Deactivated successfully. Mar 17 17:31:55.548588 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 17:31:55.551094 systemd-logind[1454]: Session 27 logged out. Waiting for processes to exit. Mar 17 17:31:55.552200 systemd-logind[1454]: Removed session 27.