Mar 17 17:27:43.907664 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:27:43.907685 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025
Mar 17 17:27:43.907695 kernel: KASLR enabled
Mar 17 17:27:43.907701 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:27:43.907707 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Mar 17 17:27:43.907712 kernel: random: crng init done
Mar 17 17:27:43.907719 kernel: secureboot: Secure boot disabled
Mar 17 17:27:43.907725 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:27:43.907731 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 17 17:27:43.907739 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 17:27:43.907745 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:27:43.907751 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:27:43.907757 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:27:43.907771 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:27:43.907778 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:27:43.907787 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:27:43.907794 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:27:43.907800 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:27:43.907806 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:27:43.907813 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 17 17:27:43.907819 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:27:43.907825 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:27:43.907831 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Mar 17 17:27:43.907838 kernel: Zone ranges:
Mar 17 17:27:43.907844 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:27:43.907852 kernel: DMA32 empty
Mar 17 17:27:43.907858 kernel: Normal empty
Mar 17 17:27:43.907864 kernel: Movable zone start for each node
Mar 17 17:27:43.907870 kernel: Early memory node ranges
Mar 17 17:27:43.907876 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Mar 17 17:27:43.907883 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 17 17:27:43.907889 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 17 17:27:43.907895 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 17 17:27:43.907902 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 17 17:27:43.907908 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 17 17:27:43.907914 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 17 17:27:43.907921 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:27:43.907983 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 17 17:27:43.907990 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:27:43.907996 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:27:43.908006 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:27:43.908013 kernel: psci: Trusted OS migration not required
Mar 17 17:27:43.908019 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:27:43.908027 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 17:27:43.908034 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:27:43.908041 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:27:43.908048 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 17 17:27:43.908055 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:27:43.908061 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:27:43.908068 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:27:43.908075 kernel: CPU features: detected: Spectre-v4
Mar 17 17:27:43.908081 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:27:43.908088 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:27:43.908096 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:27:43.908102 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:27:43.908109 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:27:43.908116 kernel: alternatives: applying boot alternatives
Mar 17 17:27:43.908124 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:27:43.908131 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:27:43.908137 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:27:43.908144 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:27:43.908151 kernel: Fallback order for Node 0: 0
Mar 17 17:27:43.908157 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 17 17:27:43.908164 kernel: Policy zone: DMA
Mar 17 17:27:43.908172 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:27:43.908178 kernel: software IO TLB: area num 4.
Mar 17 17:27:43.908185 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 17 17:27:43.908192 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved)
Mar 17 17:27:43.908199 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 17:27:43.908206 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:27:43.908213 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:27:43.908220 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 17:27:43.908227 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:27:43.908233 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:27:43.908240 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:27:43.908247 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 17:27:43.908255 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:27:43.908262 kernel: GICv3: 256 SPIs implemented
Mar 17 17:27:43.908268 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:27:43.908275 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:27:43.908282 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:27:43.908288 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 17:27:43.908295 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 17:27:43.908302 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:27:43.908309 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:27:43.908315 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 17 17:27:43.908322 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 17 17:27:43.908330 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:27:43.908337 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:27:43.908344 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:27:43.908351 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:27:43.908358 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:27:43.908365 kernel: arm-pv: using stolen time PV
Mar 17 17:27:43.908372 kernel: Console: colour dummy device 80x25
Mar 17 17:27:43.908379 kernel: ACPI: Core revision 20230628
Mar 17 17:27:43.908386 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:27:43.908393 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:27:43.908401 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:27:43.908408 kernel: landlock: Up and running.
Mar 17 17:27:43.908415 kernel: SELinux: Initializing.
Mar 17 17:27:43.908421 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:27:43.908428 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:27:43.908435 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:27:43.908442 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:27:43.908449 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:27:43.908456 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:27:43.908464 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 17:27:43.908471 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 17:27:43.908478 kernel: Remapping and enabling EFI services.
Mar 17 17:27:43.908485 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:27:43.908492 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:27:43.908499 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 17:27:43.908506 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 17 17:27:43.908513 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:27:43.908519 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:27:43.908526 kernel: Detected PIPT I-cache on CPU2
Mar 17 17:27:43.908534 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 17 17:27:43.908542 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 17 17:27:43.908553 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:27:43.908561 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 17 17:27:43.908568 kernel: Detected PIPT I-cache on CPU3
Mar 17 17:27:43.908575 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 17 17:27:43.908582 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 17 17:27:43.908590 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:27:43.908597 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 17 17:27:43.908605 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 17:27:43.908612 kernel: SMP: Total of 4 processors activated.
Mar 17 17:27:43.908620 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:27:43.908627 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:27:43.908634 kernel: CPU features: detected: Common not Private translations
Mar 17 17:27:43.908642 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:27:43.908649 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 17 17:27:43.908656 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:27:43.908665 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:27:43.908672 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:27:43.908679 kernel: CPU features: detected: RAS Extension Support
Mar 17 17:27:43.908686 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 17:27:43.908694 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:27:43.908701 kernel: alternatives: applying system-wide alternatives
Mar 17 17:27:43.908708 kernel: devtmpfs: initialized
Mar 17 17:27:43.908715 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:27:43.908723 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 17:27:43.908731 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:27:43.908738 kernel: SMBIOS 3.0.0 present.
Mar 17 17:27:43.908746 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 17 17:27:43.908753 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:27:43.908764 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:27:43.908773 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:27:43.908781 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:27:43.908788 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:27:43.908795 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Mar 17 17:27:43.908804 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:27:43.908811 kernel: cpuidle: using governor menu
Mar 17 17:27:43.908818 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:27:43.908826 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:27:43.908833 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:27:43.908840 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:27:43.908847 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:27:43.908854 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:27:43.908862 kernel: Modules: 508944 pages in range for PLT usage
Mar 17 17:27:43.908870 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:27:43.908878 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:27:43.908885 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:27:43.908892 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:27:43.908900 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:27:43.908907 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:27:43.908914 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:27:43.908922 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:27:43.908933 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:27:43.908942 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:27:43.908950 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:27:43.908982 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:27:43.908989 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:27:43.908997 kernel: ACPI: Interpreter enabled
Mar 17 17:27:43.909004 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:27:43.909011 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:27:43.909018 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:27:43.909026 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:27:43.909033 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:27:43.909165 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:27:43.909239 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:27:43.909302 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:27:43.909371 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 17:27:43.909433 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 17:27:43.909443 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 17:27:43.909450 kernel: PCI host bridge to bus 0000:00
Mar 17 17:27:43.909520 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 17:27:43.909577 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:27:43.909632 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 17:27:43.909688 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:27:43.909771 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 17:27:43.909849 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 17:27:43.909916 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 17 17:27:43.909993 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 17 17:27:43.910058 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:27:43.910121 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:27:43.910196 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 17 17:27:43.910264 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 17 17:27:43.910330 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 17:27:43.910397 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:27:43.910461 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 17:27:43.910473 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:27:43.910480 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:27:43.910487 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:27:43.910495 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:27:43.910502 kernel: iommu: Default domain type: Translated
Mar 17 17:27:43.910510 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:27:43.910519 kernel: efivars: Registered efivars operations
Mar 17 17:27:43.910526 kernel: vgaarb: loaded
Mar 17 17:27:43.910533 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:27:43.910540 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:27:43.910547 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:27:43.910554 kernel: pnp: PnP ACPI init
Mar 17 17:27:43.910641 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 17:27:43.910651 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:27:43.910658 kernel: NET: Registered PF_INET protocol family
Mar 17 17:27:43.910667 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:27:43.910674 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:27:43.910681 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:27:43.910688 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:27:43.910695 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:27:43.910702 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:27:43.910710 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:27:43.910717 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:27:43.910725 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:27:43.910732 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:27:43.910739 kernel: kvm [1]: HYP mode not available
Mar 17 17:27:43.910746 kernel: Initialise system trusted keyrings
Mar 17 17:27:43.910753 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:27:43.910829 kernel: Key type asymmetric registered
Mar 17 17:27:43.910839 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:27:43.910846 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:27:43.910854 kernel: io scheduler mq-deadline registered
Mar 17 17:27:43.910865 kernel: io scheduler kyber registered
Mar 17 17:27:43.910873 kernel: io scheduler bfq registered
Mar 17 17:27:43.910880 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 17:27:43.910888 kernel: ACPI: button: Power Button [PWRB]
Mar 17 17:27:43.910896 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 17:27:43.911024 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 17 17:27:43.911036 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:27:43.911043 kernel: thunder_xcv, ver 1.0
Mar 17 17:27:43.911050 kernel: thunder_bgx, ver 1.0
Mar 17 17:27:43.911058 kernel: nicpf, ver 1.0
Mar 17 17:27:43.911068 kernel: nicvf, ver 1.0
Mar 17 17:27:43.911147 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:27:43.911211 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:27:43 UTC (1742232463)
Mar 17 17:27:43.911221 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:27:43.911228 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 17 17:27:43.911236 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:27:43.911243 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:27:43.911252 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:27:43.911260 kernel: Segment Routing with IPv6
Mar 17 17:27:43.911267 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:27:43.911274 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:27:43.911282 kernel: Key type dns_resolver registered
Mar 17 17:27:43.911289 kernel: registered taskstats version 1
Mar 17 17:27:43.911296 kernel: Loading compiled-in X.509 certificates
Mar 17 17:27:43.911304 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c'
Mar 17 17:27:43.911311 kernel: Key type .fscrypt registered
Mar 17 17:27:43.911318 kernel: Key type fscrypt-provisioning registered
Mar 17 17:27:43.911327 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:27:43.911335 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:27:43.911342 kernel: ima: No architecture policies found
Mar 17 17:27:43.911349 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:27:43.911356 kernel: clk: Disabling unused clocks
Mar 17 17:27:43.911364 kernel: Freeing unused kernel memory: 39744K
Mar 17 17:27:43.911371 kernel: Run /init as init process
Mar 17 17:27:43.911378 kernel: with arguments:
Mar 17 17:27:43.911387 kernel: /init
Mar 17 17:27:43.911394 kernel: with environment:
Mar 17 17:27:43.911401 kernel: HOME=/
Mar 17 17:27:43.911408 kernel: TERM=linux
Mar 17 17:27:43.911415 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:27:43.911424 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:27:43.911433 systemd[1]: Detected virtualization kvm.
Mar 17 17:27:43.911441 systemd[1]: Detected architecture arm64.
Mar 17 17:27:43.911450 systemd[1]: Running in initrd.
Mar 17 17:27:43.911457 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:27:43.911465 systemd[1]: Hostname set to <localhost>.
Mar 17 17:27:43.911473 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:27:43.911481 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:27:43.911489 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:27:43.911497 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:27:43.911505 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:27:43.911515 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:27:43.911523 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:27:43.911531 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:27:43.911540 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:27:43.911548 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:27:43.911556 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:27:43.911564 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:27:43.911573 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:27:43.911581 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:27:43.911589 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:27:43.911597 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:27:43.911605 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:27:43.911613 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:27:43.911620 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:27:43.911628 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:27:43.911636 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:27:43.911645 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:27:43.911653 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:27:43.911661 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:27:43.911669 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:27:43.911677 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:27:43.911685 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:27:43.911692 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:27:43.911700 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:27:43.911709 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:27:43.911717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:27:43.911725 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:27:43.911733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:27:43.911744 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:27:43.911756 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:27:43.911775 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:27:43.911802 systemd-journald[240]: Collecting audit messages is disabled.
Mar 17 17:27:43.911823 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:27:43.911831 systemd-journald[240]: Journal started
Mar 17 17:27:43.911850 systemd-journald[240]: Runtime Journal (/run/log/journal/4fde67ce3df6479188c83eda625c21f5) is 5.9M, max 47.3M, 41.4M free.
Mar 17 17:27:43.917015 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:27:43.895596 systemd-modules-load[241]: Inserted module 'overlay'
Mar 17 17:27:43.920579 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:27:43.920605 kernel: Bridge firewalling registered
Mar 17 17:27:43.921037 systemd-modules-load[241]: Inserted module 'br_netfilter'
Mar 17 17:27:43.922913 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:27:43.925781 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:27:43.925994 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:27:43.929851 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:27:43.932075 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:27:43.933154 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:27:43.937692 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:27:43.940741 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:27:43.945961 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:27:43.947052 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:27:43.950306 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:27:43.956743 dracut-cmdline[276]: dracut-dracut-053
Mar 17 17:27:43.959524 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:27:43.987212 systemd-resolved[284]: Positive Trust Anchors:
Mar 17 17:27:43.987285 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:27:43.987320 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:27:43.991884 systemd-resolved[284]: Defaulting to hostname 'linux'.
Mar 17 17:27:43.994127 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:27:43.995089 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:27:44.033953 kernel: SCSI subsystem initialized
Mar 17 17:27:44.037943 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:27:44.045954 kernel: iscsi: registered transport (tcp)
Mar 17 17:27:44.057975 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:27:44.058004 kernel: QLogic iSCSI HBA Driver
Mar 17 17:27:44.097088 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:27:44.108090 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:27:44.123815 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:27:44.123869 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:27:44.125072 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:27:44.171971 kernel: raid6: neonx8 gen() 15780 MB/s
Mar 17 17:27:44.188948 kernel: raid6: neonx4 gen() 15641 MB/s
Mar 17 17:27:44.205963 kernel: raid6: neonx2 gen() 13226 MB/s
Mar 17 17:27:44.222947 kernel: raid6: neonx1 gen() 10463 MB/s
Mar 17 17:27:44.239956 kernel: raid6: int64x8 gen() 6958 MB/s
Mar 17 17:27:44.256945 kernel: raid6: int64x4 gen() 7344 MB/s
Mar 17 17:27:44.273952 kernel: raid6: int64x2 gen() 6125 MB/s
Mar 17 17:27:44.290950 kernel: raid6: int64x1 gen() 5055 MB/s
Mar 17 17:27:44.290983 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s
Mar 17 17:27:44.307964 kernel: raid6: .... xor() 11906 MB/s, rmw enabled
Mar 17 17:27:44.307989 kernel: raid6: using neon recovery algorithm
Mar 17 17:27:44.312945 kernel: xor: measuring software checksum speed
Mar 17 17:27:44.312965 kernel: 8regs : 19802 MB/sec
Mar 17 17:27:44.312975 kernel: 32regs : 18486 MB/sec
Mar 17 17:27:44.314302 kernel: arm64_neon : 27070 MB/sec
Mar 17 17:27:44.314321 kernel: xor: using function: arm64_neon (27070 MB/sec)
Mar 17 17:27:44.362953 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:27:44.373124 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:27:44.387141 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:27:44.398416 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Mar 17 17:27:44.402227 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:27:44.412230 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:27:44.424438 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Mar 17 17:27:44.449954 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:27:44.459118 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:27:44.499103 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:27:44.508094 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:27:44.521625 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:27:44.524514 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:27:44.525667 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:27:44.527293 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:27:44.534077 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:27:44.544948 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Mar 17 17:27:44.560556 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 17:27:44.560657 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:27:44.560674 kernel: GPT:9289727 != 19775487
Mar 17 17:27:44.560684 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:27:44.560695 kernel: GPT:9289727 != 19775487
Mar 17 17:27:44.560705 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:27:44.560714 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:27:44.545403 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:27:44.557128 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:27:44.557233 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:27:44.559958 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:27:44.560749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:27:44.560880 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:27:44.576020 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (510)
Mar 17 17:27:44.563139 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:27:44.577227 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:27:44.580951 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (509)
Mar 17 17:27:44.591955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:27:44.596735 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:27:44.601481 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:27:44.605154 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:27:44.606138 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:27:44.611691 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:27:44.622088 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:27:44.623684 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:27:44.629536 disk-uuid[552]: Primary Header is updated.
Mar 17 17:27:44.629536 disk-uuid[552]: Secondary Entries is updated.
Mar 17 17:27:44.629536 disk-uuid[552]: Secondary Header is updated.
Mar 17 17:27:44.632948 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:27:44.647195 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:27:45.645221 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:27:45.645343 disk-uuid[553]: The operation has completed successfully.
Mar 17 17:27:45.666822 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:27:45.666918 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:27:45.690093 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:27:45.693809 sh[574]: Success
Mar 17 17:27:45.709155 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:27:45.741175 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:27:45.757412 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:27:45.761162 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:27:45.773542 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7
Mar 17 17:27:45.773593 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:27:45.773603 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:27:45.773613 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:27:45.774943 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:27:45.781538 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:27:45.782802 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:27:45.794088 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:27:45.795490 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:27:45.805300 kernel: BTRFS info (device vda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:27:45.805344 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:27:45.805355 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:27:45.808961 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:27:45.817062 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:27:45.818279 kernel: BTRFS info (device vda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:27:45.827664 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:27:45.833098 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:27:45.892306 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:27:45.904129 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:27:45.928444 ignition[677]: Ignition 2.20.0
Mar 17 17:27:45.928454 ignition[677]: Stage: fetch-offline
Mar 17 17:27:45.928489 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:27:45.928497 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:27:45.928707 ignition[677]: parsed url from cmdline: ""
Mar 17 17:27:45.928711 ignition[677]: no config URL provided
Mar 17 17:27:45.928716 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:27:45.928723 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:27:45.928756 ignition[677]: op(1): [started] loading QEMU firmware config module
Mar 17 17:27:45.928761 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 17:27:45.935004 ignition[677]: op(1): [finished] loading QEMU firmware config module
Mar 17 17:27:45.938169 systemd-networkd[766]: lo: Link UP
Mar 17 17:27:45.938178 systemd-networkd[766]: lo: Gained carrier
Mar 17 17:27:45.938840 systemd-networkd[766]: Enumeration completed
Mar 17 17:27:45.939088 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:27:45.940570 systemd[1]: Reached target network.target - Network.
Mar 17 17:27:45.942061 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:27:45.942064 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:27:45.942849 systemd-networkd[766]: eth0: Link UP
Mar 17 17:27:45.942852 systemd-networkd[766]: eth0: Gained carrier
Mar 17 17:27:45.942858 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:27:45.957967 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:27:45.981492 ignition[677]: parsing config with SHA512: 689a7db068c9b4d45e35fd8a037df9ca1b096ac58f7e3b85f69c1d5a188b9665e325d7ae055b19ff78be31c3af1854db7854612984f4d75fb3e092d518702252
Mar 17 17:27:45.988445 unknown[677]: fetched base config from "system"
Mar 17 17:27:45.988455 unknown[677]: fetched user config from "qemu"
Mar 17 17:27:45.988949 ignition[677]: fetch-offline: fetch-offline passed
Mar 17 17:27:45.989020 ignition[677]: Ignition finished successfully
Mar 17 17:27:45.990497 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:27:45.991805 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 17:27:46.000086 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:27:46.009582 ignition[772]: Ignition 2.20.0
Mar 17 17:27:46.009592 ignition[772]: Stage: kargs
Mar 17 17:27:46.009736 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:27:46.009754 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:27:46.010667 ignition[772]: kargs: kargs passed
Mar 17 17:27:46.010709 ignition[772]: Ignition finished successfully
Mar 17 17:27:46.013507 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:27:46.023134 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:27:46.032389 ignition[781]: Ignition 2.20.0
Mar 17 17:27:46.032398 ignition[781]: Stage: disks
Mar 17 17:27:46.032557 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:27:46.032567 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:27:46.035561 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:27:46.033506 ignition[781]: disks: disks passed
Mar 17 17:27:46.036504 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:27:46.033547 ignition[781]: Ignition finished successfully
Mar 17 17:27:46.037300 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:27:46.038829 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:27:46.039875 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:27:46.041257 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:27:46.050051 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:27:46.059757 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:27:46.063356 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:27:46.066147 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:27:46.107949 kernel: EXT4-fs (vda9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none.
Mar 17 17:27:46.108166 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:27:46.109198 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:27:46.125047 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:27:46.126543 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:27:46.127423 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:27:46.127461 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:27:46.127483 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:27:46.133937 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802)
Mar 17 17:27:46.132709 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:27:46.135529 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:27:46.139643 kernel: BTRFS info (device vda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:27:46.139660 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:27:46.139670 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:27:46.139679 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:27:46.141324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:27:46.174883 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:27:46.177799 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:27:46.180733 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:27:46.183626 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:27:46.250833 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:27:46.262019 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:27:46.263276 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:27:46.267959 kernel: BTRFS info (device vda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:27:46.283600 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:27:46.284972 ignition[915]: INFO : Ignition 2.20.0
Mar 17 17:27:46.284972 ignition[915]: INFO : Stage: mount
Mar 17 17:27:46.284972 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:27:46.284972 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:27:46.284972 ignition[915]: INFO : mount: mount passed
Mar 17 17:27:46.284972 ignition[915]: INFO : Ignition finished successfully
Mar 17 17:27:46.286157 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:27:46.301053 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:27:46.408496 systemd-resolved[284]: Detected conflict on linux IN A 10.0.0.72
Mar 17 17:27:46.408512 systemd-resolved[284]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Mar 17 17:27:46.772047 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:27:46.786111 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:27:46.790944 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Mar 17 17:27:46.792607 kernel: BTRFS info (device vda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:27:46.792628 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:27:46.793124 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:27:46.794941 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:27:46.796064 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:27:46.811650 ignition[946]: INFO : Ignition 2.20.0
Mar 17 17:27:46.811650 ignition[946]: INFO : Stage: files
Mar 17 17:27:46.813003 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:27:46.813003 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:27:46.813003 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:27:46.815757 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:27:46.815757 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:27:46.815757 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:27:46.815757 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:27:46.815757 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:27:46.815599 unknown[946]: wrote ssh authorized keys file for user: core
Mar 17 17:27:46.821766 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 17:27:46.821766 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 17:27:46.821766 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:27:46.821766 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 17:27:46.864290 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:27:47.040293 systemd-networkd[766]: eth0: Gained IPv6LL
Mar 17 17:27:47.137266 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:27:47.137266 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:27:47.140106 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 17:27:47.489972 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Mar 17 17:27:47.574704 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:27:47.577453 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Mar 17 17:27:47.740925 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Mar 17 17:27:48.001228 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:27:48.001228 ignition[946]: INFO : files: op(d): [started] processing unit "containerd.service"
Mar 17 17:27:48.004254 ignition[946]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 17:27:48.004254 ignition[946]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 17:27:48.004254 ignition[946]: INFO : files: op(d): [finished] processing unit "containerd.service"
Mar 17 17:27:48.004254 ignition[946]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Mar 17 17:27:48.004254 ignition[946]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:27:48.004254 ignition[946]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:27:48.004254 ignition[946]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Mar 17 17:27:48.004254 ignition[946]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Mar 17 17:27:48.004254 ignition[946]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:27:48.004254 ignition[946]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:27:48.004254 ignition[946]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Mar 17 17:27:48.004254 ignition[946]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:27:48.038499 ignition[946]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:27:48.043090 ignition[946]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:27:48.045344 ignition[946]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:27:48.045344 ignition[946]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:27:48.045344 ignition[946]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:27:48.045344 ignition[946]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:27:48.045344 ignition[946]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:27:48.045344 ignition[946]: INFO : files: files passed
Mar 17 17:27:48.045344 ignition[946]: INFO : Ignition finished successfully
Mar 17 17:27:48.046138 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:27:48.058202 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:27:48.060768 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:27:48.064987 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:27:48.065099 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:27:48.070439 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 17 17:27:48.074068 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:27:48.074068 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:27:48.076683 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:27:48.077906 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:27:48.079540 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:27:48.088095 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:27:48.109999 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:27:48.110116 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:27:48.112334 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:27:48.114061 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:27:48.115710 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:27:48.116618 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:27:48.132987 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:27:48.135691 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:27:48.147682 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:27:48.149842 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:27:48.151211 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:27:48.152866 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:27:48.153023 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:27:48.155184 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:27:48.157000 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:27:48.158442 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:27:48.159994 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:27:48.161851 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:27:48.163684 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:27:48.165416 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:27:48.167319 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:27:48.169156 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:27:48.170823 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:27:48.172290 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:27:48.172430 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:27:48.174632 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:27:48.176500 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:27:48.178363 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:27:48.178989 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:27:48.180355 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:27:48.180489 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:27:48.183022 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:27:48.183142 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:27:48.185033 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:27:48.186512 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:27:48.186618 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:27:48.188452 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:27:48.190172 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:27:48.191710 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:27:48.191815 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:27:48.193409 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:27:48.193485 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:27:48.195428 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:27:48.195548 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:27:48.197228 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:27:48.197334 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:27:48.216167 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Mar 17 17:27:48.217147 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:27:48.217299 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:27:48.223264 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:27:48.225282 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:27:48.225528 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:27:48.227195 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:27:48.227314 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:27:48.231211 ignition[1002]: INFO : Ignition 2.20.0 Mar 17 17:27:48.231211 ignition[1002]: INFO : Stage: umount Mar 17 17:27:48.231211 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:27:48.231211 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:27:48.234152 ignition[1002]: INFO : umount: umount passed Mar 17 17:27:48.234152 ignition[1002]: INFO : Ignition finished successfully Mar 17 17:27:48.244227 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:27:48.244334 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:27:48.246890 systemd[1]: Stopped target network.target - Network. Mar 17 17:27:48.247906 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:27:48.248071 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:27:48.249520 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:27:48.249576 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:27:48.251525 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:27:48.251580 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:27:48.253417 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:27:48.253475 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:27:48.255645 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:27:48.257330 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:27:48.259817 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:27:48.259985 systemd-networkd[766]: eth0: DHCPv6 lease lost Mar 17 17:27:48.260463 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:27:48.260553 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:27:48.263097 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:27:48.263191 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:27:48.265629 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:27:48.265697 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:27:48.279139 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:27:48.280138 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:27:48.280228 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:27:48.282239 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:27:48.285825 systemd[1]: systemd-resolved.service: Deactivated successfully. 
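Here Ignition runs its final stage, umount, with no base configs present. A sketch for pulling every Ignition message for this boot out of the journal by syslog identifier (the ignition[946] prefix above), assuming the initrd journal is flushed into the persistent one as logged later in this boot:

    # All Ignition output across stages (fetch, disks, files, umount) for the current boot.
    journalctl -b -t ignition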
Mar 17 17:27:48.286136 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:27:48.290106 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:27:48.290214 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:27:48.291181 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:27:48.291229 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:27:48.292553 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:27:48.292599 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:27:48.297082 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:27:48.297205 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:27:48.301482 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:27:48.301642 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:27:48.303440 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:27:48.303479 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:27:48.305592 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:27:48.305632 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:27:48.306973 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:27:48.307020 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:27:48.309193 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:27:48.309248 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:27:48.310863 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:27:48.310912 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:27:48.331159 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:27:48.332034 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:27:48.332104 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:27:48.333772 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:27:48.333818 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:27:48.335805 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:27:48.336961 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:27:48.337909 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:27:48.338005 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:27:48.340240 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:27:48.341432 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:27:48.341501 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:27:48.343878 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:27:48.354114 systemd[1]: Switching root. Mar 17 17:27:48.382974 systemd-journald[240]: Journal stopped Mar 17 17:27:49.246389 systemd-journald[240]: Received SIGTERM from PID 1 (systemd). 
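At "Switching root" PID 1 tears down the initramfs environment, the initrd journald receives SIGTERM, and execution continues from the real root filesystem. A sketch of inspecting that hand-off after boot:

    # Time spent in firmware, loader, kernel, initrd, and userspace.
    systemd-analyze time

    # Boots recorded in the journal; the initrd messages above get flushed into the current boot.
    journalctl --list-boots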
Mar 17 17:27:49.246445 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:27:49.246463 kernel: SELinux: policy capability open_perms=1 Mar 17 17:27:49.246473 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:27:49.246486 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:27:49.246495 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:27:49.246505 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:27:49.246515 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:27:49.246525 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:27:49.246535 kernel: audit: type=1403 audit(1742232468.604:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:27:49.246548 systemd[1]: Successfully loaded SELinux policy in 35.129ms. Mar 17 17:27:49.246565 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.926ms. Mar 17 17:27:49.246578 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:27:49.246589 systemd[1]: Detected virtualization kvm. Mar 17 17:27:49.246600 systemd[1]: Detected architecture arm64. Mar 17 17:27:49.246611 systemd[1]: Detected first boot. Mar 17 17:27:49.246621 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:27:49.246632 zram_generator::config[1065]: No configuration found. Mar 17 17:27:49.246644 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:27:49.246655 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:27:49.246666 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:27:49.246679 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:27:49.246690 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:27:49.246700 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:27:49.246711 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:27:49.246730 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:27:49.246742 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:27:49.247184 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:27:49.247212 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:27:49.247394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:27:49.247460 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:27:49.247473 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:27:49.247484 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:27:49.247496 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:27:49.247507 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
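The first messages from the real root are the SELinux policy load and systemd 255 detecting a first boot, initializing the machine ID from the VM UUID. A minimal sketch for checking both afterwards; sestatus assumes the SELinux userspace tools are installed:

    # Loaded policy, enforcement mode, and policy capabilities.
    sestatus

    # Machine ID generated on first boot from the VM UUID.
    cat /etc/machine-id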
Mar 17 17:27:49.247518 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 17 17:27:49.247528 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:27:49.247539 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:27:49.247554 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:27:49.247625 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:27:49.247649 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:27:49.247660 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:27:49.247670 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:27:49.247680 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:27:49.247692 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:27:49.247702 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:27:49.247724 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:27:49.247742 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:27:49.247753 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:27:49.247764 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:27:49.247775 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:27:49.247785 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:27:49.247795 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:27:49.247806 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:27:49.247816 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:27:49.247828 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:27:49.247839 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:27:49.247850 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:27:49.247861 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:27:49.247872 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:27:49.247882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:27:49.247895 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:27:49.247905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:27:49.247915 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:27:49.247938 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:27:49.247951 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:27:49.247961 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 17 17:27:49.247973 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Mar 17 17:27:49.247983 systemd[1]: Starting systemd-journald.service - Journal Service... 
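The modprobe@*.service units being started in this stretch are instances of a single template unit whose instance name is the kernel module to load. A sketch using the fuse instance from the log:

    # Template body behind the modprobe@ instances started above.
    systemctl cat modprobe@fuse.service

    # Verify the module the instance loaded.
    lsmod | grep -w fuse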
Mar 17 17:27:49.247993 kernel: ACPI: bus type drm_connector registered Mar 17 17:27:49.248004 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:27:49.248015 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:27:49.248027 kernel: fuse: init (API version 7.39) Mar 17 17:27:49.248037 kernel: loop: module loaded Mar 17 17:27:49.248047 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:27:49.248058 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:27:49.248069 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:27:49.248080 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:27:49.248090 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:27:49.248101 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:27:49.248181 systemd-journald[1144]: Collecting audit messages is disabled. Mar 17 17:27:49.248240 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:27:49.248256 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:27:49.248269 systemd-journald[1144]: Journal started Mar 17 17:27:49.248293 systemd-journald[1144]: Runtime Journal (/run/log/journal/4fde67ce3df6479188c83eda625c21f5) is 5.9M, max 47.3M, 41.4M free. Mar 17 17:27:49.250233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:27:49.253508 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:27:49.254648 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:27:49.256286 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:27:49.256476 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:27:49.257821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:27:49.258009 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:27:49.259339 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:27:49.259504 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:27:49.260831 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:27:49.261016 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:27:49.262209 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:27:49.262382 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:27:49.263774 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:27:49.263997 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:27:49.265303 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:27:49.266827 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:27:49.268243 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:27:49.282397 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:27:49.297055 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:27:49.299345 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
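systemd-journald starts with a volatile runtime journal under /run (5.9M used, 47.3M max here); it is flushed to persistent storage by systemd-journal-flush.service a few entries later. A sketch for checking and capping journal usage; the 200M figure is an arbitrary example value:

    # Current journal footprint.
    journalctl --disk-usage

    # Example drop-in capping the persistent journal at 200M.
    mkdir -p /etc/systemd/journald.conf.d
    printf '[Journal]\nSystemMaxUse=200M\n' > /etc/systemd/journald.conf.d/size.conf
    systemctl restart systemd-journald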
Mar 17 17:27:49.300487 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:27:49.303699 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:27:49.306062 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:27:49.307014 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:27:49.310178 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:27:49.311170 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:27:49.312667 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:27:49.319022 systemd-journald[1144]: Time spent on flushing to /var/log/journal/4fde67ce3df6479188c83eda625c21f5 is 14.609ms for 849 entries. Mar 17 17:27:49.319022 systemd-journald[1144]: System Journal (/var/log/journal/4fde67ce3df6479188c83eda625c21f5) is 8.0M, max 195.6M, 187.6M free. Mar 17 17:27:49.339277 systemd-journald[1144]: Received client request to flush runtime journal. Mar 17 17:27:49.317321 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:27:49.324414 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:27:49.328185 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:27:49.329609 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:27:49.336777 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:27:49.338346 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:27:49.343371 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:27:49.350702 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:27:49.352755 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:27:49.359988 udevadm[1207]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 17:27:49.365574 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Mar 17 17:27:49.365593 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Mar 17 17:27:49.371045 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:27:49.381256 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:27:49.411977 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:27:49.423314 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:27:49.436256 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Mar 17 17:27:49.436589 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Mar 17 17:27:49.441061 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:27:49.854765 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:27:49.863100 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
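The "ACLs are not supported, ignoring" lines mean tmpfiles.d entries requesting POSIX ACLs were skipped; the systemd feature string earlier in this boot shows the build carries -ACL. A sketch for auditing the merged tmpfiles configuration, including the duplicate lines warned about further down:

    # Dump the effective tmpfiles.d configuration from all search paths.
    systemd-tmpfiles --cat-config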
Mar 17 17:27:49.883473 systemd-udevd[1227]: Using default interface naming scheme 'v255'. Mar 17 17:27:49.898254 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:27:49.907134 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:27:49.931504 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:27:49.936822 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Mar 17 17:27:49.947474 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1244) Mar 17 17:27:49.996370 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:27:50.011776 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:27:50.040238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:27:50.049382 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:27:50.052995 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:27:50.066699 systemd-networkd[1237]: lo: Link UP Mar 17 17:27:50.066718 systemd-networkd[1237]: lo: Gained carrier Mar 17 17:27:50.067680 systemd-networkd[1237]: Enumeration completed Mar 17 17:27:50.067984 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:27:50.068218 systemd-networkd[1237]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:27:50.068226 systemd-networkd[1237]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:27:50.069155 systemd-networkd[1237]: eth0: Link UP Mar 17 17:27:50.069164 systemd-networkd[1237]: eth0: Gained carrier Mar 17 17:27:50.069180 systemd-networkd[1237]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:27:50.069839 lvm[1264]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:27:50.074165 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:27:50.081505 systemd-networkd[1237]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:27:50.092741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:27:50.097484 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:27:50.099112 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:27:50.114147 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:27:50.118137 lvm[1273]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:27:50.145550 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:27:50.147170 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:27:50.148482 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:27:50.148519 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:27:50.149609 systemd[1]: Reached target machines.target - Containers. 
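systemd-networkd matched eth0 against the fallback zz-default.network and obtained 10.0.0.72/16 with gateway 10.0.0.1 over DHCPv4. A sketch for inspecting the lease from the running system:

    # Per-link state: addresses, gateway, DNS, lease details.
    networkctl status eth0

    # The fallback .network file the log says eth0 matched.
    cat /usr/lib/systemd/network/zz-default.network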
Mar 17 17:27:50.152043 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 17 17:27:50.165126 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:27:50.167810 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:27:50.169070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:27:50.170106 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:27:50.172649 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 17 17:27:50.177519 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:27:50.179731 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:27:50.194114 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:27:50.195440 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:27:50.197186 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 17 17:27:50.200095 kernel: loop0: detected capacity change from 0 to 113536 Mar 17 17:27:50.212969 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:27:50.253971 kernel: loop1: detected capacity change from 0 to 116808 Mar 17 17:27:50.297960 kernel: loop2: detected capacity change from 0 to 194096 Mar 17 17:27:50.344968 kernel: loop3: detected capacity change from 0 to 113536 Mar 17 17:27:50.358950 kernel: loop4: detected capacity change from 0 to 116808 Mar 17 17:27:50.370967 kernel: loop5: detected capacity change from 0 to 194096 Mar 17 17:27:50.380247 (sd-merge)[1296]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 17 17:27:50.380743 (sd-merge)[1296]: Merged extensions into '/usr'. Mar 17 17:27:50.384631 systemd[1]: Reloading requested from client PID 1281 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:27:50.384646 systemd[1]: Reloading... Mar 17 17:27:50.432010 zram_generator::config[1324]: No configuration found. Mar 17 17:27:50.466507 ldconfig[1277]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:27:50.530304 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:27:50.572502 systemd[1]: Reloading finished in 187 ms. Mar 17 17:27:50.586892 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:27:50.588260 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:27:50.606127 systemd[1]: Starting ensure-sysext.service... Mar 17 17:27:50.608205 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:27:50.612658 systemd[1]: Reloading requested from client PID 1365 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:27:50.612677 systemd[1]: Reloading... Mar 17 17:27:50.625402 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
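(sd-merge) is systemd-sysext overlaying the three extension images onto /usr, including the kubernetes image whose /etc/extensions/kubernetes.raw link Ignition wrote at the top of this section; the loop0-loop5 capacity changes above are those images being attached. A sketch for working with the merge:

    # Merged hierarchies and the extensions backing them.
    systemd-sysext status

    # Re-scan /etc/extensions and /var/lib/extensions and re-merge after adding an image.
    systemd-sysext refresh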
Mar 17 17:27:50.625665 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:27:50.626293 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:27:50.626502 systemd-tmpfiles[1366]: ACLs are not supported, ignoring. Mar 17 17:27:50.626555 systemd-tmpfiles[1366]: ACLs are not supported, ignoring. Mar 17 17:27:50.628760 systemd-tmpfiles[1366]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:27:50.628774 systemd-tmpfiles[1366]: Skipping /boot Mar 17 17:27:50.635778 systemd-tmpfiles[1366]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:27:50.635794 systemd-tmpfiles[1366]: Skipping /boot Mar 17 17:27:50.652412 zram_generator::config[1392]: No configuration found. Mar 17 17:27:50.749579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:27:50.791751 systemd[1]: Reloading finished in 178 ms. Mar 17 17:27:50.807806 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:27:50.822453 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:27:50.825325 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:27:50.828206 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:27:50.831430 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:27:50.836149 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:27:50.843446 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:27:50.845082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:27:50.848226 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:27:50.855389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:27:50.856715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:27:50.857468 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:27:50.857653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:27:50.861850 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:27:50.863791 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:27:50.863957 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:27:50.865608 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:27:50.865832 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:27:50.874206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:27:50.881283 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:27:50.883513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:27:50.888154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
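augenrules reports no rules below, so audit-rules.service leaves the kernel audit ruleset empty. A sketch for confirming that, assuming auditctl from the audit userspace package is available:

    # Active kernel audit rules; expected output here is "No rules".
    auditctl -l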
Mar 17 17:27:50.890103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:27:50.896897 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:27:50.899892 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:27:50.904492 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:27:50.905202 augenrules[1483]: No rules Mar 17 17:27:50.905870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:27:50.911272 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:27:50.913010 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:27:50.913244 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:27:50.914405 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:27:50.914562 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:27:50.916039 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:27:50.916332 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:27:50.916531 systemd-resolved[1441]: Positive Trust Anchors: Mar 17 17:27:50.916643 systemd-resolved[1441]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:27:50.916675 systemd-resolved[1441]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:27:50.922805 systemd-resolved[1441]: Defaulting to hostname 'linux'. Mar 17 17:27:50.925472 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:27:50.926624 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:27:50.928573 systemd[1]: Reached target network.target - Network. Mar 17 17:27:50.929396 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:27:50.947215 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:27:50.948035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:27:50.949505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:27:50.951650 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:27:50.956007 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:27:50.960257 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:27:50.962119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:27:50.962277 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
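systemd-resolved loads the root zone DNSSEC trust anchor (the IN DS record above), negative trust anchors for private and special-use domains, and falls back to the hostname 'linux'. A sketch for querying its state:

    # Global and per-link resolver configuration, including DNSSEC state.
    resolvectl status

    # Resolve through systemd-resolved; example.com is a placeholder name.
    resolvectl query example.com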
Mar 17 17:27:50.963718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:27:50.963883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:27:50.965289 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:27:50.965436 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:27:50.966461 augenrules[1498]: /sbin/augenrules: No change Mar 17 17:27:50.966634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:27:50.966791 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:27:50.968312 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:27:50.968559 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:27:50.972272 systemd[1]: Finished ensure-sysext.service. Mar 17 17:27:50.975479 augenrules[1527]: No rules Mar 17 17:27:50.976341 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:27:50.976575 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:27:50.978379 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:27:50.978470 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:27:50.990107 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 17:27:51.036562 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:27:51.037602 systemd-timesyncd[1537]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 17:27:51.037664 systemd-timesyncd[1537]: Initial clock synchronization to Mon 2025-03-17 17:27:51.339349 UTC. Mar 17 17:27:51.037965 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:27:51.038838 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:27:51.039806 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:27:51.040753 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:27:51.041693 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:27:51.041735 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:27:51.042407 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:27:51.043302 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:27:51.044204 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:27:51.045097 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:27:51.046763 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:27:51.049297 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:27:51.051204 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:27:51.061051 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:27:51.061892 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:27:51.062637 systemd[1]: Reached target basic.target - Basic System. 
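systemd-timesyncd reached the NTP server advertised by DHCP (10.0.0.1:123) and stepped the clock, which is why the journal timestamps jump by roughly 0.3 s at the synchronization message. A sketch for checking synchronization afterwards:

    # Server, stratum, offset, and poll interval of the active sync.
    timedatectl timesync-status

    # Overall clock and NTP status.
    timedatectl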
Mar 17 17:27:51.063508 systemd[1]: System is tainted: cgroupsv1 Mar 17 17:27:51.063561 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:27:51.063583 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:27:51.065029 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:27:51.067173 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:27:51.069288 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:27:51.073127 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:27:51.074205 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:27:51.077566 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:27:51.082116 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:27:51.085044 jq[1543]: false Mar 17 17:27:51.090210 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:27:51.094511 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:27:51.102066 extend-filesystems[1545]: Found loop3 Mar 17 17:27:51.102066 extend-filesystems[1545]: Found loop4 Mar 17 17:27:51.102066 extend-filesystems[1545]: Found loop5 Mar 17 17:27:51.102066 extend-filesystems[1545]: Found vda Mar 17 17:27:51.102066 extend-filesystems[1545]: Found vda1 Mar 17 17:27:51.102066 extend-filesystems[1545]: Found vda2 Mar 17 17:27:51.102066 extend-filesystems[1545]: Found vda3 Mar 17 17:27:51.102066 extend-filesystems[1545]: Found usr Mar 17 17:27:51.102066 extend-filesystems[1545]: Found vda4 Mar 17 17:27:51.102066 extend-filesystems[1545]: Found vda6 Mar 17 17:27:51.102066 extend-filesystems[1545]: Found vda7 Mar 17 17:27:51.102066 extend-filesystems[1545]: Found vda9 Mar 17 17:27:51.102066 extend-filesystems[1545]: Checking size of /dev/vda9 Mar 17 17:27:51.134267 extend-filesystems[1545]: Resized partition /dev/vda9 Mar 17 17:27:51.102997 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:27:51.114389 dbus-daemon[1542]: [system] SELinux support is enabled Mar 17 17:27:51.145994 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1235) Mar 17 17:27:51.146024 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 17:27:51.112733 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:27:51.146224 extend-filesystems[1571]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:27:51.118121 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:27:51.122427 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:27:51.125436 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:27:51.146081 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:27:51.146352 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:27:51.146667 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:27:51.146909 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
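extend-filesystems enumerates the block devices and grows the root filesystem on /dev/vda9 online, from 553472 to 1864699 4 KiB blocks, i.e. roughly 2.1 GiB to 7.1 GiB (1864699 x 4096 bytes). A sketch of the equivalent manual steps, assuming the partition itself has already been enlarged:

    # Sizes before/after.
    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/vda

    # Online-grow the mounted ext4 filesystem to fill its partition.
    resize2fs /dev/vda9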
Mar 17 17:27:51.153416 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:27:51.153675 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:27:51.166315 jq[1569]: true Mar 17 17:27:51.167891 (ntainerd)[1575]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:27:51.186709 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:27:51.186748 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:27:51.187334 jq[1583]: true Mar 17 17:27:51.188356 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:27:51.188388 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:27:51.191229 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 17:27:51.194606 tar[1573]: linux-arm64/helm Mar 17 17:27:51.195819 systemd-logind[1559]: New seat seat0. Mar 17 17:27:51.198357 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:27:51.203965 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 17:27:51.339670 update_engine[1565]: I20250317 17:27:51.206749 1565 main.cc:92] Flatcar Update Engine starting Mar 17 17:27:51.339670 update_engine[1565]: I20250317 17:27:51.210942 1565 update_check_scheduler.cc:74] Next update check in 3m33s Mar 17 17:27:51.209130 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:27:51.211583 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:27:51.220456 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:27:51.341090 extend-filesystems[1571]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:27:51.341090 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:27:51.341090 extend-filesystems[1571]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 17:27:51.350092 extend-filesystems[1545]: Resized filesystem in /dev/vda9 Mar 17 17:27:51.351480 bash[1603]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:27:51.341989 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:27:51.342237 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:27:51.353118 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:27:51.356614 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:27:51.377546 locksmithd[1599]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:27:51.464904 containerd[1575]: time="2025-03-17T17:27:51.464738240Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:27:51.499059 containerd[1575]: time="2025-03-17T17:27:51.498792320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 17 17:27:51.500425 containerd[1575]: time="2025-03-17T17:27:51.500380400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:27:51.500425 containerd[1575]: time="2025-03-17T17:27:51.500423440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:27:51.500511 containerd[1575]: time="2025-03-17T17:27:51.500443800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:27:51.500700 containerd[1575]: time="2025-03-17T17:27:51.500605720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:27:51.500700 containerd[1575]: time="2025-03-17T17:27:51.500631600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:27:51.500700 containerd[1575]: time="2025-03-17T17:27:51.500688080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:27:51.500785 containerd[1575]: time="2025-03-17T17:27:51.500711600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:27:51.500970 containerd[1575]: time="2025-03-17T17:27:51.500922040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:27:51.500970 containerd[1575]: time="2025-03-17T17:27:51.500965640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:27:51.501049 containerd[1575]: time="2025-03-17T17:27:51.500981040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:27:51.501049 containerd[1575]: time="2025-03-17T17:27:51.500991320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:27:51.501085 containerd[1575]: time="2025-03-17T17:27:51.501066360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:27:51.501303 containerd[1575]: time="2025-03-17T17:27:51.501266240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:27:51.501471 containerd[1575]: time="2025-03-17T17:27:51.501434680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:27:51.501471 containerd[1575]: time="2025-03-17T17:27:51.501456880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 17 17:27:51.501560 containerd[1575]: time="2025-03-17T17:27:51.501543200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:27:51.501606 containerd[1575]: time="2025-03-17T17:27:51.501594440Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:27:51.507310 containerd[1575]: time="2025-03-17T17:27:51.507271520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:27:51.507418 containerd[1575]: time="2025-03-17T17:27:51.507335120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:27:51.507418 containerd[1575]: time="2025-03-17T17:27:51.507351240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:27:51.507418 containerd[1575]: time="2025-03-17T17:27:51.507376360Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:27:51.507418 containerd[1575]: time="2025-03-17T17:27:51.507395480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:27:51.507590 containerd[1575]: time="2025-03-17T17:27:51.507572000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:27:51.507921 containerd[1575]: time="2025-03-17T17:27:51.507903440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:27:51.508067 containerd[1575]: time="2025-03-17T17:27:51.508036920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:27:51.508067 containerd[1575]: time="2025-03-17T17:27:51.508059680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:27:51.508112 containerd[1575]: time="2025-03-17T17:27:51.508074920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:27:51.508112 containerd[1575]: time="2025-03-17T17:27:51.508087960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:27:51.508112 containerd[1575]: time="2025-03-17T17:27:51.508100920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:27:51.508160 containerd[1575]: time="2025-03-17T17:27:51.508113280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:27:51.508160 containerd[1575]: time="2025-03-17T17:27:51.508126240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:27:51.508160 containerd[1575]: time="2025-03-17T17:27:51.508141640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:27:51.508160 containerd[1575]: time="2025-03-17T17:27:51.508154120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:27:51.508241 containerd[1575]: time="2025-03-17T17:27:51.508166240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Mar 17 17:27:51.508241 containerd[1575]: time="2025-03-17T17:27:51.508179560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:27:51.508241 containerd[1575]: time="2025-03-17T17:27:51.508200000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508241 containerd[1575]: time="2025-03-17T17:27:51.508213600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508241 containerd[1575]: time="2025-03-17T17:27:51.508225200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508241 containerd[1575]: time="2025-03-17T17:27:51.508237080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508343 containerd[1575]: time="2025-03-17T17:27:51.508249440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508343 containerd[1575]: time="2025-03-17T17:27:51.508263640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508343 containerd[1575]: time="2025-03-17T17:27:51.508275840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508343 containerd[1575]: time="2025-03-17T17:27:51.508289000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508343 containerd[1575]: time="2025-03-17T17:27:51.508309400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508343 containerd[1575]: time="2025-03-17T17:27:51.508324960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508343 containerd[1575]: time="2025-03-17T17:27:51.508336560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508457 containerd[1575]: time="2025-03-17T17:27:51.508348000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508457 containerd[1575]: time="2025-03-17T17:27:51.508361760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508457 containerd[1575]: time="2025-03-17T17:27:51.508376800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:27:51.508457 containerd[1575]: time="2025-03-17T17:27:51.508398720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508457 containerd[1575]: time="2025-03-17T17:27:51.508412480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508457 containerd[1575]: time="2025-03-17T17:27:51.508423520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:27:51.508659 containerd[1575]: time="2025-03-17T17:27:51.508616600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Mar 17 17:27:51.508659 containerd[1575]: time="2025-03-17T17:27:51.508640680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:27:51.508659 containerd[1575]: time="2025-03-17T17:27:51.508652160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:27:51.508740 containerd[1575]: time="2025-03-17T17:27:51.508664280Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:27:51.508740 containerd[1575]: time="2025-03-17T17:27:51.508673520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.508740 containerd[1575]: time="2025-03-17T17:27:51.508685280Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:27:51.508740 containerd[1575]: time="2025-03-17T17:27:51.508704440Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:27:51.508740 containerd[1575]: time="2025-03-17T17:27:51.508715600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:27:51.509116 containerd[1575]: time="2025-03-17T17:27:51.509064000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:27:51.509116 containerd[1575]: time="2025-03-17T17:27:51.509117800Z" level=info msg="Connect containerd service" Mar 17 17:27:51.509250 containerd[1575]: time="2025-03-17T17:27:51.509155560Z" level=info msg="using legacy CRI server" Mar 17 17:27:51.509250 containerd[1575]: time="2025-03-17T17:27:51.509162680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:27:51.509411 containerd[1575]: time="2025-03-17T17:27:51.509393200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:27:51.513018 containerd[1575]: time="2025-03-17T17:27:51.512980160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:27:51.513596 containerd[1575]: time="2025-03-17T17:27:51.513543000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:27:51.513596 containerd[1575]: time="2025-03-17T17:27:51.513594400Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:27:51.513707 containerd[1575]: time="2025-03-17T17:27:51.513549360Z" level=info msg="Start subscribing containerd event" Mar 17 17:27:51.513707 containerd[1575]: time="2025-03-17T17:27:51.513659880Z" level=info msg="Start recovering state" Mar 17 17:27:51.513749 containerd[1575]: time="2025-03-17T17:27:51.513742880Z" level=info msg="Start event monitor" Mar 17 17:27:51.513785 containerd[1575]: time="2025-03-17T17:27:51.513755680Z" level=info msg="Start snapshots syncer" Mar 17 17:27:51.513785 containerd[1575]: time="2025-03-17T17:27:51.513770360Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:27:51.513785 containerd[1575]: time="2025-03-17T17:27:51.513778120Z" level=info msg="Start streaming server" Mar 17 17:27:51.513935 containerd[1575]: time="2025-03-17T17:27:51.513912920Z" level=info msg="containerd successfully booted in 0.050262s" Mar 17 17:27:51.514018 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:27:51.655965 tar[1573]: linux-arm64/LICENSE Mar 17 17:27:51.656530 tar[1573]: linux-arm64/README.md Mar 17 17:27:51.673672 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:27:51.778103 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:27:51.797726 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:27:51.813244 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:27:51.820027 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:27:51.820289 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:27:51.823115 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:27:51.836958 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
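At this point containerd has booted and is serving on both /run/containerd/containerd.sock.ttrpc and /run/containerd/containerd.sock. A minimal sketch of querying the daemon from Go, assuming the github.com/containerd/containerd client package and the "k8s.io" namespace (the namespace is an assumption here; it is containerd's conventional one for Kubernetes workloads):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the grpc socket the daemon reports serving on above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes images and containers under this namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	v, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Println("containerd", v.Version, v.Revision)
}
```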
Mar 17 17:27:51.842055 systemd-networkd[1237]: eth0: Gained IPv6LL Mar 17 17:27:51.854411 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:27:51.856920 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 17 17:27:51.858070 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:27:51.859525 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:27:51.861510 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:27:51.864238 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:27:51.866865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:27:51.869226 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:27:51.889100 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:27:51.889348 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:27:51.891307 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:27:51.900917 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:27:52.382174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:27:52.383515 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:27:52.384807 systemd[1]: Startup finished in 5.464s (kernel) + 3.815s (userspace) = 9.279s. Mar 17 17:27:52.387684 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:27:53.014169 kubelet[1680]: E0317 17:27:53.014071 1680 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:27:53.017016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:27:53.017255 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:27:56.443556 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:27:56.456228 systemd[1]: Started sshd@0-10.0.0.72:22-10.0.0.1:46780.service - OpenSSH per-connection server daemon (10.0.0.1:46780). Mar 17 17:27:56.523350 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 46780 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:27:56.524904 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:56.532375 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:27:56.548308 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:27:56.550093 systemd-logind[1559]: New session 1 of user core. Mar 17 17:27:56.558378 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:27:56.560901 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:27:56.568389 (systemd)[1700]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:27:56.638990 systemd[1700]: Queued start job for default target default.target. Mar 17 17:27:56.639692 systemd[1700]: Created slice app.slice - User Application Slice. 
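The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is only written by `kubeadm init`/`kubeadm join`, so a crash-loop before the first join is expected rather than a fault. A hedged sketch of what generating such a file could look like, assuming the k8s.io/kubelet/config/v1beta1 types and sigs.k8s.io/yaml; the field values are hypothetical, not this node's eventual configuration:

```go
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Hypothetical minimal kubelet config; the real
	// /var/lib/kubelet/config.yaml is written by kubeadm during init/join.
	cfg := kubeletconfig.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		StaticPodPath: "/etc/kubernetes/manifests",
	}

	out, err := yaml.Marshal(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```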
Mar 17 17:27:56.639721 systemd[1700]: Reached target paths.target - Paths. Mar 17 17:27:56.639733 systemd[1700]: Reached target timers.target - Timers. Mar 17 17:27:56.647083 systemd[1700]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:27:56.653385 systemd[1700]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:27:56.653455 systemd[1700]: Reached target sockets.target - Sockets. Mar 17 17:27:56.653475 systemd[1700]: Reached target basic.target - Basic System. Mar 17 17:27:56.653519 systemd[1700]: Reached target default.target - Main User Target. Mar 17 17:27:56.653545 systemd[1700]: Startup finished in 79ms. Mar 17 17:27:56.653886 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:27:56.655398 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:27:56.715227 systemd[1]: Started sshd@1-10.0.0.72:22-10.0.0.1:46782.service - OpenSSH per-connection server daemon (10.0.0.1:46782). Mar 17 17:27:56.755373 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 46782 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:27:56.756676 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:56.760490 systemd-logind[1559]: New session 2 of user core. Mar 17 17:27:56.770334 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:27:56.823997 sshd[1715]: Connection closed by 10.0.0.1 port 46782 Mar 17 17:27:56.823989 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:56.834284 systemd[1]: Started sshd@2-10.0.0.72:22-10.0.0.1:46786.service - OpenSSH per-connection server daemon (10.0.0.1:46786). Mar 17 17:27:56.834714 systemd[1]: sshd@1-10.0.0.72:22-10.0.0.1:46782.service: Deactivated successfully. Mar 17 17:27:56.836699 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:27:56.837319 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:27:56.838884 systemd-logind[1559]: Removed session 2. Mar 17 17:27:56.874406 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 46786 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:27:56.875758 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:56.880053 systemd-logind[1559]: New session 3 of user core. Mar 17 17:27:56.890234 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:27:56.941532 sshd[1723]: Connection closed by 10.0.0.1 port 46786 Mar 17 17:27:56.942087 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:56.950264 systemd[1]: Started sshd@3-10.0.0.72:22-10.0.0.1:46792.service - OpenSSH per-connection server daemon (10.0.0.1:46792). Mar 17 17:27:56.950662 systemd[1]: sshd@2-10.0.0.72:22-10.0.0.1:46786.service: Deactivated successfully. Mar 17 17:27:56.953411 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:27:56.954061 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:27:56.955071 systemd-logind[1559]: Removed session 3. Mar 17 17:27:56.990523 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 46792 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:27:56.991810 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:56.996864 systemd-logind[1559]: New session 4 of user core. 
Mar 17 17:27:57.008302 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:27:57.062996 sshd[1731]: Connection closed by 10.0.0.1 port 46792 Mar 17 17:27:57.063009 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:57.091285 systemd[1]: Started sshd@4-10.0.0.72:22-10.0.0.1:46808.service - OpenSSH per-connection server daemon (10.0.0.1:46808). Mar 17 17:27:57.091750 systemd[1]: sshd@3-10.0.0.72:22-10.0.0.1:46792.service: Deactivated successfully. Mar 17 17:27:57.093396 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:27:57.094029 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:27:57.095350 systemd-logind[1559]: Removed session 4. Mar 17 17:27:57.131725 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 46808 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:27:57.133191 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:57.137506 systemd-logind[1559]: New session 5 of user core. Mar 17 17:27:57.152277 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:27:57.214251 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:27:57.214543 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:27:57.235102 sudo[1740]: pam_unix(sudo:session): session closed for user root Mar 17 17:27:57.238050 sshd[1739]: Connection closed by 10.0.0.1 port 46808 Mar 17 17:27:57.237915 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:57.256270 systemd[1]: Started sshd@5-10.0.0.72:22-10.0.0.1:46820.service - OpenSSH per-connection server daemon (10.0.0.1:46820). Mar 17 17:27:57.256701 systemd[1]: sshd@4-10.0.0.72:22-10.0.0.1:46808.service: Deactivated successfully. Mar 17 17:27:57.258681 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:27:57.259441 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:27:57.260857 systemd-logind[1559]: Removed session 5. Mar 17 17:27:57.297751 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 46820 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:27:57.299283 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:57.303780 systemd-logind[1559]: New session 6 of user core. Mar 17 17:27:57.317250 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:27:57.370935 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:27:57.371610 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:27:57.374921 sudo[1750]: pam_unix(sudo:session): session closed for user root Mar 17 17:27:57.380485 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:27:57.380777 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:27:57.400274 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:27:57.426242 augenrules[1772]: No rules Mar 17 17:27:57.427465 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:27:57.427729 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
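The repeated accept/open/close cycle in these entries is ordinary publickey SSH: sshd accepts the RSA key for user core, pam_unix opens a session scope under user-500.slice, and the client disconnects. A minimal client-side sketch of the same handshake, assuming golang.org/x/crypto/ssh and a hypothetical private-key path:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/core/.ssh/id_rsa") // hypothetical key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only; pin the host key in practice
	}

	client, err := ssh.Dial("tcp", "10.0.0.72:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s", out)
}
```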
Mar 17 17:27:57.429148 sudo[1749]: pam_unix(sudo:session): session closed for user root Mar 17 17:27:57.430424 sshd[1748]: Connection closed by 10.0.0.1 port 46820 Mar 17 17:27:57.431021 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:57.441227 systemd[1]: Started sshd@6-10.0.0.72:22-10.0.0.1:46836.service - OpenSSH per-connection server daemon (10.0.0.1:46836). Mar 17 17:27:57.441652 systemd[1]: sshd@5-10.0.0.72:22-10.0.0.1:46820.service: Deactivated successfully. Mar 17 17:27:57.444333 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:27:57.445022 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:27:57.445970 systemd-logind[1559]: Removed session 6. Mar 17 17:27:57.485179 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 46836 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:27:57.486547 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:57.490540 systemd-logind[1559]: New session 7 of user core. Mar 17 17:27:57.500257 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:27:57.554341 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:27:57.554653 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:27:57.925182 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:27:57.925321 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:27:58.179301 dockerd[1806]: time="2025-03-17T17:27:58.179173304Z" level=info msg="Starting up" Mar 17 17:27:58.461473 dockerd[1806]: time="2025-03-17T17:27:58.461359284Z" level=info msg="Loading containers: start." Mar 17 17:27:58.597982 kernel: Initializing XFRM netlink socket Mar 17 17:27:58.669869 systemd-networkd[1237]: docker0: Link UP Mar 17 17:27:58.707411 dockerd[1806]: time="2025-03-17T17:27:58.707354178Z" level=info msg="Loading containers: done." Mar 17 17:27:58.722672 dockerd[1806]: time="2025-03-17T17:27:58.722569622Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:27:58.722804 dockerd[1806]: time="2025-03-17T17:27:58.722675098Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:27:58.722804 dockerd[1806]: time="2025-03-17T17:27:58.722783910Z" level=info msg="Daemon has completed initialization" Mar 17 17:27:58.750844 dockerd[1806]: time="2025-03-17T17:27:58.750728029Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:27:58.750977 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:27:59.601462 containerd[1575]: time="2025-03-17T17:27:59.601413184Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 17:28:00.214644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2241805241.mount: Deactivated successfully. 
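dockerd has now initialized and reports "API listen on /run/docker.sock". A small sketch of talking to that daemon with the Docker Engine Go SDK (github.com/docker/docker/client, an assumed dependency):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to the default docker.sock when DOCKER_HOST is
	// unset; version negotiation avoids API-version mismatches.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	v, err := cli.ServerVersion(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker", v.Version, "api", v.APIVersion, v.Os+"/"+v.Arch)
}
```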
Mar 17 17:28:01.250674 containerd[1575]: time="2025-03-17T17:28:01.250628226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:01.252349 containerd[1575]: time="2025-03-17T17:28:01.252268869Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793526" Mar 17 17:28:01.252963 containerd[1575]: time="2025-03-17T17:28:01.252910708Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:01.256977 containerd[1575]: time="2025-03-17T17:28:01.256725297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:01.257562 containerd[1575]: time="2025-03-17T17:28:01.257528223Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 1.655654097s" Mar 17 17:28:01.257613 containerd[1575]: time="2025-03-17T17:28:01.257563094Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 17 17:28:01.276707 containerd[1575]: time="2025-03-17T17:28:01.276639232Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 17:28:02.535518 containerd[1575]: time="2025-03-17T17:28:02.535442659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:02.537829 containerd[1575]: time="2025-03-17T17:28:02.537780679Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861169" Mar 17 17:28:02.539049 containerd[1575]: time="2025-03-17T17:28:02.538988106Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:02.542640 containerd[1575]: time="2025-03-17T17:28:02.542596007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:02.543912 containerd[1575]: time="2025-03-17T17:28:02.543861849Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 1.267184927s" Mar 17 17:28:02.543912 containerd[1575]: time="2025-03-17T17:28:02.543895136Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 17 
17:28:02.562586 containerd[1575]: time="2025-03-17T17:28:02.562540677Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 17:28:03.267650 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:28:03.278137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:28:03.371162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:28:03.375209 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:28:03.415799 kubelet[2092]: E0317 17:28:03.415705 2092 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:28:03.419135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:28:03.419316 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:28:03.687885 containerd[1575]: time="2025-03-17T17:28:03.687759858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:03.688781 containerd[1575]: time="2025-03-17T17:28:03.688242736Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264638" Mar 17 17:28:03.689427 containerd[1575]: time="2025-03-17T17:28:03.689391951Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:03.692538 containerd[1575]: time="2025-03-17T17:28:03.692495031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:03.693797 containerd[1575]: time="2025-03-17T17:28:03.693761980Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 1.131179341s" Mar 17 17:28:03.693843 containerd[1575]: time="2025-03-17T17:28:03.693796437Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 17 17:28:03.712641 containerd[1575]: time="2025-03-17T17:28:03.712560408Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:28:04.651612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231101709.mount: Deactivated successfully. 
Mar 17 17:28:05.029666 containerd[1575]: time="2025-03-17T17:28:05.029546034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:05.030544 containerd[1575]: time="2025-03-17T17:28:05.030505694Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771850" Mar 17 17:28:05.031657 containerd[1575]: time="2025-03-17T17:28:05.031605761Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:05.033387 containerd[1575]: time="2025-03-17T17:28:05.033330474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:05.034102 containerd[1575]: time="2025-03-17T17:28:05.034072214Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.321460503s" Mar 17 17:28:05.034152 containerd[1575]: time="2025-03-17T17:28:05.034108815Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 17 17:28:05.055375 containerd[1575]: time="2025-03-17T17:28:05.055272943Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:28:05.574731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522261734.mount: Deactivated successfully. 
Mar 17 17:28:06.240573 containerd[1575]: time="2025-03-17T17:28:06.240523800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:06.241638 containerd[1575]: time="2025-03-17T17:28:06.241345353Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Mar 17 17:28:06.242546 containerd[1575]: time="2025-03-17T17:28:06.242513672Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:06.245929 containerd[1575]: time="2025-03-17T17:28:06.245897205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:06.247180 containerd[1575]: time="2025-03-17T17:28:06.247136374Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.191759109s" Mar 17 17:28:06.247180 containerd[1575]: time="2025-03-17T17:28:06.247166991Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 17:28:06.266248 containerd[1575]: time="2025-03-17T17:28:06.266200297Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 17:28:06.889456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269325605.mount: Deactivated successfully. 
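Each "Pulled image" entry records the same image twice: once as a mutable repo tag and once as a content-addressed repo digest. A sketch of telling the two reference forms apart, assuming the github.com/distribution/reference parser (an assumption; containerd uses a parser of the same reference grammar), with the pause refs taken from the log:

```go
package main

import (
	"fmt"
	"log"

	"github.com/distribution/reference"
)

func main() {
	for _, s := range []string{
		"registry.k8s.io/pause:3.9",
		"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	} {
		ref, err := reference.ParseNormalizedNamed(s)
		if err != nil {
			log.Fatal(err)
		}
		switch r := ref.(type) {
		case reference.Canonical:
			// Content-addressed: always resolves to the same bytes.
			fmt.Println("digest ref:", reference.FamiliarName(r), r.Digest())
		case reference.Tagged:
			// Mutable: the registry may repoint the tag over time.
			fmt.Println("tag ref:   ", reference.FamiliarName(r), r.Tag())
		}
	}
}
```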
Mar 17 17:28:06.894695 containerd[1575]: time="2025-03-17T17:28:06.894644001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:06.895408 containerd[1575]: time="2025-03-17T17:28:06.895345056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Mar 17 17:28:06.896034 containerd[1575]: time="2025-03-17T17:28:06.896007046Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:06.898449 containerd[1575]: time="2025-03-17T17:28:06.898407009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:06.899757 containerd[1575]: time="2025-03-17T17:28:06.899719924Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 633.476377ms" Mar 17 17:28:06.899799 containerd[1575]: time="2025-03-17T17:28:06.899756576Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 17 17:28:06.918718 containerd[1575]: time="2025-03-17T17:28:06.918672806Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 17:28:07.393955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1992576863.mount: Deactivated successfully. Mar 17 17:28:08.957971 containerd[1575]: time="2025-03-17T17:28:08.957910537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:08.958484 containerd[1575]: time="2025-03-17T17:28:08.958434702Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Mar 17 17:28:08.959384 containerd[1575]: time="2025-03-17T17:28:08.959339595Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:08.964193 containerd[1575]: time="2025-03-17T17:28:08.962864147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:08.964193 containerd[1575]: time="2025-03-17T17:28:08.963994079Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.045272531s" Mar 17 17:28:08.964193 containerd[1575]: time="2025-03-17T17:28:08.964038315Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 17 17:28:13.608283 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 17 17:28:13.618172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:28:13.627282 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:28:13.627358 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:28:13.627608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:28:13.630712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:28:13.646638 systemd[1]: Reloading requested from client PID 2320 ('systemctl') (unit session-7.scope)... Mar 17 17:28:13.646655 systemd[1]: Reloading... Mar 17 17:28:13.707085 zram_generator::config[2359]: No configuration found. Mar 17 17:28:13.809287 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:28:13.858544 systemd[1]: Reloading finished in 211 ms. Mar 17 17:28:13.906791 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:28:13.906855 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:28:13.907222 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:28:13.909659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:28:14.006492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:28:14.010684 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:28:14.053824 kubelet[2417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:28:14.053824 kubelet[2417]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:28:14.053824 kubelet[2417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:28:14.054828 kubelet[2417]: I0317 17:28:14.054764 2417 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:28:14.480799 kubelet[2417]: I0317 17:28:14.480754 2417 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:28:14.480799 kubelet[2417]: I0317 17:28:14.480787 2417 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:28:14.481030 kubelet[2417]: I0317 17:28:14.481014 2417 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:28:14.504973 kubelet[2417]: E0317 17:28:14.504904 2417 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.72:6443: connect: connection refused Mar 17 17:28:14.505541 kubelet[2417]: I0317 17:28:14.505499 2417 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:28:14.519170 kubelet[2417]: I0317 17:28:14.519138 2417 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:28:14.520427 kubelet[2417]: I0317 17:28:14.520370 2417 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:28:14.520592 kubelet[2417]: I0317 17:28:14.520422 2417 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:28:14.520683 kubelet[2417]: I0317 17:28:14.520658 2417 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:28:14.520683 kubelet[2417]: I0317 17:28:14.520669 2417 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:28:14.520939 kubelet[2417]: I0317 17:28:14.520914 2417 state_mem.go:36] "Initialized new in-memory state store" Mar 17 
17:28:14.524065 kubelet[2417]: I0317 17:28:14.524035 2417 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:28:14.524089 kubelet[2417]: I0317 17:28:14.524065 2417 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:28:14.524278 kubelet[2417]: I0317 17:28:14.524267 2417 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:28:14.525286 kubelet[2417]: I0317 17:28:14.524365 2417 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:28:14.527230 kubelet[2417]: W0317 17:28:14.525796 2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Mar 17 17:28:14.527230 kubelet[2417]: E0317 17:28:14.525863 2417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Mar 17 17:28:14.527230 kubelet[2417]: W0317 17:28:14.526188 2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Mar 17 17:28:14.527230 kubelet[2417]: E0317 17:28:14.526231 2417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Mar 17 17:28:14.527379 kubelet[2417]: I0317 17:28:14.527305 2417 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:28:14.527844 kubelet[2417]: I0317 17:28:14.527773 2417 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:28:14.528212 kubelet[2417]: W0317 17:28:14.527944 2417 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
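The "connection refused" dials against 10.0.0.72:6443 here are the bootstrap chicken-and-egg, not a network fault: the kubelet's informers need the API server, but the API server is itself about to be started by this kubelet from the static pod path /etc/kubernetes/manifests logged above (the RunPodSandbox entries for kube-apiserver-localhost further down are exactly that). A small probe sketch of both facts, with the address taken from the log:

```go
package main

import (
	"fmt"
	"net"
	"path/filepath"
	"time"
)

func main() {
	// The same TCP dial the kubelet's reflectors keep retrying; nothing
	// listens on 6443 until the kube-apiserver static pod is running.
	conn, err := net.DialTimeout("tcp", "10.0.0.72:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
	} else {
		conn.Close()
		fmt.Println("apiserver port is open")
	}

	// The manifests the kubelet uses to start the control plane itself.
	manifests, err := filepath.Glob("/etc/kubernetes/manifests/*.yaml")
	if err != nil {
		fmt.Println("glob:", err)
	}
	fmt.Println("static pod manifests:", manifests)
}
```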
Mar 17 17:28:14.529017 kubelet[2417]: I0317 17:28:14.528988 2417 server.go:1264] "Started kubelet" Mar 17 17:28:14.529574 kubelet[2417]: I0317 17:28:14.529518 2417 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:28:14.529891 kubelet[2417]: I0317 17:28:14.529821 2417 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:28:14.530274 kubelet[2417]: I0317 17:28:14.530240 2417 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:28:14.533795 kubelet[2417]: I0317 17:28:14.532146 2417 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:28:14.533795 kubelet[2417]: E0317 17:28:14.532697 2417 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.72:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.72:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da739188a921e default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:28:14.528959006 +0000 UTC m=+0.515109695,LastTimestamp:2025-03-17 17:28:14.528959006 +0000 UTC m=+0.515109695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:28:14.535042 kubelet[2417]: I0317 17:28:14.534189 2417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:28:14.535042 kubelet[2417]: I0317 17:28:14.534565 2417 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:28:14.535042 kubelet[2417]: I0317 17:28:14.535011 2417 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:28:14.535268 kubelet[2417]: I0317 17:28:14.535244 2417 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:28:14.540617 kubelet[2417]: W0317 17:28:14.535435 2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Mar 17 17:28:14.540617 kubelet[2417]: E0317 17:28:14.535490 2417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Mar 17 17:28:14.540617 kubelet[2417]: E0317 17:28:14.535703 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="200ms" Mar 17 17:28:14.543785 kubelet[2417]: E0317 17:28:14.542495 2417 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:28:14.544661 kubelet[2417]: I0317 17:28:14.544623 2417 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:28:14.544771 kubelet[2417]: I0317 17:28:14.544744 2417 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:28:14.546749 kubelet[2417]: I0317 17:28:14.546719 2417 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:28:14.553126 kubelet[2417]: I0317 17:28:14.553075 2417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:28:14.555949 kubelet[2417]: I0317 17:28:14.554194 2417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:28:14.555949 kubelet[2417]: I0317 17:28:14.554359 2417 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:28:14.555949 kubelet[2417]: I0317 17:28:14.554379 2417 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:28:14.555949 kubelet[2417]: E0317 17:28:14.554434 2417 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:28:14.558353 kubelet[2417]: W0317 17:28:14.558117 2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Mar 17 17:28:14.558425 kubelet[2417]: E0317 17:28:14.558368 2417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Mar 17 17:28:14.568515 kubelet[2417]: I0317 17:28:14.568483 2417 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:28:14.568515 kubelet[2417]: I0317 17:28:14.568505 2417 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:28:14.568515 kubelet[2417]: I0317 17:28:14.568524 2417 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:28:14.570774 kubelet[2417]: I0317 17:28:14.570751 2417 policy_none.go:49] "None policy: Start" Mar 17 17:28:14.571426 kubelet[2417]: I0317 17:28:14.571404 2417 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:28:14.571426 kubelet[2417]: I0317 17:28:14.571427 2417 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:28:14.579162 kubelet[2417]: I0317 17:28:14.578025 2417 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:28:14.579162 kubelet[2417]: I0317 17:28:14.578214 2417 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:28:14.579162 kubelet[2417]: I0317 17:28:14.578337 2417 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:28:14.580200 kubelet[2417]: E0317 17:28:14.580183 2417 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 17:28:14.636921 kubelet[2417]: I0317 17:28:14.636889 2417 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" Mar 17 17:28:14.637254 kubelet[2417]: E0317 17:28:14.637220 2417 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Mar 17 17:28:14.654605 kubelet[2417]: I0317 17:28:14.654550 2417 topology_manager.go:215] "Topology Admit Handler" podUID="886a4e9213c51a217bcf8874d1a81f9c" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 17:28:14.655733 kubelet[2417]: I0317 17:28:14.655707 2417 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 17:28:14.656603 kubelet[2417]: I0317 17:28:14.656574 2417 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 17:28:14.736197 kubelet[2417]: I0317 17:28:14.736028 2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:28:14.736197 kubelet[2417]: I0317 17:28:14.736075 2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:28:14.736197 kubelet[2417]: I0317 17:28:14.736098 2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:28:14.736197 kubelet[2417]: I0317 17:28:14.736115 2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/886a4e9213c51a217bcf8874d1a81f9c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"886a4e9213c51a217bcf8874d1a81f9c\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:28:14.736197 kubelet[2417]: E0317 17:28:14.736119 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="400ms" Mar 17 17:28:14.736381 kubelet[2417]: I0317 17:28:14.736131 2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/886a4e9213c51a217bcf8874d1a81f9c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"886a4e9213c51a217bcf8874d1a81f9c\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:28:14.736381 kubelet[2417]: I0317 17:28:14.736172 2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:28:14.736381 kubelet[2417]: I0317 17:28:14.736192 2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:28:14.736381 kubelet[2417]: I0317 17:28:14.736213 2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:28:14.736381 kubelet[2417]: I0317 17:28:14.736229 2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/886a4e9213c51a217bcf8874d1a81f9c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"886a4e9213c51a217bcf8874d1a81f9c\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:28:14.838766 kubelet[2417]: I0317 17:28:14.838736 2417 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:28:14.839108 kubelet[2417]: E0317 17:28:14.839080 2417 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Mar 17 17:28:14.959844 kubelet[2417]: E0317 17:28:14.959813 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:14.960575 containerd[1575]: time="2025-03-17T17:28:14.960525844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:886a4e9213c51a217bcf8874d1a81f9c,Namespace:kube-system,Attempt:0,}" Mar 17 17:28:14.961167 kubelet[2417]: E0317 17:28:14.961144 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:14.961524 containerd[1575]: time="2025-03-17T17:28:14.961496341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 17 17:28:14.964845 kubelet[2417]: E0317 17:28:14.964822 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:14.965505 containerd[1575]: time="2025-03-17T17:28:14.965261894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 17 17:28:15.137948 kubelet[2417]: E0317 17:28:15.137816 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="800ms" Mar 17 17:28:15.240458 kubelet[2417]: I0317 17:28:15.240374 2417 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:28:15.240733 kubelet[2417]: E0317 17:28:15.240709 2417 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Mar 17 17:28:15.403011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851404312.mount: Deactivated successfully. Mar 17 17:28:15.413960 containerd[1575]: time="2025-03-17T17:28:15.413893364Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Mar 17 17:28:15.414091 containerd[1575]: time="2025-03-17T17:28:15.414005560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:28:15.415833 containerd[1575]: time="2025-03-17T17:28:15.415781180Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:28:15.416545 containerd[1575]: time="2025-03-17T17:28:15.416510454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:28:15.416904 containerd[1575]: time="2025-03-17T17:28:15.416764738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:28:15.420186 containerd[1575]: time="2025-03-17T17:28:15.420144881Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:28:15.421378 containerd[1575]: time="2025-03-17T17:28:15.420843902Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:28:15.421801 containerd[1575]: time="2025-03-17T17:28:15.421757377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:28:15.423022 containerd[1575]: time="2025-03-17T17:28:15.422992414Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 462.37547ms" Mar 17 17:28:15.423954 containerd[1575]: time="2025-03-17T17:28:15.423756749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 458.43145ms" Mar 17 17:28:15.426958 containerd[1575]: time="2025-03-17T17:28:15.426779027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 465.223089ms" Mar 17 17:28:15.501714 kubelet[2417]: E0317 17:28:15.501587 2417 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.72:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.72:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da739188a921e default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:28:14.528959006 +0000 UTC m=+0.515109695,LastTimestamp:2025-03-17 17:28:14.528959006 +0000 UTC m=+0.515109695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:28:15.609122 containerd[1575]: time="2025-03-17T17:28:15.608965388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:28:15.609122 containerd[1575]: time="2025-03-17T17:28:15.609038876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:28:15.609122 containerd[1575]: time="2025-03-17T17:28:15.609054784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:28:15.609503 containerd[1575]: time="2025-03-17T17:28:15.609141856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:28:15.609985 containerd[1575]: time="2025-03-17T17:28:15.609669417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:28:15.609985 containerd[1575]: time="2025-03-17T17:28:15.609718543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:28:15.609985 containerd[1575]: time="2025-03-17T17:28:15.609738297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:28:15.609985 containerd[1575]: time="2025-03-17T17:28:15.609813509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:28:15.609985 containerd[1575]: time="2025-03-17T17:28:15.609605145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:28:15.609985 containerd[1575]: time="2025-03-17T17:28:15.609653269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:28:15.609985 containerd[1575]: time="2025-03-17T17:28:15.609667614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:28:15.609985 containerd[1575]: time="2025-03-17T17:28:15.609735613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Mar 17 17:28:15.666023 containerd[1575]: time="2025-03-17T17:28:15.665805090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:886a4e9213c51a217bcf8874d1a81f9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a57b35e0182952700ca9bd4780a655486af9920c88ba5c3b6987eba9c71e11fa\""
Mar 17 17:28:15.668684 kubelet[2417]: E0317 17:28:15.668641 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:15.670410 containerd[1575]: time="2025-03-17T17:28:15.670341652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"446a1857e04c2e02354c9a798773061fbee957994811cd549b42ee4dd25fb4e1\""
Mar 17 17:28:15.670880 kubelet[2417]: E0317 17:28:15.670860 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:15.674771 containerd[1575]: time="2025-03-17T17:28:15.674345364Z" level=info msg="CreateContainer within sandbox \"a57b35e0182952700ca9bd4780a655486af9920c88ba5c3b6987eba9c71e11fa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 17:28:15.675139 containerd[1575]: time="2025-03-17T17:28:15.675109058Z" level=info msg="CreateContainer within sandbox \"446a1857e04c2e02354c9a798773061fbee957994811cd549b42ee4dd25fb4e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 17:28:15.677180 containerd[1575]: time="2025-03-17T17:28:15.677144051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"568d469b3a322d0f2e7b85deb8e9e3efd94cdaf336876c2446d085428fd41bcb\""
Mar 17 17:28:15.679065 kubelet[2417]: E0317 17:28:15.679042 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:15.686045 containerd[1575]: time="2025-03-17T17:28:15.685916611Z" level=info msg="CreateContainer within sandbox \"568d469b3a322d0f2e7b85deb8e9e3efd94cdaf336876c2446d085428fd41bcb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 17:28:15.697073 containerd[1575]: time="2025-03-17T17:28:15.697019441Z" level=info msg="CreateContainer within sandbox \"a57b35e0182952700ca9bd4780a655486af9920c88ba5c3b6987eba9c71e11fa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b83413ff2d95e501b59d508286f86ee04868a4693d6a209dda72b382461e50d5\""
Mar 17 17:28:15.697961 containerd[1575]: time="2025-03-17T17:28:15.697905148Z" level=info msg="StartContainer for \"b83413ff2d95e501b59d508286f86ee04868a4693d6a209dda72b382461e50d5\""
Mar 17 17:28:15.703871 containerd[1575]: time="2025-03-17T17:28:15.701951494Z" level=info msg="CreateContainer within sandbox \"446a1857e04c2e02354c9a798773061fbee957994811cd549b42ee4dd25fb4e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"907d75c7256582dab9df39f2a137488aecf13f9590a9b59b6b02473d715fbd34\""
Mar 17 17:28:15.703871 containerd[1575]: time="2025-03-17T17:28:15.702399877Z" level=info msg="StartContainer for \"907d75c7256582dab9df39f2a137488aecf13f9590a9b59b6b02473d715fbd34\""
\"907d75c7256582dab9df39f2a137488aecf13f9590a9b59b6b02473d715fbd34\"" Mar 17 17:28:15.714417 containerd[1575]: time="2025-03-17T17:28:15.714351990Z" level=info msg="CreateContainer within sandbox \"568d469b3a322d0f2e7b85deb8e9e3efd94cdaf336876c2446d085428fd41bcb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"951a16765400c558fc66cd314ed1c861305437d38e8f469091e0a019dba1c5db\"" Mar 17 17:28:15.715159 containerd[1575]: time="2025-03-17T17:28:15.715111596Z" level=info msg="StartContainer for \"951a16765400c558fc66cd314ed1c861305437d38e8f469091e0a019dba1c5db\"" Mar 17 17:28:15.773156 containerd[1575]: time="2025-03-17T17:28:15.773117615Z" level=info msg="StartContainer for \"b83413ff2d95e501b59d508286f86ee04868a4693d6a209dda72b382461e50d5\" returns successfully" Mar 17 17:28:15.773370 containerd[1575]: time="2025-03-17T17:28:15.773091810Z" level=info msg="StartContainer for \"907d75c7256582dab9df39f2a137488aecf13f9590a9b59b6b02473d715fbd34\" returns successfully" Mar 17 17:28:15.824125 containerd[1575]: time="2025-03-17T17:28:15.824080734Z" level=info msg="StartContainer for \"951a16765400c558fc66cd314ed1c861305437d38e8f469091e0a019dba1c5db\" returns successfully" Mar 17 17:28:15.855413 kubelet[2417]: W0317 17:28:15.854429 2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Mar 17 17:28:15.855413 kubelet[2417]: E0317 17:28:15.854503 2417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Mar 17 17:28:15.938523 kubelet[2417]: E0317 17:28:15.938377 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="1.6s" Mar 17 17:28:15.946556 kubelet[2417]: W0317 17:28:15.946479 2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Mar 17 17:28:15.946556 kubelet[2417]: E0317 17:28:15.946556 2417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Mar 17 17:28:16.042269 kubelet[2417]: I0317 17:28:16.042237 2417 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:28:16.572319 kubelet[2417]: E0317 17:28:16.572284 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:16.574156 kubelet[2417]: E0317 17:28:16.574133 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:16.575038 kubelet[2417]: E0317 17:28:16.575019 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
Mar 17 17:28:17.508663 kubelet[2417]: I0317 17:28:17.508621 2417 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 17 17:28:17.571873 kubelet[2417]: E0317 17:28:17.571805 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:28:17.589771 kubelet[2417]: E0317 17:28:17.589704 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:17.671996 kubelet[2417]: E0317 17:28:17.671944 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:28:17.772716 kubelet[2417]: E0317 17:28:17.772448 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:28:17.873041 kubelet[2417]: E0317 17:28:17.873002 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:28:17.974028 kubelet[2417]: E0317 17:28:17.973989 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:28:18.075130 kubelet[2417]: E0317 17:28:18.075012 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:28:18.175564 kubelet[2417]: E0317 17:28:18.175514 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:28:18.276106 kubelet[2417]: E0317 17:28:18.276059 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:28:18.529222 kubelet[2417]: I0317 17:28:18.529105 2417 apiserver.go:52] "Watching apiserver"
Mar 17 17:28:18.536098 kubelet[2417]: I0317 17:28:18.536032 2417 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:28:19.524382 systemd[1]: Reloading requested from client PID 2695 ('systemctl') (unit session-7.scope)...
Mar 17 17:28:19.524725 systemd[1]: Reloading...
Mar 17 17:28:19.589993 zram_generator::config[2734]: No configuration found.
Mar 17 17:28:19.759601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:28:19.816292 systemd[1]: Reloading finished in 291 ms.
Mar 17 17:28:19.845030 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:28:19.845187 kubelet[2417]: I0317 17:28:19.845143 2417 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:28:19.861006 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:28:19.861349 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:28:19.875195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:28:19.962130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:28:19.966506 (kubelet)[2786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:28:20.014423 kubelet[2786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:28:20.014423 kubelet[2786]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:28:20.014423 kubelet[2786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:28:20.014806 kubelet[2786]: I0317 17:28:20.014446 2786 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:28:20.019151 kubelet[2786]: I0317 17:28:20.018708 2786 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 17:28:20.019151 kubelet[2786]: I0317 17:28:20.018740 2786 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:28:20.019151 kubelet[2786]: I0317 17:28:20.018961 2786 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 17:28:20.020811 kubelet[2786]: I0317 17:28:20.020439 2786 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 17:28:20.021617 kubelet[2786]: I0317 17:28:20.021598 2786 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:28:20.032072 kubelet[2786]: I0317 17:28:20.032006 2786 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
defaulting to /" Mar 17 17:28:20.033057 kubelet[2786]: I0317 17:28:20.032537 2786 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:28:20.033057 kubelet[2786]: I0317 17:28:20.032589 2786 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:28:20.033057 kubelet[2786]: I0317 17:28:20.032795 2786 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:28:20.033057 kubelet[2786]: I0317 17:28:20.032839 2786 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:28:20.033057 kubelet[2786]: I0317 17:28:20.032884 2786 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:28:20.033257 kubelet[2786]: I0317 17:28:20.033039 2786 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:28:20.036000 kubelet[2786]: I0317 17:28:20.033794 2786 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:28:20.036000 kubelet[2786]: I0317 17:28:20.033832 2786 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:28:20.036000 kubelet[2786]: I0317 17:28:20.033842 2786 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:28:20.039957 kubelet[2786]: I0317 17:28:20.039643 2786 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:28:20.039957 kubelet[2786]: I0317 17:28:20.039861 2786 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:28:20.040969 kubelet[2786]: I0317 17:28:20.040595 2786 server.go:1264] "Started kubelet" Mar 17 17:28:20.041103 kubelet[2786]: I0317 17:28:20.041071 2786 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:28:20.041252 kubelet[2786]: I0317 17:28:20.041192 2786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:28:20.041474 kubelet[2786]: I0317 17:28:20.041446 2786 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:28:20.044433 kubelet[2786]: I0317 17:28:20.044402 2786 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:28:20.052049 kubelet[2786]: I0317 17:28:20.051315 2786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:28:20.052319 kubelet[2786]: I0317 17:28:20.052293 2786 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:28:20.052405 kubelet[2786]: I0317 17:28:20.052391 2786 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:28:20.055011 kubelet[2786]: I0317 17:28:20.054979 2786 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:28:20.056654 kubelet[2786]: E0317 17:28:20.056623 2786 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:28:20.059634 kubelet[2786]: I0317 17:28:20.059500 2786 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:28:20.063555 kubelet[2786]: I0317 17:28:20.063495 2786 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:28:20.070040 kubelet[2786]: I0317 17:28:20.069944 2786 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:28:20.072184 kubelet[2786]: I0317 17:28:20.072142 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:28:20.073209 kubelet[2786]: I0317 17:28:20.073127 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:28:20.073209 kubelet[2786]: I0317 17:28:20.073159 2786 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:28:20.073209 kubelet[2786]: I0317 17:28:20.073176 2786 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:28:20.073209 kubelet[2786]: E0317 17:28:20.073212 2786 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:28:20.122563 kubelet[2786]: I0317 17:28:20.122529 2786 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:28:20.122563 kubelet[2786]: I0317 17:28:20.122551 2786 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:28:20.122563 kubelet[2786]: I0317 17:28:20.122574 2786 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:28:20.122752 kubelet[2786]: I0317 17:28:20.122734 2786 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:28:20.122786 kubelet[2786]: I0317 17:28:20.122750 2786 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:28:20.122786 kubelet[2786]: I0317 17:28:20.122769 2786 policy_none.go:49] "None policy: Start" Mar 17 17:28:20.124016 kubelet[2786]: I0317 17:28:20.123600 2786 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:28:20.124016 kubelet[2786]: I0317 17:28:20.123626 2786 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:28:20.124016 kubelet[2786]: I0317 17:28:20.123801 2786 state_mem.go:75] "Updated machine memory state" Mar 17 17:28:20.125230 kubelet[2786]: I0317 17:28:20.125196 2786 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:28:20.125421 
Mar 17 17:28:20.126160 kubelet[2786]: I0317 17:28:20.125547 2786 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:28:20.156772 kubelet[2786]: I0317 17:28:20.156739 2786 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 17:28:20.163637 kubelet[2786]: I0317 17:28:20.163594 2786 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Mar 17 17:28:20.163749 kubelet[2786]: I0317 17:28:20.163697 2786 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 17 17:28:20.173376 kubelet[2786]: I0317 17:28:20.173333 2786 topology_manager.go:215] "Topology Admit Handler" podUID="886a4e9213c51a217bcf8874d1a81f9c" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 17 17:28:20.173598 kubelet[2786]: I0317 17:28:20.173455 2786 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 17 17:28:20.174093 kubelet[2786]: I0317 17:28:20.174077 2786 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 17 17:28:20.256506 kubelet[2786]: I0317 17:28:20.256455 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:28:20.256506 kubelet[2786]: I0317 17:28:20.256498 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:28:20.256668 kubelet[2786]: I0317 17:28:20.256523 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:28:20.256668 kubelet[2786]: I0317 17:28:20.256567 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:28:20.256668 kubelet[2786]: I0317 17:28:20.256611 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:28:20.256668 kubelet[2786]: I0317 17:28:20.256634 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:28:20.256668 kubelet[2786]: I0317 17:28:20.256651 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/886a4e9213c51a217bcf8874d1a81f9c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"886a4e9213c51a217bcf8874d1a81f9c\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:28:20.256785 kubelet[2786]: I0317 17:28:20.256665 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/886a4e9213c51a217bcf8874d1a81f9c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"886a4e9213c51a217bcf8874d1a81f9c\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:28:20.256785 kubelet[2786]: I0317 17:28:20.256682 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/886a4e9213c51a217bcf8874d1a81f9c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"886a4e9213c51a217bcf8874d1a81f9c\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:28:20.500739 kubelet[2786]: E0317 17:28:20.500637 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:20.502008 kubelet[2786]: E0317 17:28:20.501922 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:20.502055 kubelet[2786]: E0317 17:28:20.502007 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:20.525736 sudo[2821]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:28:20.526033 sudo[2821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:28:20.949622 sudo[2821]: pam_unix(sudo:session): session closed for user root Mar 17 17:28:21.034244 kubelet[2786]: I0317 17:28:21.034210 2786 apiserver.go:52] "Watching apiserver" Mar 17 17:28:21.053523 kubelet[2786]: I0317 17:28:21.053476 2786 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:28:21.087594 kubelet[2786]: E0317 17:28:21.087100 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:21.087594 kubelet[2786]: E0317 17:28:21.087278 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:21.098421 kubelet[2786]: E0317 17:28:21.098366 2786 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 17:28:21.106961 kubelet[2786]: E0317 17:28:21.106047 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
Mar 17 17:28:21.118033 kubelet[2786]: I0317 17:28:21.117965 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.117950371 podStartE2EDuration="1.117950371s" podCreationTimestamp="2025-03-17 17:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:28:21.117428401 +0000 UTC m=+1.147670154" watchObservedRunningTime="2025-03-17 17:28:21.117950371 +0000 UTC m=+1.148192124"
Mar 17 17:28:21.152439 kubelet[2786]: I0317 17:28:21.152380 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.152360766 podStartE2EDuration="1.152360766s" podCreationTimestamp="2025-03-17 17:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:28:21.135270879 +0000 UTC m=+1.165512632" watchObservedRunningTime="2025-03-17 17:28:21.152360766 +0000 UTC m=+1.182602519"
Mar 17 17:28:22.088650 kubelet[2786]: E0317 17:28:22.088570 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:22.089107 kubelet[2786]: E0317 17:28:22.088807 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:22.610305 sudo[1785]: pam_unix(sudo:session): session closed for user root
Mar 17 17:28:22.611453 sshd[1784]: Connection closed by 10.0.0.1 port 46836
Mar 17 17:28:22.612875 sshd-session[1778]: pam_unix(sshd:session): session closed for user core
Mar 17 17:28:22.616832 systemd[1]: sshd@6-10.0.0.72:22-10.0.0.1:46836.service: Deactivated successfully.
Mar 17 17:28:22.618657 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit.
Mar 17 17:28:22.618801 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 17:28:22.619889 systemd-logind[1559]: Removed session 7.
Mar 17 17:28:23.090433 kubelet[2786]: E0317 17:28:23.090328 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:24.586264 kubelet[2786]: E0317 17:28:24.586227 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:31.893537 kubelet[2786]: E0317 17:28:31.893488 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:31.906661 kubelet[2786]: I0317 17:28:31.906068 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=11.906034449 podStartE2EDuration="11.906034449s" podCreationTimestamp="2025-03-17 17:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:28:21.153237934 +0000 UTC m=+1.183479647" watchObservedRunningTime="2025-03-17 17:28:31.906034449 +0000 UTC m=+11.936276202"
Mar 17 17:28:32.051881 kubelet[2786]: E0317 17:28:32.051244 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:33.388054 kubelet[2786]: I0317 17:28:33.388021 2786 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 17:28:33.392536 containerd[1575]: time="2025-03-17T17:28:33.392495441Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 17:28:33.392770 kubelet[2786]: I0317 17:28:33.392705 2786 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 17:28:34.221782 kubelet[2786]: I0317 17:28:34.220899 2786 topology_manager.go:215] "Topology Admit Handler" podUID="c4051186-2e7e-474c-ae62-6b6496760d72" podNamespace="kube-system" podName="kube-proxy-d7mnt"
Mar 17 17:28:34.243076 kubelet[2786]: I0317 17:28:34.243040 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c4051186-2e7e-474c-ae62-6b6496760d72-kube-proxy\") pod \"kube-proxy-d7mnt\" (UID: \"c4051186-2e7e-474c-ae62-6b6496760d72\") " pod="kube-system/kube-proxy-d7mnt"
Mar 17 17:28:34.243228 kubelet[2786]: I0317 17:28:34.243212 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4051186-2e7e-474c-ae62-6b6496760d72-xtables-lock\") pod \"kube-proxy-d7mnt\" (UID: \"c4051186-2e7e-474c-ae62-6b6496760d72\") " pod="kube-system/kube-proxy-d7mnt"
Mar 17 17:28:34.243953 kubelet[2786]: I0317 17:28:34.243883 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4051186-2e7e-474c-ae62-6b6496760d72-lib-modules\") pod \"kube-proxy-d7mnt\" (UID: \"c4051186-2e7e-474c-ae62-6b6496760d72\") " pod="kube-system/kube-proxy-d7mnt"
Mar 17 17:28:34.243953 kubelet[2786]: I0317 17:28:34.243925 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hksnn\" (UniqueName: \"kubernetes.io/projected/c4051186-2e7e-474c-ae62-6b6496760d72-kube-api-access-hksnn\") pod \"kube-proxy-d7mnt\" (UID: \"c4051186-2e7e-474c-ae62-6b6496760d72\") " pod="kube-system/kube-proxy-d7mnt"
Mar 17 17:28:34.244266 kubelet[2786]: I0317 17:28:34.244225 2786 topology_manager.go:215] "Topology Admit Handler" podUID="c708f516-f52e-49f7-a1be-3d5d226647a7" podNamespace="kube-system" podName="cilium-lbqtl"
Mar 17 17:28:34.344843 kubelet[2786]: I0317 17:28:34.344787 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-hostproc\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl"
Mar 17 17:28:34.344843 kubelet[2786]: I0317 17:28:34.344843 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-lib-modules\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl"
Mar 17 17:28:34.344997 kubelet[2786]: I0317 17:28:34.344899 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c708f516-f52e-49f7-a1be-3d5d226647a7-cilium-config-path\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl"
Mar 17 17:28:34.344997 kubelet[2786]: I0317 17:28:34.344917 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k55pg\" (UniqueName: \"kubernetes.io/projected/c708f516-f52e-49f7-a1be-3d5d226647a7-kube-api-access-k55pg\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl"
\"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl" Mar 17 17:28:34.344997 kubelet[2786]: I0317 17:28:34.344958 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-etc-cni-netd\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl" Mar 17 17:28:34.344997 kubelet[2786]: I0317 17:28:34.344974 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-xtables-lock\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl" Mar 17 17:28:34.344997 kubelet[2786]: I0317 17:28:34.344988 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c708f516-f52e-49f7-a1be-3d5d226647a7-clustermesh-secrets\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl" Mar 17 17:28:34.345128 kubelet[2786]: I0317 17:28:34.345004 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-cni-path\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl" Mar 17 17:28:34.345128 kubelet[2786]: I0317 17:28:34.345020 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c708f516-f52e-49f7-a1be-3d5d226647a7-hubble-tls\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl" Mar 17 17:28:34.345128 kubelet[2786]: I0317 17:28:34.345035 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-cilium-run\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl" Mar 17 17:28:34.345128 kubelet[2786]: I0317 17:28:34.345049 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-cilium-cgroup\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl" Mar 17 17:28:34.345128 kubelet[2786]: I0317 17:28:34.345065 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-host-proc-sys-net\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl" Mar 17 17:28:34.345128 kubelet[2786]: I0317 17:28:34.345083 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-host-proc-sys-kernel\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl" Mar 17 17:28:34.348526 kubelet[2786]: I0317 17:28:34.348453 2786 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-bpf-maps\") pod \"cilium-lbqtl\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " pod="kube-system/cilium-lbqtl" Mar 17 17:28:34.440247 kubelet[2786]: I0317 17:28:34.437858 2786 topology_manager.go:215] "Topology Admit Handler" podUID="969cb279-5454-45a0-901f-c96c19ca75f7" podNamespace="kube-system" podName="cilium-operator-599987898-k7vnn" Mar 17 17:28:34.528915 kubelet[2786]: E0317 17:28:34.528388 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:34.533056 containerd[1575]: time="2025-03-17T17:28:34.533018644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d7mnt,Uid:c4051186-2e7e-474c-ae62-6b6496760d72,Namespace:kube-system,Attempt:0,}" Mar 17 17:28:34.550746 kubelet[2786]: I0317 17:28:34.550652 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc5gt\" (UniqueName: \"kubernetes.io/projected/969cb279-5454-45a0-901f-c96c19ca75f7-kube-api-access-bc5gt\") pod \"cilium-operator-599987898-k7vnn\" (UID: \"969cb279-5454-45a0-901f-c96c19ca75f7\") " pod="kube-system/cilium-operator-599987898-k7vnn" Mar 17 17:28:34.550746 kubelet[2786]: I0317 17:28:34.550695 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/969cb279-5454-45a0-901f-c96c19ca75f7-cilium-config-path\") pod \"cilium-operator-599987898-k7vnn\" (UID: \"969cb279-5454-45a0-901f-c96c19ca75f7\") " pod="kube-system/cilium-operator-599987898-k7vnn" Mar 17 17:28:34.550746 kubelet[2786]: E0317 17:28:34.550705 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:34.552805 containerd[1575]: time="2025-03-17T17:28:34.552729269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lbqtl,Uid:c708f516-f52e-49f7-a1be-3d5d226647a7,Namespace:kube-system,Attempt:0,}" Mar 17 17:28:34.553821 containerd[1575]: time="2025-03-17T17:28:34.553726076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:28:34.553852 containerd[1575]: time="2025-03-17T17:28:34.553832040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:28:34.553873 containerd[1575]: time="2025-03-17T17:28:34.553852248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:28:34.556120 containerd[1575]: time="2025-03-17T17:28:34.556044665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:28:34.573059 containerd[1575]: time="2025-03-17T17:28:34.572946941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:28:34.573059 containerd[1575]: time="2025-03-17T17:28:34.573011727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
Mar 17 17:28:34.573922 containerd[1575]: time="2025-03-17T17:28:34.573861115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:28:34.575308 containerd[1575]: time="2025-03-17T17:28:34.574918747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:28:34.595559 kubelet[2786]: E0317 17:28:34.595524 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:34.608778 containerd[1575]: time="2025-03-17T17:28:34.608722178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d7mnt,Uid:c4051186-2e7e-474c-ae62-6b6496760d72,Namespace:kube-system,Attempt:0,} returns sandbox id \"87f21f5c44695e33bec7104b33f53fd81a8feb53e7b2558e34da97f48cb249b7\""
Mar 17 17:28:34.613820 kubelet[2786]: E0317 17:28:34.613778 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:34.618974 containerd[1575]: time="2025-03-17T17:28:34.618841839Z" level=info msg="CreateContainer within sandbox \"87f21f5c44695e33bec7104b33f53fd81a8feb53e7b2558e34da97f48cb249b7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 17:28:34.623384 containerd[1575]: time="2025-03-17T17:28:34.623352364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lbqtl,Uid:c708f516-f52e-49f7-a1be-3d5d226647a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\""
Mar 17 17:28:34.625498 kubelet[2786]: E0317 17:28:34.625201 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:34.626541 containerd[1575]: time="2025-03-17T17:28:34.626485646Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 17:28:34.636938 containerd[1575]: time="2025-03-17T17:28:34.636886462Z" level=info msg="CreateContainer within sandbox \"87f21f5c44695e33bec7104b33f53fd81a8feb53e7b2558e34da97f48cb249b7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82373d3e46dea1794e72209d0eb05ed0ad41f62b17feb216d2bb1db20ff6448f\""
Mar 17 17:28:34.637517 containerd[1575]: time="2025-03-17T17:28:34.637483226Z" level=info msg="StartContainer for \"82373d3e46dea1794e72209d0eb05ed0ad41f62b17feb216d2bb1db20ff6448f\""
Mar 17 17:28:34.693044 containerd[1575]: time="2025-03-17T17:28:34.692994938Z" level=info msg="StartContainer for \"82373d3e46dea1794e72209d0eb05ed0ad41f62b17feb216d2bb1db20ff6448f\" returns successfully"
Mar 17 17:28:34.753122 kubelet[2786]: E0317 17:28:34.752650 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:34.754503 containerd[1575]: time="2025-03-17T17:28:34.754467450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-k7vnn,Uid:969cb279-5454-45a0-901f-c96c19ca75f7,Namespace:kube-system,Attempt:0,}"
Mar 17 17:28:34.804908 containerd[1575]: time="2025-03-17T17:28:34.804508684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2025-03-17T17:28:34.804508684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:28:34.804908 containerd[1575]: time="2025-03-17T17:28:34.804586956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:28:34.804908 containerd[1575]: time="2025-03-17T17:28:34.804602603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:28:34.806107 containerd[1575]: time="2025-03-17T17:28:34.805985209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:28:34.865092 containerd[1575]: time="2025-03-17T17:28:34.864961659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-k7vnn,Uid:969cb279-5454-45a0-901f-c96c19ca75f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"37e44abf22946167e1336954b3a681575b542a2cfb6998b07d4feb26a586c3ad\"" Mar 17 17:28:34.867228 kubelet[2786]: E0317 17:28:34.866791 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:35.114361 kubelet[2786]: E0317 17:28:35.114327 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:35.122472 kubelet[2786]: I0317 17:28:35.122403 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d7mnt" podStartSLOduration=1.122384277 podStartE2EDuration="1.122384277s" podCreationTimestamp="2025-03-17 17:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:28:35.122149265 +0000 UTC m=+15.152391018" watchObservedRunningTime="2025-03-17 17:28:35.122384277 +0000 UTC m=+15.152626030" Mar 17 17:28:36.115016 update_engine[1565]: I20250317 17:28:36.114948 1565 update_attempter.cc:509] Updating boot flags... Mar 17 17:28:36.138339 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3101) Mar 17 17:28:36.166061 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3103) Mar 17 17:28:38.219638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3961255946.mount: Deactivated successfully. 
Mar 17 17:28:39.437625 containerd[1575]: time="2025-03-17T17:28:39.437579854Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:28:39.438769 containerd[1575]: time="2025-03-17T17:28:39.438735901Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 17 17:28:39.439612 containerd[1575]: time="2025-03-17T17:28:39.439584691Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:28:39.441662 containerd[1575]: time="2025-03-17T17:28:39.441613537Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.815081913s"
Mar 17 17:28:39.441662 containerd[1575]: time="2025-03-17T17:28:39.441655830Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 17 17:28:39.445355 containerd[1575]: time="2025-03-17T17:28:39.445307832Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 17:28:39.451521 containerd[1575]: time="2025-03-17T17:28:39.451486957Z" level=info msg="CreateContainer within sandbox \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:28:39.480785 containerd[1575]: time="2025-03-17T17:28:39.479965695Z" level=info msg="CreateContainer within sandbox \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f\""
Mar 17 17:28:39.480974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3954492214.mount: Deactivated successfully.
Mar 17 17:28:39.481518 containerd[1575]: time="2025-03-17T17:28:39.481007506Z" level=info msg="StartContainer for \"78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f\""
Mar 17 17:28:39.533213 containerd[1575]: time="2025-03-17T17:28:39.533167816Z" level=info msg="StartContainer for \"78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f\" returns successfully"
Mar 17 17:28:39.760489 containerd[1575]: time="2025-03-17T17:28:39.751405667Z" level=info msg="shim disconnected" id=78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f namespace=k8s.io
Mar 17 17:28:39.760489 containerd[1575]: time="2025-03-17T17:28:39.760368798Z" level=warning msg="cleaning up after shim disconnected" id=78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f namespace=k8s.io
Mar 17 17:28:39.760489 containerd[1575]: time="2025-03-17T17:28:39.760382082Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:28:40.133717 kubelet[2786]: E0317 17:28:40.133666 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:40.140283 containerd[1575]: time="2025-03-17T17:28:40.140234895Z" level=info msg="CreateContainer within sandbox \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:28:40.151213 containerd[1575]: time="2025-03-17T17:28:40.151148283Z" level=info msg="CreateContainer within sandbox \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89\""
Mar 17 17:28:40.151790 containerd[1575]: time="2025-03-17T17:28:40.151758468Z" level=info msg="StartContainer for \"39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89\""
Mar 17 17:28:40.202499 containerd[1575]: time="2025-03-17T17:28:40.202458795Z" level=info msg="StartContainer for \"39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89\" returns successfully"
Mar 17 17:28:40.218481 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:28:40.218764 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:28:40.218827 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:28:40.228403 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:28:40.240510 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:28:40.246329 containerd[1575]: time="2025-03-17T17:28:40.246265433Z" level=info msg="shim disconnected" id=39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89 namespace=k8s.io
Mar 17 17:28:40.246329 containerd[1575]: time="2025-03-17T17:28:40.246320890Z" level=warning msg="cleaning up after shim disconnected" id=39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89 namespace=k8s.io
Mar 17 17:28:40.246329 containerd[1575]: time="2025-03-17T17:28:40.246332373Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:28:40.479045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f-rootfs.mount: Deactivated successfully.
Mar 17 17:28:41.137241 kubelet[2786]: E0317 17:28:41.137213 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:41.139878 containerd[1575]: time="2025-03-17T17:28:41.139705658Z" level=info msg="CreateContainer within sandbox \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:28:41.156982 containerd[1575]: time="2025-03-17T17:28:41.156907471Z" level=info msg="CreateContainer within sandbox \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5\""
Mar 17 17:28:41.157725 containerd[1575]: time="2025-03-17T17:28:41.157703261Z" level=info msg="StartContainer for \"55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5\""
Mar 17 17:28:41.213434 containerd[1575]: time="2025-03-17T17:28:41.211543265Z" level=info msg="StartContainer for \"55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5\" returns successfully"
Mar 17 17:28:41.251372 containerd[1575]: time="2025-03-17T17:28:41.251309921Z" level=info msg="shim disconnected" id=55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5 namespace=k8s.io
Mar 17 17:28:41.251372 containerd[1575]: time="2025-03-17T17:28:41.251363857Z" level=warning msg="cleaning up after shim disconnected" id=55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5 namespace=k8s.io
Mar 17 17:28:41.251372 containerd[1575]: time="2025-03-17T17:28:41.251372139Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:28:41.478548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5-rootfs.mount: Deactivated successfully.
Mar 17 17:28:41.592111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3679114182.mount: Deactivated successfully.
Mar 17 17:28:42.140371 kubelet[2786]: E0317 17:28:42.140338 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:42.143678 containerd[1575]: time="2025-03-17T17:28:42.143638928Z" level=info msg="CreateContainer within sandbox \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:28:42.155112 containerd[1575]: time="2025-03-17T17:28:42.155066642Z" level=info msg="CreateContainer within sandbox \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04\"" Mar 17 17:28:42.155791 containerd[1575]: time="2025-03-17T17:28:42.155765955Z" level=info msg="StartContainer for \"7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04\"" Mar 17 17:28:42.201288 containerd[1575]: time="2025-03-17T17:28:42.201246465Z" level=info msg="StartContainer for \"7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04\" returns successfully" Mar 17 17:28:42.224883 containerd[1575]: time="2025-03-17T17:28:42.224798004Z" level=info msg="shim disconnected" id=7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04 namespace=k8s.io Mar 17 17:28:42.224883 containerd[1575]: time="2025-03-17T17:28:42.224877226Z" level=warning msg="cleaning up after shim disconnected" id=7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04 namespace=k8s.io Mar 17 17:28:42.225102 containerd[1575]: time="2025-03-17T17:28:42.224897111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:28:42.478643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04-rootfs.mount: Deactivated successfully. 
Mar 17 17:28:43.064160 containerd[1575]: time="2025-03-17T17:28:43.064096744Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:43.064913 containerd[1575]: time="2025-03-17T17:28:43.064859785Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 17 17:28:43.065331 containerd[1575]: time="2025-03-17T17:28:43.065293299Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:28:43.066801 containerd[1575]: time="2025-03-17T17:28:43.066769689Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.62140784s" Mar 17 17:28:43.066839 containerd[1575]: time="2025-03-17T17:28:43.066812860Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 17:28:43.069028 containerd[1575]: time="2025-03-17T17:28:43.068991434Z" level=info msg="CreateContainer within sandbox \"37e44abf22946167e1336954b3a681575b542a2cfb6998b07d4feb26a586c3ad\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:28:43.077925 containerd[1575]: time="2025-03-17T17:28:43.077866814Z" level=info msg="CreateContainer within sandbox \"37e44abf22946167e1336954b3a681575b542a2cfb6998b07d4feb26a586c3ad\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\"" Mar 17 17:28:43.079422 containerd[1575]: time="2025-03-17T17:28:43.078585964Z" level=info msg="StartContainer for \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\"" Mar 17 17:28:43.121942 containerd[1575]: time="2025-03-17T17:28:43.119384959Z" level=info msg="StartContainer for \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\" returns successfully" Mar 17 17:28:43.143902 kubelet[2786]: E0317 17:28:43.143593 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:43.152156 kubelet[2786]: E0317 17:28:43.152131 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:43.154867 kubelet[2786]: I0317 17:28:43.154812 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-k7vnn" podStartSLOduration=0.956103405 podStartE2EDuration="9.154794814s" podCreationTimestamp="2025-03-17 17:28:34 +0000 UTC" firstStartedPulling="2025-03-17 17:28:34.868804471 +0000 UTC m=+14.899046224" lastFinishedPulling="2025-03-17 17:28:43.06749588 +0000 UTC m=+23.097737633" observedRunningTime="2025-03-17 17:28:43.15409631 +0000 UTC m=+23.184338063" watchObservedRunningTime="2025-03-17 17:28:43.154794814 +0000 UTC m=+23.185036567"
Mar 17 17:28:43.157644 containerd[1575]: time="2025-03-17T17:28:43.157601194Z" level=info msg="CreateContainer within sandbox \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:28:43.175175 containerd[1575]: time="2025-03-17T17:28:43.175121573Z" level=info msg="CreateContainer within sandbox \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\"" Mar 17 17:28:43.177285 containerd[1575]: time="2025-03-17T17:28:43.177108576Z" level=info msg="StartContainer for \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\"" Mar 17 17:28:43.240659 containerd[1575]: time="2025-03-17T17:28:43.240616399Z" level=info msg="StartContainer for \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\" returns successfully" Mar 17 17:28:43.469810 kubelet[2786]: I0317 17:28:43.469681 2786 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:28:43.502970 kubelet[2786]: I0317 17:28:43.501254 2786 topology_manager.go:215] "Topology Admit Handler" podUID="d2510592-367f-4be0-a582-42db16b60a33" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fxttt" Mar 17 17:28:43.508663 kubelet[2786]: I0317 17:28:43.508396 2786 topology_manager.go:215] "Topology Admit Handler" podUID="9c44141e-1ad3-4edc-ad8a-d62bd57cd16f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gvpxz" Mar 17 17:28:43.632198 kubelet[2786]: I0317 17:28:43.632123 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c44141e-1ad3-4edc-ad8a-d62bd57cd16f-config-volume\") pod \"coredns-7db6d8ff4d-gvpxz\" (UID: \"9c44141e-1ad3-4edc-ad8a-d62bd57cd16f\") " pod="kube-system/coredns-7db6d8ff4d-gvpxz" Mar 17 17:28:43.632198 kubelet[2786]: I0317 17:28:43.632200 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9w6k\" (UniqueName: \"kubernetes.io/projected/d2510592-367f-4be0-a582-42db16b60a33-kube-api-access-q9w6k\") pod \"coredns-7db6d8ff4d-fxttt\" (UID: \"d2510592-367f-4be0-a582-42db16b60a33\") " pod="kube-system/coredns-7db6d8ff4d-fxttt" Mar 17 17:28:43.632430 kubelet[2786]: I0317 17:28:43.632224 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7457\" (UniqueName: \"kubernetes.io/projected/9c44141e-1ad3-4edc-ad8a-d62bd57cd16f-kube-api-access-l7457\") pod \"coredns-7db6d8ff4d-gvpxz\" (UID: \"9c44141e-1ad3-4edc-ad8a-d62bd57cd16f\") " pod="kube-system/coredns-7db6d8ff4d-gvpxz" Mar 17 17:28:43.632430 kubelet[2786]: I0317 17:28:43.632241 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2510592-367f-4be0-a582-42db16b60a33-config-volume\") pod \"coredns-7db6d8ff4d-fxttt\" (UID: \"d2510592-367f-4be0-a582-42db16b60a33\") " pod="kube-system/coredns-7db6d8ff4d-fxttt" Mar 17 17:28:43.817837 kubelet[2786]: E0317 17:28:43.817580 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:28:43.817837 kubelet[2786]: E0317 17:28:43.817661 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:43.820476 containerd[1575]: time="2025-03-17T17:28:43.819043525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gvpxz,Uid:9c44141e-1ad3-4edc-ad8a-d62bd57cd16f,Namespace:kube-system,Attempt:0,}" Mar 17 17:28:43.821212 containerd[1575]: time="2025-03-17T17:28:43.821174767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fxttt,Uid:d2510592-367f-4be0-a582-42db16b60a33,Namespace:kube-system,Attempt:0,}" Mar 17 17:28:44.159954 kubelet[2786]: E0317 17:28:44.159811 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:44.161353 kubelet[2786]: E0317 17:28:44.161305 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:45.166156 kubelet[2786]: E0317 17:28:45.166120 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:45.669749 systemd-networkd[1237]: cilium_host: Link UP Mar 17 17:28:45.669885 systemd-networkd[1237]: cilium_net: Link UP Mar 17 17:28:45.669888 systemd-networkd[1237]: cilium_net: Gained carrier Mar 17 17:28:45.670084 systemd-networkd[1237]: cilium_host: Gained carrier Mar 17 17:28:45.782430 systemd-networkd[1237]: cilium_vxlan: Link UP Mar 17 17:28:45.782436 systemd-networkd[1237]: cilium_vxlan: Gained carrier Mar 17 17:28:45.785794 systemd-networkd[1237]: cilium_host: Gained IPv6LL Mar 17 17:28:45.921058 systemd-networkd[1237]: cilium_net: Gained IPv6LL Mar 17 17:28:45.997860 systemd[1]: Started sshd@7-10.0.0.72:22-10.0.0.1:35094.service - OpenSSH per-connection server daemon (10.0.0.1:35094). Mar 17 17:28:46.042920 sshd[3733]: Accepted publickey for core from 10.0.0.1 port 35094 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:28:46.044626 sshd-session[3733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:28:46.053353 systemd-logind[1559]: New session 8 of user core. Mar 17 17:28:46.063365 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:28:46.126048 kernel: NET: Registered PF_ALG protocol family Mar 17 17:28:46.162680 kubelet[2786]: E0317 17:28:46.162606 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:46.200626 sshd[3746]: Connection closed by 10.0.0.1 port 35094 Mar 17 17:28:46.200809 sshd-session[3733]: pam_unix(sshd:session): session closed for user core Mar 17 17:28:46.204350 systemd[1]: sshd@7-10.0.0.72:22-10.0.0.1:35094.service: Deactivated successfully. Mar 17 17:28:46.206435 systemd-logind[1559]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:28:46.206505 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:28:46.208097 systemd-logind[1559]: Removed session 8.
Mar 17 17:28:46.726153 systemd-networkd[1237]: lxc_health: Link UP Mar 17 17:28:46.735914 systemd-networkd[1237]: lxc_health: Gained carrier Mar 17 17:28:46.880248 systemd-networkd[1237]: cilium_vxlan: Gained IPv6LL Mar 17 17:28:47.030169 systemd-networkd[1237]: lxcc47c16804d2b: Link UP Mar 17 17:28:47.044875 systemd-networkd[1237]: lxc6805748539c9: Link UP Mar 17 17:28:47.045959 kernel: eth0: renamed from tmp63039 Mar 17 17:28:47.052954 kernel: eth0: renamed from tmp9e78b Mar 17 17:28:47.063336 systemd-networkd[1237]: lxcc47c16804d2b: Gained carrier Mar 17 17:28:47.064142 systemd-networkd[1237]: lxc6805748539c9: Gained carrier Mar 17 17:28:47.419748 kubelet[2786]: E0317 17:28:47.419687 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:48.224074 systemd-networkd[1237]: lxcc47c16804d2b: Gained IPv6LL Mar 17 17:28:48.556445 kubelet[2786]: E0317 17:28:48.554839 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:48.585205 kubelet[2786]: I0317 17:28:48.584323 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lbqtl" podStartSLOduration=9.765455933 podStartE2EDuration="14.584304063s" podCreationTimestamp="2025-03-17 17:28:34 +0000 UTC" firstStartedPulling="2025-03-17 17:28:34.625867713 +0000 UTC m=+14.656109466" lastFinishedPulling="2025-03-17 17:28:39.444715843 +0000 UTC m=+19.474957596" observedRunningTime="2025-03-17 17:28:44.177349865 +0000 UTC m=+24.207591698" watchObservedRunningTime="2025-03-17 17:28:48.584304063 +0000 UTC m=+28.614545816" Mar 17 17:28:48.609132 systemd-networkd[1237]: lxc_health: Gained IPv6LL Mar 17 17:28:48.992078 systemd-networkd[1237]: lxc6805748539c9: Gained IPv6LL Mar 17 17:28:49.167345 kubelet[2786]: E0317 17:28:49.167130 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:50.797804 containerd[1575]: time="2025-03-17T17:28:50.797689573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:28:50.797804 containerd[1575]: time="2025-03-17T17:28:50.797773190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:28:50.797804 containerd[1575]: time="2025-03-17T17:28:50.797785232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:28:50.798650 containerd[1575]: time="2025-03-17T17:28:50.797902815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:28:50.803870 containerd[1575]: time="2025-03-17T17:28:50.803784571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:28:50.804012 containerd[1575]: time="2025-03-17T17:28:50.803955604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:28:50.804012 containerd[1575]: time="2025-03-17T17:28:50.803990491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:28:50.804260 containerd[1575]: time="2025-03-17T17:28:50.804151723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:28:50.823750 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:28:50.830140 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:28:50.846620 containerd[1575]: time="2025-03-17T17:28:50.846566175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gvpxz,Uid:9c44141e-1ad3-4edc-ad8a-d62bd57cd16f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e78bcc3b9569b2fd321ee740de3e5607ba1cbc4f50e47c21b54feb831d2de20\"" Mar 17 17:28:50.847690 kubelet[2786]: E0317 17:28:50.847364 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:50.849923 containerd[1575]: time="2025-03-17T17:28:50.849885107Z" level=info msg="CreateContainer within sandbox \"9e78bcc3b9569b2fd321ee740de3e5607ba1cbc4f50e47c21b54feb831d2de20\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:28:50.861725 containerd[1575]: time="2025-03-17T17:28:50.861392047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fxttt,Uid:d2510592-367f-4be0-a582-42db16b60a33,Namespace:kube-system,Attempt:0,} returns sandbox id \"6303954eb222f3bf7f3b232347f84d07c6fbdc9107abcf59dadff758af4f5e6b\"" Mar 17 17:28:50.862263 kubelet[2786]: E0317 17:28:50.862238 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:50.864353 containerd[1575]: time="2025-03-17T17:28:50.864223043Z" level=info msg="CreateContainer within sandbox \"6303954eb222f3bf7f3b232347f84d07c6fbdc9107abcf59dadff758af4f5e6b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:28:50.866986 containerd[1575]: time="2025-03-17T17:28:50.866626795Z" level=info msg="CreateContainer within sandbox \"9e78bcc3b9569b2fd321ee740de3e5607ba1cbc4f50e47c21b54feb831d2de20\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"47ce80d11b360346fa46c7a349f3e146f9bc8ec99fd5c7df8d2c351beb0fe0b1\"" Mar 17 17:28:50.868189 containerd[1575]: time="2025-03-17T17:28:50.867316651Z" level=info msg="StartContainer for \"47ce80d11b360346fa46c7a349f3e146f9bc8ec99fd5c7df8d2c351beb0fe0b1\"" Mar 17 17:28:50.877770 containerd[1575]: time="2025-03-17T17:28:50.877715614Z" level=info msg="CreateContainer within sandbox \"6303954eb222f3bf7f3b232347f84d07c6fbdc9107abcf59dadff758af4f5e6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"64f86b4c8a7f062565afe70a13c6afe37afb62830779a0f2b3ee1d89a07ef208\"" Mar 17 17:28:50.879731 containerd[1575]: time="2025-03-17T17:28:50.878349218Z" level=info msg="StartContainer for \"64f86b4c8a7f062565afe70a13c6afe37afb62830779a0f2b3ee1d89a07ef208\"" Mar 17 17:28:50.923822 containerd[1575]: time="2025-03-17T17:28:50.923779302Z" level=info msg="StartContainer for \"47ce80d11b360346fa46c7a349f3e146f9bc8ec99fd5c7df8d2c351beb0fe0b1\" returns successfully"
Mar 17 17:28:50.953717 containerd[1575]: time="2025-03-17T17:28:50.953673094Z" level=info msg="StartContainer for \"64f86b4c8a7f062565afe70a13c6afe37afb62830779a0f2b3ee1d89a07ef208\" returns successfully" Mar 17 17:28:51.174565 kubelet[2786]: E0317 17:28:51.174517 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:51.186975 kubelet[2786]: E0317 17:28:51.186769 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:51.187226 kubelet[2786]: I0317 17:28:51.187175 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gvpxz" podStartSLOduration=17.187159511 podStartE2EDuration="17.187159511s" podCreationTimestamp="2025-03-17 17:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:28:51.184832351 +0000 UTC m=+31.215074104" watchObservedRunningTime="2025-03-17 17:28:51.187159511 +0000 UTC m=+31.217401384" Mar 17 17:28:51.212759 systemd[1]: Started sshd@8-10.0.0.72:22-10.0.0.1:35104.service - OpenSSH per-connection server daemon (10.0.0.1:35104). Mar 17 17:28:51.214740 kubelet[2786]: I0317 17:28:51.212783 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fxttt" podStartSLOduration=17.212761232 podStartE2EDuration="17.212761232s" podCreationTimestamp="2025-03-17 17:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:28:51.211888347 +0000 UTC m=+31.242130100" watchObservedRunningTime="2025-03-17 17:28:51.212761232 +0000 UTC m=+31.243002985" Mar 17 17:28:51.265688 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 35104 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:28:51.267050 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:28:51.271674 systemd-logind[1559]: New session 9 of user core. Mar 17 17:28:51.280348 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:28:51.403478 sshd[4217]: Connection closed by 10.0.0.1 port 35104 Mar 17 17:28:51.404067 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Mar 17 17:28:51.406555 systemd[1]: sshd@8-10.0.0.72:22-10.0.0.1:35104.service: Deactivated successfully. Mar 17 17:28:51.409680 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:28:51.410836 systemd-logind[1559]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:28:51.411833 systemd-logind[1559]: Removed session 9.
Mar 17 17:28:52.188731 kubelet[2786]: E0317 17:28:52.188518 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:52.188731 kubelet[2786]: E0317 17:28:52.188558 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:53.190088 kubelet[2786]: E0317 17:28:53.190055 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:53.193583 kubelet[2786]: E0317 17:28:53.190066 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:28:56.420247 systemd[1]: Started sshd@9-10.0.0.72:22-10.0.0.1:41102.service - OpenSSH per-connection server daemon (10.0.0.1:41102). Mar 17 17:28:56.462902 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 41102 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:28:56.464344 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:28:56.468094 systemd-logind[1559]: New session 10 of user core. Mar 17 17:28:56.476241 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:28:56.592105 sshd[4234]: Connection closed by 10.0.0.1 port 41102 Mar 17 17:28:56.592456 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Mar 17 17:28:56.595399 systemd[1]: sshd@9-10.0.0.72:22-10.0.0.1:41102.service: Deactivated successfully. Mar 17 17:28:56.599093 systemd-logind[1559]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:28:56.599696 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:28:56.601508 systemd-logind[1559]: Removed session 10. Mar 17 17:29:01.607232 systemd[1]: Started sshd@10-10.0.0.72:22-10.0.0.1:41116.service - OpenSSH per-connection server daemon (10.0.0.1:41116). Mar 17 17:29:01.653330 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 41116 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:01.654708 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:01.663119 systemd-logind[1559]: New session 11 of user core. Mar 17 17:29:01.672220 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:29:01.791742 sshd[4250]: Connection closed by 10.0.0.1 port 41116 Mar 17 17:29:01.793975 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:01.798383 systemd[1]: sshd@10-10.0.0.72:22-10.0.0.1:41116.service: Deactivated successfully. Mar 17 17:29:01.801408 systemd-logind[1559]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:29:01.801610 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:29:01.804068 systemd-logind[1559]: Removed session 11. Mar 17 17:29:06.811229 systemd[1]: Started sshd@11-10.0.0.72:22-10.0.0.1:43332.service - OpenSSH per-connection server daemon (10.0.0.1:43332). 
Mar 17 17:29:06.851364 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 43332 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:06.852656 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:06.856831 systemd-logind[1559]: New session 12 of user core. Mar 17 17:29:06.867193 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:29:06.974599 sshd[4270]: Connection closed by 10.0.0.1 port 43332 Mar 17 17:29:06.975083 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:06.987160 systemd[1]: Started sshd@12-10.0.0.72:22-10.0.0.1:43342.service - OpenSSH per-connection server daemon (10.0.0.1:43342). Mar 17 17:29:06.987541 systemd[1]: sshd@11-10.0.0.72:22-10.0.0.1:43332.service: Deactivated successfully. Mar 17 17:29:06.989997 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:29:06.991043 systemd-logind[1559]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:29:06.992153 systemd-logind[1559]: Removed session 12. Mar 17 17:29:07.025924 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 43342 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:07.027139 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:07.031089 systemd-logind[1559]: New session 13 of user core. Mar 17 17:29:07.043261 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:29:07.204922 sshd[4287]: Connection closed by 10.0.0.1 port 43342 Mar 17 17:29:07.205339 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:07.217959 systemd[1]: Started sshd@13-10.0.0.72:22-10.0.0.1:43356.service - OpenSSH per-connection server daemon (10.0.0.1:43356). Mar 17 17:29:07.218429 systemd[1]: sshd@12-10.0.0.72:22-10.0.0.1:43342.service: Deactivated successfully. Mar 17 17:29:07.224856 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:29:07.227841 systemd-logind[1559]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:29:07.228884 systemd-logind[1559]: Removed session 13. Mar 17 17:29:07.271272 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 43356 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:07.272693 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:07.276854 systemd-logind[1559]: New session 14 of user core. Mar 17 17:29:07.288264 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:29:07.408308 sshd[4301]: Connection closed by 10.0.0.1 port 43356 Mar 17 17:29:07.408664 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:07.412218 systemd[1]: sshd@13-10.0.0.72:22-10.0.0.1:43356.service: Deactivated successfully. Mar 17 17:29:07.416228 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:29:07.417126 systemd-logind[1559]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:29:07.418299 systemd-logind[1559]: Removed session 14. Mar 17 17:29:12.420156 systemd[1]: Started sshd@14-10.0.0.72:22-10.0.0.1:43358.service - OpenSSH per-connection server daemon (10.0.0.1:43358). 
Mar 17 17:29:12.458978 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 43358 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:12.460187 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:12.464001 systemd-logind[1559]: New session 15 of user core. Mar 17 17:29:12.471210 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:29:12.582076 sshd[4317]: Connection closed by 10.0.0.1 port 43358 Mar 17 17:29:12.582644 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:12.586052 systemd[1]: sshd@14-10.0.0.72:22-10.0.0.1:43358.service: Deactivated successfully. Mar 17 17:29:12.588173 systemd-logind[1559]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:29:12.588661 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:29:12.589993 systemd-logind[1559]: Removed session 15. Mar 17 17:29:17.594153 systemd[1]: Started sshd@15-10.0.0.72:22-10.0.0.1:40762.service - OpenSSH per-connection server daemon (10.0.0.1:40762). Mar 17 17:29:17.636588 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 40762 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:17.638246 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:17.642175 systemd-logind[1559]: New session 16 of user core. Mar 17 17:29:17.651187 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:29:17.776182 sshd[4332]: Connection closed by 10.0.0.1 port 40762 Mar 17 17:29:17.776775 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:17.784372 systemd[1]: Started sshd@16-10.0.0.72:22-10.0.0.1:40778.service - OpenSSH per-connection server daemon (10.0.0.1:40778). Mar 17 17:29:17.784838 systemd[1]: sshd@15-10.0.0.72:22-10.0.0.1:40762.service: Deactivated successfully. Mar 17 17:29:17.787474 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:29:17.788863 systemd-logind[1559]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:29:17.789729 systemd-logind[1559]: Removed session 16. Mar 17 17:29:17.829419 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 40778 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:17.830606 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:17.834817 systemd-logind[1559]: New session 17 of user core. Mar 17 17:29:17.843281 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:29:18.046172 sshd[4347]: Connection closed by 10.0.0.1 port 40778 Mar 17 17:29:18.046557 sshd-session[4341]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:18.058274 systemd[1]: Started sshd@17-10.0.0.72:22-10.0.0.1:40788.service - OpenSSH per-connection server daemon (10.0.0.1:40788). Mar 17 17:29:18.058646 systemd[1]: sshd@16-10.0.0.72:22-10.0.0.1:40778.service: Deactivated successfully. Mar 17 17:29:18.061266 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:29:18.062114 systemd-logind[1559]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:29:18.063062 systemd-logind[1559]: Removed session 17. 
Mar 17 17:29:18.106045 sshd[4354]: Accepted publickey for core from 10.0.0.1 port 40788 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:18.107527 sshd-session[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:18.111743 systemd-logind[1559]: New session 18 of user core. Mar 17 17:29:18.119177 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:29:19.423645 sshd[4360]: Connection closed by 10.0.0.1 port 40788 Mar 17 17:29:19.424319 sshd-session[4354]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:19.437634 systemd[1]: Started sshd@18-10.0.0.72:22-10.0.0.1:40802.service - OpenSSH per-connection server daemon (10.0.0.1:40802). Mar 17 17:29:19.438043 systemd[1]: sshd@17-10.0.0.72:22-10.0.0.1:40788.service: Deactivated successfully. Mar 17 17:29:19.442733 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:29:19.447051 systemd-logind[1559]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:29:19.450884 systemd-logind[1559]: Removed session 18. Mar 17 17:29:19.489875 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 40802 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:19.491225 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:19.495302 systemd-logind[1559]: New session 19 of user core. Mar 17 17:29:19.507265 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:29:19.716050 sshd[4385]: Connection closed by 10.0.0.1 port 40802 Mar 17 17:29:19.716764 sshd-session[4376]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:19.729246 systemd[1]: Started sshd@19-10.0.0.72:22-10.0.0.1:40808.service - OpenSSH per-connection server daemon (10.0.0.1:40808). Mar 17 17:29:19.729649 systemd[1]: sshd@18-10.0.0.72:22-10.0.0.1:40802.service: Deactivated successfully. Mar 17 17:29:19.732160 systemd-logind[1559]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:29:19.733317 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:29:19.734242 systemd-logind[1559]: Removed session 19. Mar 17 17:29:19.771410 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 40808 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:19.772832 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:19.777541 systemd-logind[1559]: New session 20 of user core. Mar 17 17:29:19.784201 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 17:29:19.900292 sshd[4399]: Connection closed by 10.0.0.1 port 40808 Mar 17 17:29:19.900607 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:19.903815 systemd[1]: sshd@19-10.0.0.72:22-10.0.0.1:40808.service: Deactivated successfully. Mar 17 17:29:19.905833 systemd-logind[1559]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:29:19.905858 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:29:19.907108 systemd-logind[1559]: Removed session 20. Mar 17 17:29:24.912251 systemd[1]: Started sshd@20-10.0.0.72:22-10.0.0.1:55176.service - OpenSSH per-connection server daemon (10.0.0.1:55176). 
Mar 17 17:29:24.961388 sshd[4416]: Accepted publickey for core from 10.0.0.1 port 55176 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:24.962775 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:24.967007 systemd-logind[1559]: New session 21 of user core. Mar 17 17:29:24.982289 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 17 17:29:25.104363 sshd[4419]: Connection closed by 10.0.0.1 port 55176 Mar 17 17:29:25.104739 sshd-session[4416]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:25.120120 systemd[1]: sshd@20-10.0.0.72:22-10.0.0.1:55176.service: Deactivated successfully. Mar 17 17:29:25.122652 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:29:25.123543 systemd-logind[1559]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:29:25.124610 systemd-logind[1559]: Removed session 21. Mar 17 17:29:30.121384 systemd[1]: Started sshd@21-10.0.0.72:22-10.0.0.1:55180.service - OpenSSH per-connection server daemon (10.0.0.1:55180). Mar 17 17:29:30.168143 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 55180 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:30.168648 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:30.173639 systemd-logind[1559]: New session 22 of user core. Mar 17 17:29:30.182301 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 17:29:30.318978 sshd[4434]: Connection closed by 10.0.0.1 port 55180 Mar 17 17:29:30.318651 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:30.322748 systemd[1]: sshd@21-10.0.0.72:22-10.0.0.1:55180.service: Deactivated successfully. Mar 17 17:29:30.328183 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:29:30.332030 systemd-logind[1559]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:29:30.333482 systemd-logind[1559]: Removed session 22. Mar 17 17:29:35.329249 systemd[1]: Started sshd@22-10.0.0.72:22-10.0.0.1:56688.service - OpenSSH per-connection server daemon (10.0.0.1:56688). Mar 17 17:29:35.376206 sshd[4449]: Accepted publickey for core from 10.0.0.1 port 56688 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:35.377667 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:35.384445 systemd-logind[1559]: New session 23 of user core. Mar 17 17:29:35.396248 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:29:35.520983 sshd[4452]: Connection closed by 10.0.0.1 port 56688 Mar 17 17:29:35.521423 sshd-session[4449]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:35.540227 systemd[1]: Started sshd@23-10.0.0.72:22-10.0.0.1:56704.service - OpenSSH per-connection server daemon (10.0.0.1:56704). Mar 17 17:29:35.540614 systemd[1]: sshd@22-10.0.0.72:22-10.0.0.1:56688.service: Deactivated successfully. Mar 17 17:29:35.545486 systemd-logind[1559]: Session 23 logged out. Waiting for processes to exit. Mar 17 17:29:35.545635 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:29:35.547312 systemd-logind[1559]: Removed session 23. 
Mar 17 17:29:35.586175 sshd[4461]: Accepted publickey for core from 10.0.0.1 port 56704 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:29:35.587372 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:35.597791 systemd-logind[1559]: New session 24 of user core. Mar 17 17:29:35.607231 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 17 17:29:36.075035 kubelet[2786]: E0317 17:29:36.074998 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:29:38.791474 containerd[1575]: time="2025-03-17T17:29:38.789550937Z" level=info msg="StopContainer for \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\" with timeout 30 (s)" Mar 17 17:29:38.791474 containerd[1575]: time="2025-03-17T17:29:38.790074832Z" level=info msg="Stop container \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\" with signal terminated" Mar 17 17:29:38.824896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247-rootfs.mount: Deactivated successfully. Mar 17 17:29:38.830566 containerd[1575]: time="2025-03-17T17:29:38.830521465Z" level=info msg="StopContainer for \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\" with timeout 2 (s)" Mar 17 17:29:38.831087 containerd[1575]: time="2025-03-17T17:29:38.830905406Z" level=info msg="Stop container \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\" with signal terminated" Mar 17 17:29:38.832089 containerd[1575]: time="2025-03-17T17:29:38.831629212Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:29:38.841475 systemd-networkd[1237]: lxc_health: Link DOWN Mar 17 17:29:38.841485 systemd-networkd[1237]: lxc_health: Lost carrier Mar 17 17:29:38.845170 containerd[1575]: time="2025-03-17T17:29:38.844957857Z" level=info msg="shim disconnected" id=6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247 namespace=k8s.io Mar 17 17:29:38.845170 containerd[1575]: time="2025-03-17T17:29:38.845015094Z" level=warning msg="cleaning up after shim disconnected" id=6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247 namespace=k8s.io Mar 17 17:29:38.845170 containerd[1575]: time="2025-03-17T17:29:38.845024093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:29:38.883193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6-rootfs.mount: Deactivated successfully. 
Mar 17 17:29:38.887521 containerd[1575]: time="2025-03-17T17:29:38.887452391Z" level=info msg="shim disconnected" id=d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6 namespace=k8s.io Mar 17 17:29:38.887521 containerd[1575]: time="2025-03-17T17:29:38.887519228Z" level=warning msg="cleaning up after shim disconnected" id=d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6 namespace=k8s.io Mar 17 17:29:38.887521 containerd[1575]: time="2025-03-17T17:29:38.887528948Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:29:38.896670 containerd[1575]: time="2025-03-17T17:29:38.896626354Z" level=info msg="StopContainer for \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\" returns successfully" Mar 17 17:29:38.901015 containerd[1575]: time="2025-03-17T17:29:38.900881391Z" level=info msg="StopPodSandbox for \"37e44abf22946167e1336954b3a681575b542a2cfb6998b07d4feb26a586c3ad\"" Mar 17 17:29:38.906134 containerd[1575]: time="2025-03-17T17:29:38.903541785Z" level=info msg="StopContainer for \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\" returns successfully" Mar 17 17:29:38.906134 containerd[1575]: time="2025-03-17T17:29:38.904011202Z" level=info msg="StopPodSandbox for \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\"" Mar 17 17:29:38.906275 containerd[1575]: time="2025-03-17T17:29:38.906202978Z" level=info msg="Container to stop \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:29:38.907017 containerd[1575]: time="2025-03-17T17:29:38.906988260Z" level=info msg="Container to stop \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:29:38.907017 containerd[1575]: time="2025-03-17T17:29:38.907012219Z" level=info msg="Container to stop \"78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:29:38.907083 containerd[1575]: time="2025-03-17T17:29:38.907021899Z" level=info msg="Container to stop \"39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:29:38.907083 containerd[1575]: time="2025-03-17T17:29:38.907030578Z" level=info msg="Container to stop \"55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:29:38.907083 containerd[1575]: time="2025-03-17T17:29:38.907039978Z" level=info msg="Container to stop \"7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:29:38.908070 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37e44abf22946167e1336954b3a681575b542a2cfb6998b07d4feb26a586c3ad-shm.mount: Deactivated successfully. Mar 17 17:29:38.910804 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068-shm.mount: Deactivated successfully. 
Mar 17 17:29:38.945211 containerd[1575]: time="2025-03-17T17:29:38.945103444Z" level=info msg="shim disconnected" id=37e44abf22946167e1336954b3a681575b542a2cfb6998b07d4feb26a586c3ad namespace=k8s.io Mar 17 17:29:38.945211 containerd[1575]: time="2025-03-17T17:29:38.945200999Z" level=warning msg="cleaning up after shim disconnected" id=37e44abf22946167e1336954b3a681575b542a2cfb6998b07d4feb26a586c3ad namespace=k8s.io Mar 17 17:29:38.945211 containerd[1575]: time="2025-03-17T17:29:38.945212359Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:29:38.945895 containerd[1575]: time="2025-03-17T17:29:38.945852408Z" level=info msg="shim disconnected" id=13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068 namespace=k8s.io Mar 17 17:29:38.945895 containerd[1575]: time="2025-03-17T17:29:38.945892286Z" level=warning msg="cleaning up after shim disconnected" id=13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068 namespace=k8s.io Mar 17 17:29:38.945895 containerd[1575]: time="2025-03-17T17:29:38.945902886Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:29:38.964599 containerd[1575]: time="2025-03-17T17:29:38.964553437Z" level=info msg="TearDown network for sandbox \"37e44abf22946167e1336954b3a681575b542a2cfb6998b07d4feb26a586c3ad\" successfully" Mar 17 17:29:38.964599 containerd[1575]: time="2025-03-17T17:29:38.964589715Z" level=info msg="StopPodSandbox for \"37e44abf22946167e1336954b3a681575b542a2cfb6998b07d4feb26a586c3ad\" returns successfully" Mar 17 17:29:38.967045 containerd[1575]: time="2025-03-17T17:29:38.967017160Z" level=info msg="TearDown network for sandbox \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\" successfully" Mar 17 17:29:38.967182 containerd[1575]: time="2025-03-17T17:29:38.967045838Z" level=info msg="StopPodSandbox for \"13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068\" returns successfully" Mar 17 17:29:39.163239 kubelet[2786]: I0317 17:29:39.163181 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-etc-cni-netd\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " Mar 17 17:29:39.163239 kubelet[2786]: I0317 17:29:39.163230 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-host-proc-sys-net\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " Mar 17 17:29:39.163683 kubelet[2786]: I0317 17:29:39.163256 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc5gt\" (UniqueName: \"kubernetes.io/projected/969cb279-5454-45a0-901f-c96c19ca75f7-kube-api-access-bc5gt\") pod \"969cb279-5454-45a0-901f-c96c19ca75f7\" (UID: \"969cb279-5454-45a0-901f-c96c19ca75f7\") " Mar 17 17:29:39.163683 kubelet[2786]: I0317 17:29:39.163275 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k55pg\" (UniqueName: \"kubernetes.io/projected/c708f516-f52e-49f7-a1be-3d5d226647a7-kube-api-access-k55pg\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " Mar 17 17:29:39.163683 kubelet[2786]: I0317 17:29:39.163293 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/969cb279-5454-45a0-901f-c96c19ca75f7-cilium-config-path\") pod \"969cb279-5454-45a0-901f-c96c19ca75f7\" (UID: \"969cb279-5454-45a0-901f-c96c19ca75f7\") "
Mar 17 17:29:39.163683 kubelet[2786]: I0317 17:29:39.163380 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-cilium-cgroup\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " Mar 17 17:29:39.163683 kubelet[2786]: I0317 17:29:39.163394 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-bpf-maps\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " Mar 17 17:29:39.163683 kubelet[2786]: I0317 17:29:39.163411 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-host-proc-sys-kernel\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " Mar 17 17:29:39.163817 kubelet[2786]: I0317 17:29:39.163425 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-cni-path\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " Mar 17 17:29:39.163817 kubelet[2786]: I0317 17:29:39.163438 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-lib-modules\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " Mar 17 17:29:39.163817 kubelet[2786]: I0317 17:29:39.163465 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c708f516-f52e-49f7-a1be-3d5d226647a7-cilium-config-path\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " Mar 17 17:29:39.163817 kubelet[2786]: I0317 17:29:39.163486 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c708f516-f52e-49f7-a1be-3d5d226647a7-clustermesh-secrets\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " Mar 17 17:29:39.163817 kubelet[2786]: I0317 17:29:39.163500 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-xtables-lock\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " Mar 17 17:29:39.163817 kubelet[2786]: I0317 17:29:39.163517 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c708f516-f52e-49f7-a1be-3d5d226647a7-hubble-tls\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " Mar 17 17:29:39.163977 kubelet[2786]: I0317 17:29:39.163532 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-cilium-run\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") "
Mar 17 17:29:39.163977 kubelet[2786]: I0317 17:29:39.163548 2786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-hostproc\") pod \"c708f516-f52e-49f7-a1be-3d5d226647a7\" (UID: \"c708f516-f52e-49f7-a1be-3d5d226647a7\") " Mar 17 17:29:39.168310 kubelet[2786]: I0317 17:29:39.168271 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:29:39.168347 kubelet[2786]: I0317 17:29:39.168314 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:29:39.168347 kubelet[2786]: I0317 17:29:39.168276 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:29:39.168347 kubelet[2786]: I0317 17:29:39.168344 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:29:39.169968 kubelet[2786]: I0317 17:29:39.169812 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:29:39.169968 kubelet[2786]: I0317 17:29:39.169862 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-cni-path" (OuterVolumeSpecName: "cni-path") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:29:39.169968 kubelet[2786]: I0317 17:29:39.169881 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-hostproc" (OuterVolumeSpecName: "hostproc") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 17:29:39.171694 kubelet[2786]: I0317 17:29:39.171579 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/969cb279-5454-45a0-901f-c96c19ca75f7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "969cb279-5454-45a0-901f-c96c19ca75f7" (UID: "969cb279-5454-45a0-901f-c96c19ca75f7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:29:39.172854 kubelet[2786]: I0317 17:29:39.172209 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/969cb279-5454-45a0-901f-c96c19ca75f7-kube-api-access-bc5gt" (OuterVolumeSpecName: "kube-api-access-bc5gt") pod "969cb279-5454-45a0-901f-c96c19ca75f7" (UID: "969cb279-5454-45a0-901f-c96c19ca75f7"). InnerVolumeSpecName "kube-api-access-bc5gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:29:39.172854 kubelet[2786]: I0317 17:29:39.172259 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:29:39.172854 kubelet[2786]: I0317 17:29:39.172276 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:29:39.174183 kubelet[2786]: I0317 17:29:39.174154 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:29:39.174396 kubelet[2786]: I0317 17:29:39.174366 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c708f516-f52e-49f7-a1be-3d5d226647a7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:29:39.174621 kubelet[2786]: I0317 17:29:39.174601 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c708f516-f52e-49f7-a1be-3d5d226647a7-kube-api-access-k55pg" (OuterVolumeSpecName: "kube-api-access-k55pg") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "kube-api-access-k55pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:29:39.174737 kubelet[2786]: I0317 17:29:39.174696 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c708f516-f52e-49f7-a1be-3d5d226647a7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 17:29:39.179505 kubelet[2786]: I0317 17:29:39.179468 2786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c708f516-f52e-49f7-a1be-3d5d226647a7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c708f516-f52e-49f7-a1be-3d5d226647a7" (UID: "c708f516-f52e-49f7-a1be-3d5d226647a7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:29:39.263845 kubelet[2786]: I0317 17:29:39.263797 2786 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.263845 kubelet[2786]: I0317 17:29:39.263836 2786 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c708f516-f52e-49f7-a1be-3d5d226647a7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.263845 kubelet[2786]: I0317 17:29:39.263848 2786 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c708f516-f52e-49f7-a1be-3d5d226647a7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.263845 kubelet[2786]: I0317 17:29:39.263856 2786 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.264073 kubelet[2786]: I0317 17:29:39.263866 2786 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c708f516-f52e-49f7-a1be-3d5d226647a7-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.264073 kubelet[2786]: I0317 17:29:39.263874 2786 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.264073 kubelet[2786]: I0317 17:29:39.263882 2786 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.264073 kubelet[2786]: I0317 17:29:39.263890 2786 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.264073 kubelet[2786]: I0317 17:29:39.263897 2786 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.264073 kubelet[2786]: I0317 17:29:39.263905 2786 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bc5gt\" (UniqueName: \"kubernetes.io/projected/969cb279-5454-45a0-901f-c96c19ca75f7-kube-api-access-bc5gt\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.264073 kubelet[2786]: I0317 17:29:39.263914 2786 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k55pg\" (UniqueName: \"kubernetes.io/projected/c708f516-f52e-49f7-a1be-3d5d226647a7-kube-api-access-k55pg\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.264073 kubelet[2786]: I0317 17:29:39.263922 2786 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/969cb279-5454-45a0-901f-c96c19ca75f7-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 17:29:39.264232 kubelet[2786]: I0317 17:29:39.263950 2786 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.264232 kubelet[2786]: I0317 17:29:39.263959 2786 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.264232 kubelet[2786]: I0317 17:29:39.263966 2786 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.264232 kubelet[2786]: I0317 17:29:39.263974 2786 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c708f516-f52e-49f7-a1be-3d5d226647a7-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:29:39.285921 kubelet[2786]: I0317 17:29:39.285878 2786 scope.go:117] "RemoveContainer" containerID="6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247" Mar 17 17:29:39.289122 containerd[1575]: time="2025-03-17T17:29:39.288706159Z" level=info msg="RemoveContainer for \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\"" Mar 17 17:29:39.295723 containerd[1575]: time="2025-03-17T17:29:39.295674810Z" level=info msg="RemoveContainer for \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\" returns successfully" Mar 17 17:29:39.296101 kubelet[2786]: I0317 17:29:39.296065 2786 scope.go:117] "RemoveContainer" containerID="6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247" Mar 17 17:29:39.296321 containerd[1575]: time="2025-03-17T17:29:39.296281744Z" level=error msg="ContainerStatus for \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\": not found" Mar 17 17:29:39.302350 kubelet[2786]: E0317 17:29:39.302308 2786 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\": not found" containerID="6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247" Mar 17 17:29:39.302455 kubelet[2786]: I0317 17:29:39.302358 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247"} err="failed to get container status \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e3a4952ecc32fb5d6af685df810131e1e10e7af943910e5cdae367badb76247\": not found" Mar 17 17:29:39.302514 kubelet[2786]: I0317 17:29:39.302458 2786 scope.go:117] "RemoveContainer" containerID="d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6" Mar 17 17:29:39.309806 containerd[1575]: time="2025-03-17T17:29:39.309769706Z" level=info
msg="RemoveContainer for \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\"" Mar 17 17:29:39.315125 containerd[1575]: time="2025-03-17T17:29:39.314993195Z" level=info msg="RemoveContainer for \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\" returns successfully" Mar 17 17:29:39.316234 kubelet[2786]: I0317 17:29:39.316206 2786 scope.go:117] "RemoveContainer" containerID="7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04" Mar 17 17:29:39.317365 containerd[1575]: time="2025-03-17T17:29:39.317214577Z" level=info msg="RemoveContainer for \"7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04\"" Mar 17 17:29:39.334260 containerd[1575]: time="2025-03-17T17:29:39.334171826Z" level=info msg="RemoveContainer for \"7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04\" returns successfully" Mar 17 17:29:39.334493 kubelet[2786]: I0317 17:29:39.334433 2786 scope.go:117] "RemoveContainer" containerID="55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5" Mar 17 17:29:39.335525 containerd[1575]: time="2025-03-17T17:29:39.335479288Z" level=info msg="RemoveContainer for \"55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5\"" Mar 17 17:29:39.350217 containerd[1575]: time="2025-03-17T17:29:39.350153878Z" level=info msg="RemoveContainer for \"55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5\" returns successfully" Mar 17 17:29:39.350964 kubelet[2786]: I0317 17:29:39.350370 2786 scope.go:117] "RemoveContainer" containerID="39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89" Mar 17 17:29:39.352205 containerd[1575]: time="2025-03-17T17:29:39.351918880Z" level=info msg="RemoveContainer for \"39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89\"" Mar 17 17:29:39.354744 containerd[1575]: time="2025-03-17T17:29:39.354709556Z" level=info msg="RemoveContainer for \"39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89\" returns successfully" Mar 17 17:29:39.355071 kubelet[2786]: I0317 17:29:39.355046 2786 scope.go:117] "RemoveContainer" containerID="78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f" Mar 17 17:29:39.356205 containerd[1575]: time="2025-03-17T17:29:39.355976260Z" level=info msg="RemoveContainer for \"78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f\"" Mar 17 17:29:39.358646 containerd[1575]: time="2025-03-17T17:29:39.358603784Z" level=info msg="RemoveContainer for \"78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f\" returns successfully" Mar 17 17:29:39.358968 kubelet[2786]: I0317 17:29:39.358944 2786 scope.go:117] "RemoveContainer" containerID="d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6" Mar 17 17:29:39.359251 containerd[1575]: time="2025-03-17T17:29:39.359173318Z" level=error msg="ContainerStatus for \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\": not found" Mar 17 17:29:39.359314 kubelet[2786]: E0317 17:29:39.359290 2786 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\": not found" containerID="d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6" Mar 17 17:29:39.359378 kubelet[2786]: I0317 17:29:39.359319 2786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6"} err="failed to get container status \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0847d9f1615cc5595c37d82a0a0adc0536519680cefbd42b2c1b79ab38307c6\": not found" Mar 17 17:29:39.359378 kubelet[2786]: I0317 17:29:39.359337 2786 scope.go:117] "RemoveContainer" containerID="7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04" Mar 17 17:29:39.359553 containerd[1575]: time="2025-03-17T17:29:39.359520183Z" level=error msg="ContainerStatus for \"7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04\": not found" Mar 17 17:29:39.359652 kubelet[2786]: E0317 17:29:39.359635 2786 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04\": not found" containerID="7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04" Mar 17 17:29:39.359703 kubelet[2786]: I0317 17:29:39.359656 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04"} err="failed to get container status \"7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04\": rpc error: code = NotFound desc = an error occurred when try to find container \"7de91bc1f1efa4f01275d2c077ade2a74e05d3eb2849910c95656e0efe379e04\": not found" Mar 17 17:29:39.359703 kubelet[2786]: I0317 17:29:39.359692 2786 scope.go:117] "RemoveContainer" containerID="55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5" Mar 17 17:29:39.360008 containerd[1575]: time="2025-03-17T17:29:39.359886327Z" level=error msg="ContainerStatus for \"55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5\": not found" Mar 17 17:29:39.360045 kubelet[2786]: E0317 17:29:39.360011 2786 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5\": not found" containerID="55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5" Mar 17 17:29:39.360045 kubelet[2786]: I0317 17:29:39.360031 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5"} err="failed to get container status \"55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"55c14d40278d32554f0ceef5fb0747ee217af7c4b3e1781dc015aa210e45f2e5\": not found" Mar 17 17:29:39.360045 kubelet[2786]: I0317 17:29:39.360047 2786 scope.go:117] "RemoveContainer" containerID="39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89" Mar 17 17:29:39.360544 containerd[1575]: time="2025-03-17T17:29:39.360431263Z" level=error msg="ContainerStatus for 
\"39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89\": not found" Mar 17 17:29:39.360644 kubelet[2786]: E0317 17:29:39.360586 2786 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89\": not found" containerID="39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89" Mar 17 17:29:39.360644 kubelet[2786]: I0317 17:29:39.360609 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89"} err="failed to get container status \"39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89\": rpc error: code = NotFound desc = an error occurred when try to find container \"39e784c5c8221f01f5ae771e6535ae464e39415912d14ecc6709ac52b0066d89\": not found" Mar 17 17:29:39.360644 kubelet[2786]: I0317 17:29:39.360626 2786 scope.go:117] "RemoveContainer" containerID="78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f" Mar 17 17:29:39.361031 containerd[1575]: time="2025-03-17T17:29:39.360990118Z" level=error msg="ContainerStatus for \"78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f\": not found" Mar 17 17:29:39.361145 kubelet[2786]: E0317 17:29:39.361102 2786 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f\": not found" containerID="78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f" Mar 17 17:29:39.361191 kubelet[2786]: I0317 17:29:39.361150 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f"} err="failed to get container status \"78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"78fd233d4361e61d70a4ecbd04ce07de9564476af9566206de1fcc0d7ffe3b1f\": not found" Mar 17 17:29:39.811920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37e44abf22946167e1336954b3a681575b542a2cfb6998b07d4feb26a586c3ad-rootfs.mount: Deactivated successfully. Mar 17 17:29:39.812081 systemd[1]: var-lib-kubelet-pods-969cb279\x2d5454\x2d45a0\x2d901f\x2dc96c19ca75f7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbc5gt.mount: Deactivated successfully. Mar 17 17:29:39.812168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13ac97c9a38e44949e3342ba4600809f6be8e01df0a084ce21fced6f2ab37068-rootfs.mount: Deactivated successfully. Mar 17 17:29:39.812249 systemd[1]: var-lib-kubelet-pods-c708f516\x2df52e\x2d49f7\x2da1be\x2d3d5d226647a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk55pg.mount: Deactivated successfully. Mar 17 17:29:39.812326 systemd[1]: var-lib-kubelet-pods-c708f516\x2df52e\x2d49f7\x2da1be\x2d3d5d226647a7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 17:29:39.812409 systemd[1]: var-lib-kubelet-pods-c708f516\x2df52e\x2d49f7\x2da1be\x2d3d5d226647a7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 17:29:40.076338 kubelet[2786]: I0317 17:29:40.076235 2786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="969cb279-5454-45a0-901f-c96c19ca75f7" path="/var/lib/kubelet/pods/969cb279-5454-45a0-901f-c96c19ca75f7/volumes"
Mar 17 17:29:40.076671 kubelet[2786]: I0317 17:29:40.076640 2786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c708f516-f52e-49f7-a1be-3d5d226647a7" path="/var/lib/kubelet/pods/c708f516-f52e-49f7-a1be-3d5d226647a7/volumes"
Mar 17 17:29:40.140914 kubelet[2786]: E0317 17:29:40.140650 2786 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 17:29:40.744871 sshd[4467]: Connection closed by 10.0.0.1 port 56704
Mar 17 17:29:40.745379 sshd-session[4461]: pam_unix(sshd:session): session closed for user core
Mar 17 17:29:40.763255 systemd[1]: Started sshd@24-10.0.0.72:22-10.0.0.1:56714.service - OpenSSH per-connection server daemon (10.0.0.1:56714).
Mar 17 17:29:40.763672 systemd[1]: sshd@23-10.0.0.72:22-10.0.0.1:56704.service: Deactivated successfully.
Mar 17 17:29:40.768291 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 17:29:40.770136 systemd-logind[1559]: Session 24 logged out. Waiting for processes to exit.
Mar 17 17:29:40.771821 systemd-logind[1559]: Removed session 24.
Mar 17 17:29:40.805553 sshd[4624]: Accepted publickey for core from 10.0.0.1 port 56714 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ
Mar 17 17:29:40.806773 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:29:40.810656 systemd-logind[1559]: New session 25 of user core.
Mar 17 17:29:40.820298 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 17 17:29:41.649976 kubelet[2786]: I0317 17:29:41.649472 2786 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:29:41Z","lastTransitionTime":"2025-03-17T17:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 17:29:41.686681 sshd[4630]: Connection closed by 10.0.0.1 port 56714
Mar 17 17:29:41.691384 sshd-session[4624]: pam_unix(sshd:session): session closed for user core
Mar 17 17:29:41.705817 systemd[1]: Started sshd@25-10.0.0.72:22-10.0.0.1:56730.service - OpenSSH per-connection server daemon (10.0.0.1:56730).
Mar 17 17:29:41.706434 systemd[1]: sshd@24-10.0.0.72:22-10.0.0.1:56714.service: Deactivated successfully.
Mar 17 17:29:41.714807 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 17:29:41.718524 kubelet[2786]: I0317 17:29:41.718479 2786 topology_manager.go:215] "Topology Admit Handler" podUID="8c236958-a2c1-4e5c-be6e-a0d4225eb58e" podNamespace="kube-system" podName="cilium-qrrxc"
Mar 17 17:29:41.718628 kubelet[2786]: E0317 17:29:41.718612 2786 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c708f516-f52e-49f7-a1be-3d5d226647a7" containerName="apply-sysctl-overwrites"
Mar 17 17:29:41.718628 kubelet[2786]: E0317 17:29:41.718626 2786 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c708f516-f52e-49f7-a1be-3d5d226647a7" containerName="mount-bpf-fs"
Mar 17 17:29:41.718698 kubelet[2786]: E0317 17:29:41.718633 2786 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c708f516-f52e-49f7-a1be-3d5d226647a7" containerName="clean-cilium-state"
Mar 17 17:29:41.718698 kubelet[2786]: E0317 17:29:41.718639 2786 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c708f516-f52e-49f7-a1be-3d5d226647a7" containerName="mount-cgroup"
Mar 17 17:29:41.718698 kubelet[2786]: E0317 17:29:41.718646 2786 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="969cb279-5454-45a0-901f-c96c19ca75f7" containerName="cilium-operator"
Mar 17 17:29:41.718698 kubelet[2786]: E0317 17:29:41.718651 2786 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c708f516-f52e-49f7-a1be-3d5d226647a7" containerName="cilium-agent"
Mar 17 17:29:41.718698 kubelet[2786]: I0317 17:29:41.718673 2786 memory_manager.go:354] "RemoveStaleState removing state" podUID="969cb279-5454-45a0-901f-c96c19ca75f7" containerName="cilium-operator"
Mar 17 17:29:41.718698 kubelet[2786]: I0317 17:29:41.718679 2786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c708f516-f52e-49f7-a1be-3d5d226647a7" containerName="cilium-agent"
Mar 17 17:29:41.724003 systemd-logind[1559]: Session 25 logged out. Waiting for processes to exit.
Mar 17 17:29:41.730121 systemd-logind[1559]: Removed session 25.
Mar 17 17:29:41.731865 kubelet[2786]: W0317 17:29:41.730569 2786 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Mar 17 17:29:41.731865 kubelet[2786]: W0317 17:29:41.730552 2786 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Mar 17 17:29:41.731865 kubelet[2786]: E0317 17:29:41.730833 2786 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Mar 17 17:29:41.732078 kubelet[2786]: W0317 17:29:41.732027 2786 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Mar 17 17:29:41.732078 kubelet[2786]: E0317 17:29:41.732072 2786 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Mar 17 17:29:41.749921 kubelet[2786]: E0317 17:29:41.749871 2786 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Mar 17 17:29:41.760657 sshd[4638]: Accepted publickey for core from 10.0.0.1 port 56730 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ
Mar 17 17:29:41.762098 sshd-session[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:29:41.767067 systemd-logind[1559]: New session 26 of user core.
Mar 17 17:29:41.773348 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 17 17:29:41.824082 sshd[4644]: Connection closed by 10.0.0.1 port 56730
Mar 17 17:29:41.824430 sshd-session[4638]: pam_unix(sshd:session): session closed for user core
Mar 17 17:29:41.831176 systemd[1]: Started sshd@26-10.0.0.72:22-10.0.0.1:56736.service - OpenSSH per-connection server daemon (10.0.0.1:56736).
Mar 17 17:29:41.831587 systemd[1]: sshd@25-10.0.0.72:22-10.0.0.1:56730.service: Deactivated successfully.
Mar 17 17:29:41.834569 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 17:29:41.835187 systemd-logind[1559]: Session 26 logged out. Waiting for processes to exit.
Mar 17 17:29:41.836714 systemd-logind[1559]: Removed session 26.
Mar 17 17:29:41.870854 sshd[4647]: Accepted publickey for core from 10.0.0.1 port 56736 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ
Mar 17 17:29:41.872230 sshd-session[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:29:41.876003 kubelet[2786]: I0317 17:29:41.875832 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-cilium-cgroup\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876003 kubelet[2786]: I0317 17:29:41.875870 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-cilium-config-path\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876003 kubelet[2786]: I0317 17:29:41.875944 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-host-proc-sys-kernel\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876003 kubelet[2786]: I0317 17:29:41.875990 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-etc-cni-netd\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876003 kubelet[2786]: I0317 17:29:41.876008 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-xtables-lock\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876181 kubelet[2786]: I0317 17:29:41.876024 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-cilium-run\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876181 kubelet[2786]: I0317 17:29:41.876041 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-lib-modules\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876181 kubelet[2786]: I0317 17:29:41.876055 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-hubble-tls\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876181 kubelet[2786]: I0317 17:29:41.876073 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-bpf-maps\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876181 kubelet[2786]: I0317 17:29:41.876088 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6s9r\" (UniqueName: \"kubernetes.io/projected/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-kube-api-access-r6s9r\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876181 kubelet[2786]: I0317 17:29:41.876106 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-clustermesh-secrets\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876344 kubelet[2786]: I0317 17:29:41.876125 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-hostproc\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876344 kubelet[2786]: I0317 17:29:41.876140 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-cni-path\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876344 kubelet[2786]: I0317 17:29:41.876155 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-host-proc-sys-net\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.876344 kubelet[2786]: I0317 17:29:41.876174 2786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-cilium-ipsec-secrets\") pod \"cilium-qrrxc\" (UID: \"8c236958-a2c1-4e5c-be6e-a0d4225eb58e\") " pod="kube-system/cilium-qrrxc"
Mar 17 17:29:41.877003 systemd-logind[1559]: New session 27 of user core.
Mar 17 17:29:41.886209 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 17 17:29:42.979827 kubelet[2786]: E0317 17:29:42.979795 2786 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Mar 17 17:29:42.979827 kubelet[2786]: E0317 17:29:42.979815 2786 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Mar 17 17:29:42.980252 kubelet[2786]: E0317 17:29:42.979881 2786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-clustermesh-secrets podName:8c236958-a2c1-4e5c-be6e-a0d4225eb58e nodeName:}" failed. No retries permitted until 2025-03-17 17:29:43.479860371 +0000 UTC m=+83.510102084 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-clustermesh-secrets") pod "cilium-qrrxc" (UID: "8c236958-a2c1-4e5c-be6e-a0d4225eb58e") : failed to sync secret cache: timed out waiting for the condition
Mar 17 17:29:42.980252 kubelet[2786]: E0317 17:29:42.979897 2786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-cilium-ipsec-secrets podName:8c236958-a2c1-4e5c-be6e-a0d4225eb58e nodeName:}" failed. No retries permitted until 2025-03-17 17:29:43.47989085 +0000 UTC m=+83.510132603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/8c236958-a2c1-4e5c-be6e-a0d4225eb58e-cilium-ipsec-secrets") pod "cilium-qrrxc" (UID: "8c236958-a2c1-4e5c-be6e-a0d4225eb58e") : failed to sync secret cache: timed out waiting for the condition
Mar 17 17:29:43.530066 kubelet[2786]: E0317 17:29:43.530030 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:29:43.530585 containerd[1575]: time="2025-03-17T17:29:43.530527787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qrrxc,Uid:8c236958-a2c1-4e5c-be6e-a0d4225eb58e,Namespace:kube-system,Attempt:0,}"
Mar 17 17:29:43.552318 containerd[1575]: time="2025-03-17T17:29:43.552172338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:29:43.552318 containerd[1575]: time="2025-03-17T17:29:43.552238496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:29:43.552318 containerd[1575]: time="2025-03-17T17:29:43.552256136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:29:43.552620 containerd[1575]: time="2025-03-17T17:29:43.552352293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:29:43.583818 containerd[1575]: time="2025-03-17T17:29:43.583777093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qrrxc,Uid:8c236958-a2c1-4e5c-be6e-a0d4225eb58e,Namespace:kube-system,Attempt:0,} returns sandbox id \"123f769da7e0d33e359ad71f0dd4c12e69b44a361a9f803b076242d59c710784\""
Mar 17 17:29:43.584627 kubelet[2786]: E0317 17:29:43.584603 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:29:43.588219 containerd[1575]: time="2025-03-17T17:29:43.588184313Z" level=info msg="CreateContainer within sandbox \"123f769da7e0d33e359ad71f0dd4c12e69b44a361a9f803b076242d59c710784\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:29:43.605474 containerd[1575]: time="2025-03-17T17:29:43.605416004Z" level=info msg="CreateContainer within sandbox \"123f769da7e0d33e359ad71f0dd4c12e69b44a361a9f803b076242d59c710784\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4f746c9586f202198deab19b767851dc2562c2e20b31a8397820bb67b5f765b9\""
Mar 17 17:29:43.610552 containerd[1575]: time="2025-03-17T17:29:43.610486523Z" level=info msg="StartContainer for \"4f746c9586f202198deab19b767851dc2562c2e20b31a8397820bb67b5f765b9\""
Mar 17 17:29:43.661349 containerd[1575]: time="2025-03-17T17:29:43.661308786Z" level=info msg="StartContainer for \"4f746c9586f202198deab19b767851dc2562c2e20b31a8397820bb67b5f765b9\" returns successfully"
Mar 17 17:29:43.701477 containerd[1575]: time="2025-03-17T17:29:43.701255235Z" level=info msg="shim disconnected" id=4f746c9586f202198deab19b767851dc2562c2e20b31a8397820bb67b5f765b9 namespace=k8s.io
Mar 17 17:29:43.701477 containerd[1575]: time="2025-03-17T17:29:43.701311553Z" level=warning msg="cleaning up after shim disconnected" id=4f746c9586f202198deab19b767851dc2562c2e20b31a8397820bb67b5f765b9 namespace=k8s.io
Mar 17 17:29:43.701477 containerd[1575]: time="2025-03-17T17:29:43.701328073Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:29:44.302176 kubelet[2786]: E0317 17:29:44.302135 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:29:44.305442 containerd[1575]: time="2025-03-17T17:29:44.305171735Z" level=info msg="CreateContainer within sandbox \"123f769da7e0d33e359ad71f0dd4c12e69b44a361a9f803b076242d59c710784\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:29:44.316900 containerd[1575]: time="2025-03-17T17:29:44.316207776Z" level=info msg="CreateContainer within sandbox \"123f769da7e0d33e359ad71f0dd4c12e69b44a361a9f803b076242d59c710784\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"51bc4a73a2c70deb2ed84e8adf106a55248e9846e4421b0ceded771dc90ee91e\""
Mar 17 17:29:44.317703 containerd[1575]: time="2025-03-17T17:29:44.317675133Z" level=info msg="StartContainer for \"51bc4a73a2c70deb2ed84e8adf106a55248e9846e4421b0ceded771dc90ee91e\""
Mar 17 17:29:44.367126 containerd[1575]: time="2025-03-17T17:29:44.367009066Z" level=info msg="StartContainer for \"51bc4a73a2c70deb2ed84e8adf106a55248e9846e4421b0ceded771dc90ee91e\" returns successfully"
Mar 17 17:29:44.397245 containerd[1575]: time="2025-03-17T17:29:44.396997718Z" level=info msg="shim disconnected" id=51bc4a73a2c70deb2ed84e8adf106a55248e9846e4421b0ceded771dc90ee91e namespace=k8s.io
Mar 17 17:29:44.397245 containerd[1575]: time="2025-03-17T17:29:44.397063076Z" level=warning msg="cleaning up after shim disconnected" id=51bc4a73a2c70deb2ed84e8adf106a55248e9846e4421b0ceded771dc90ee91e namespace=k8s.io
Mar 17 17:29:44.397245 containerd[1575]: time="2025-03-17T17:29:44.397072956Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:29:44.490836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1025179808.mount: Deactivated successfully.
Mar 17 17:29:45.142305 kubelet[2786]: E0317 17:29:45.142262 2786 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 17:29:45.305818 kubelet[2786]: E0317 17:29:45.305761 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:29:45.310624 containerd[1575]: time="2025-03-17T17:29:45.310565343Z" level=info msg="CreateContainer within sandbox \"123f769da7e0d33e359ad71f0dd4c12e69b44a361a9f803b076242d59c710784\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:29:45.331765 containerd[1575]: time="2025-03-17T17:29:45.331491996Z" level=info msg="CreateContainer within sandbox \"123f769da7e0d33e359ad71f0dd4c12e69b44a361a9f803b076242d59c710784\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b7a0f764495130bc740fd70277524308cb51077adf9ef948b9f8e5a6304e7cc5\""
Mar 17 17:29:45.333133 containerd[1575]: time="2025-03-17T17:29:45.332158098Z" level=info msg="StartContainer for \"b7a0f764495130bc740fd70277524308cb51077adf9ef948b9f8e5a6304e7cc5\""
Mar 17 17:29:45.387645 containerd[1575]: time="2025-03-17T17:29:45.387601208Z" level=info msg="StartContainer for \"b7a0f764495130bc740fd70277524308cb51077adf9ef948b9f8e5a6304e7cc5\" returns successfully"
Mar 17 17:29:45.411318 containerd[1575]: time="2025-03-17T17:29:45.411190671Z" level=info msg="shim disconnected" id=b7a0f764495130bc740fd70277524308cb51077adf9ef948b9f8e5a6304e7cc5 namespace=k8s.io
Mar 17 17:29:45.411318 containerd[1575]: time="2025-03-17T17:29:45.411247350Z" level=warning msg="cleaning up after shim disconnected" id=b7a0f764495130bc740fd70277524308cb51077adf9ef948b9f8e5a6304e7cc5 namespace=k8s.io
Mar 17 17:29:45.411318 containerd[1575]: time="2025-03-17T17:29:45.411259990Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:29:45.490824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7a0f764495130bc740fd70277524308cb51077adf9ef948b9f8e5a6304e7cc5-rootfs.mount: Deactivated successfully.
Mar 17 17:29:46.309393 kubelet[2786]: E0317 17:29:46.309365 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:29:46.313738 containerd[1575]: time="2025-03-17T17:29:46.313492038Z" level=info msg="CreateContainer within sandbox \"123f769da7e0d33e359ad71f0dd4c12e69b44a361a9f803b076242d59c710784\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:29:46.325384 containerd[1575]: time="2025-03-17T17:29:46.324382822Z" level=info msg="CreateContainer within sandbox \"123f769da7e0d33e359ad71f0dd4c12e69b44a361a9f803b076242d59c710784\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eb8eb0a704bc449c859b95a00e3fbc093d9c130f8f3d2f55bcca6e779738d159\""
Mar 17 17:29:46.327180 containerd[1575]: time="2025-03-17T17:29:46.327133318Z" level=info msg="StartContainer for \"eb8eb0a704bc449c859b95a00e3fbc093d9c130f8f3d2f55bcca6e779738d159\""
Mar 17 17:29:46.376770 containerd[1575]: time="2025-03-17T17:29:46.376687436Z" level=info msg="StartContainer for \"eb8eb0a704bc449c859b95a00e3fbc093d9c130f8f3d2f55bcca6e779738d159\" returns successfully"
Mar 17 17:29:46.393130 containerd[1575]: time="2025-03-17T17:29:46.393044612Z" level=info msg="shim disconnected" id=eb8eb0a704bc449c859b95a00e3fbc093d9c130f8f3d2f55bcca6e779738d159 namespace=k8s.io
Mar 17 17:29:46.393130 containerd[1575]: time="2025-03-17T17:29:46.393126970Z" level=warning msg="cleaning up after shim disconnected" id=eb8eb0a704bc449c859b95a00e3fbc093d9c130f8f3d2f55bcca6e779738d159 namespace=k8s.io
Mar 17 17:29:46.393337 containerd[1575]: time="2025-03-17T17:29:46.393135970Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:29:46.491098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb8eb0a704bc449c859b95a00e3fbc093d9c130f8f3d2f55bcca6e779738d159-rootfs.mount: Deactivated successfully.
Mar 17 17:29:47.313792 kubelet[2786]: E0317 17:29:47.313749 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:29:47.317339 containerd[1575]: time="2025-03-17T17:29:47.316846012Z" level=info msg="CreateContainer within sandbox \"123f769da7e0d33e359ad71f0dd4c12e69b44a361a9f803b076242d59c710784\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:29:47.335862 containerd[1575]: time="2025-03-17T17:29:47.335797537Z" level=info msg="CreateContainer within sandbox \"123f769da7e0d33e359ad71f0dd4c12e69b44a361a9f803b076242d59c710784\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3dd280a5dbecf0d4bf0e0124704fb6c6581e173047c4b6d6d3ed96042a8a2e5c\""
Mar 17 17:29:47.338042 containerd[1575]: time="2025-03-17T17:29:47.337130629Z" level=info msg="StartContainer for \"3dd280a5dbecf0d4bf0e0124704fb6c6581e173047c4b6d6d3ed96042a8a2e5c\""
Mar 17 17:29:47.402676 containerd[1575]: time="2025-03-17T17:29:47.402612784Z" level=info msg="StartContainer for \"3dd280a5dbecf0d4bf0e0124704fb6c6581e173047c4b6d6d3ed96042a8a2e5c\" returns successfully"
Mar 17 17:29:47.659037 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 17 17:29:48.322738 kubelet[2786]: E0317 17:29:48.322700 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:29:48.338626 kubelet[2786]: I0317 17:29:48.337686 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qrrxc" podStartSLOduration=7.337668913 podStartE2EDuration="7.337668913s" podCreationTimestamp="2025-03-17 17:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:29:48.337521515 +0000 UTC m=+88.367763268" watchObservedRunningTime="2025-03-17 17:29:48.337668913 +0000 UTC m=+88.367910626"
Mar 17 17:29:49.532166 kubelet[2786]: E0317 17:29:49.532102 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:29:50.494170 systemd-networkd[1237]: lxc_health: Link UP
Mar 17 17:29:50.506528 systemd-networkd[1237]: lxc_health: Gained carrier
Mar 17 17:29:51.531736 kubelet[2786]: E0317 17:29:51.531688 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:29:51.840124 systemd-networkd[1237]: lxc_health: Gained IPv6LL
Mar 17 17:29:52.332401 kubelet[2786]: E0317 17:29:52.332321 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:29:53.334229 kubelet[2786]: E0317 17:29:53.334192 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:29:54.074693 kubelet[2786]: E0317 17:29:54.074643 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:29:56.801839 sshd[4653]: Connection closed by 10.0.0.1 port 56736
Mar 17 17:29:56.803047 sshd-session[4647]: pam_unix(sshd:session): session closed for user core
Mar 17 17:29:56.806232 systemd-logind[1559]: Session 27 logged out. Waiting for processes to exit.
Mar 17 17:29:56.806743 systemd[1]: sshd@26-10.0.0.72:22-10.0.0.1:56736.service: Deactivated successfully.
Mar 17 17:29:56.809327 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 17:29:56.810509 systemd-logind[1559]: Removed session 27.