Dec 13 13:27:24.873866 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 13:27:24.873887 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024
Dec 13 13:27:24.873897 kernel: KASLR enabled
Dec 13 13:27:24.873902 kernel: efi: EFI v2.7 by EDK II
Dec 13 13:27:24.873908 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Dec 13 13:27:24.873914 kernel: random: crng init done
Dec 13 13:27:24.873921 kernel: secureboot: Secure boot disabled
Dec 13 13:27:24.873926 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:27:24.873932 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Dec 13 13:27:24.873939 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 13:27:24.873945 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:24.873951 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:24.873957 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:24.873963 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:24.873970 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:24.873977 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:24.873984 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:24.873990 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:24.873996 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:24.874002 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 13:27:24.874008 kernel: NUMA: Failed to initialise from firmware
Dec 13 13:27:24.874014 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:27:24.874020 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Dec 13 13:27:24.874026 kernel: Zone ranges:
Dec 13 13:27:24.874033 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:27:24.874040 kernel: DMA32 empty
Dec 13 13:27:24.874046 kernel: Normal empty
Dec 13 13:27:24.874052 kernel: Movable zone start for each node
Dec 13 13:27:24.874058 kernel: Early memory node ranges
Dec 13 13:27:24.874064 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Dec 13 13:27:24.874070 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Dec 13 13:27:24.874076 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Dec 13 13:27:24.874082 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Dec 13 13:27:24.874088 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Dec 13 13:27:24.874094 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 13 13:27:24.874100 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 13 13:27:24.874106 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 13 13:27:24.874113 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 13:27:24.874119 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:27:24.874126 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 13:27:24.874135 kernel: psci: probing for conduit method from ACPI.
Dec 13 13:27:24.874141 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 13:27:24.874148 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 13:27:24.874155 kernel: psci: Trusted OS migration not required
Dec 13 13:27:24.874162 kernel: psci: SMC Calling Convention v1.1
Dec 13 13:27:24.874168 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 13:27:24.874175 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 13:27:24.874182 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 13:27:24.874189 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 13:27:24.874195 kernel: Detected PIPT I-cache on CPU0
Dec 13 13:27:24.874201 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 13:27:24.874223 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 13:27:24.874229 kernel: CPU features: detected: Spectre-v4
Dec 13 13:27:24.874237 kernel: CPU features: detected: Spectre-BHB
Dec 13 13:27:24.874243 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 13:27:24.874250 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 13:27:24.874257 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 13:27:24.874268 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 13:27:24.874275 kernel: alternatives: applying boot alternatives
Dec 13 13:27:24.874282 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 13:27:24.874289 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:27:24.874296 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:27:24.874303 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:27:24.874309 kernel: Fallback order for Node 0: 0
Dec 13 13:27:24.874317 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 13:27:24.874324 kernel: Policy zone: DMA
Dec 13 13:27:24.874330 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:27:24.874337 kernel: software IO TLB: area num 4.
Dec 13 13:27:24.874344 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Dec 13 13:27:24.874351 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Dec 13 13:27:24.874357 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:27:24.874364 kernel: trace event string verifier disabled
Dec 13 13:27:24.874370 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:27:24.874377 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:27:24.874384 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:27:24.874391 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:27:24.874398 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:27:24.874405 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:27:24.874412 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:27:24.874418 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 13:27:24.874424 kernel: GICv3: 256 SPIs implemented
Dec 13 13:27:24.874431 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 13:27:24.874437 kernel: Root IRQ handler: gic_handle_irq
Dec 13 13:27:24.874444 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 13:27:24.874459 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 13:27:24.874466 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 13:27:24.874472 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 13:27:24.874481 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 13:27:24.874487 kernel: GICv3: using LPI property table @0x00000000400f0000
Dec 13 13:27:24.874494 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Dec 13 13:27:24.874500 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:27:24.874507 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:27:24.874513 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 13:27:24.874520 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 13:27:24.874526 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 13:27:24.874533 kernel: arm-pv: using stolen time PV
Dec 13 13:27:24.874540 kernel: Console: colour dummy device 80x25
Dec 13 13:27:24.874546 kernel: ACPI: Core revision 20230628
Dec 13 13:27:24.874554 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 13:27:24.874561 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:27:24.874568 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:27:24.874578 kernel: landlock: Up and running.
Dec 13 13:27:24.874585 kernel: SELinux: Initializing.
Dec 13 13:27:24.874591 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:27:24.874598 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:27:24.874605 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:27:24.874612 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:27:24.874620 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:27:24.874626 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:27:24.874633 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 13:27:24.874640 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 13:27:24.874646 kernel: Remapping and enabling EFI services.
Dec 13 13:27:24.874653 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:27:24.874659 kernel: Detected PIPT I-cache on CPU1
Dec 13 13:27:24.874666 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 13:27:24.874673 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Dec 13 13:27:24.874681 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:27:24.874687 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 13:27:24.874698 kernel: Detected PIPT I-cache on CPU2
Dec 13 13:27:24.874706 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 13:27:24.874714 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Dec 13 13:27:24.874721 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:27:24.874727 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 13:27:24.874734 kernel: Detected PIPT I-cache on CPU3
Dec 13 13:27:24.874741 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 13:27:24.874755 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Dec 13 13:27:24.874762 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:27:24.874769 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 13:27:24.874776 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:27:24.874783 kernel: SMP: Total of 4 processors activated.
Dec 13 13:27:24.874790 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 13:27:24.874797 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 13:27:24.874804 kernel: CPU features: detected: Common not Private translations
Dec 13 13:27:24.874811 kernel: CPU features: detected: CRC32 instructions
Dec 13 13:27:24.874820 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 13:27:24.874827 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 13:27:24.874834 kernel: CPU features: detected: LSE atomic instructions
Dec 13 13:27:24.874841 kernel: CPU features: detected: Privileged Access Never
Dec 13 13:27:24.874847 kernel: CPU features: detected: RAS Extension Support
Dec 13 13:27:24.874854 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 13:27:24.874861 kernel: CPU: All CPU(s) started at EL1
Dec 13 13:27:24.874868 kernel: alternatives: applying system-wide alternatives
Dec 13 13:27:24.874875 kernel: devtmpfs: initialized
Dec 13 13:27:24.874884 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:27:24.874892 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:27:24.874898 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:27:24.874905 kernel: SMBIOS 3.0.0 present.
Dec 13 13:27:24.874912 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 13 13:27:24.874919 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:27:24.874926 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 13:27:24.874933 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 13:27:24.874940 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 13:27:24.874949 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:27:24.874956 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Dec 13 13:27:24.874963 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:27:24.874970 kernel: cpuidle: using governor menu
Dec 13 13:27:24.874977 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 13:27:24.874989 kernel: ASID allocator initialised with 32768 entries
Dec 13 13:27:24.874996 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:27:24.875003 kernel: Serial: AMBA PL011 UART driver
Dec 13 13:27:24.875010 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 13:27:24.875018 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 13:27:24.875025 kernel: Modules: 508880 pages in range for PLT usage
Dec 13 13:27:24.875032 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:27:24.875039 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:27:24.875046 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 13:27:24.875053 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 13:27:24.875060 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:27:24.875067 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:27:24.875074 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 13:27:24.875082 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 13:27:24.875089 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:27:24.875096 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:27:24.875103 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:27:24.875109 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:27:24.875116 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:27:24.875123 kernel: ACPI: Interpreter enabled
Dec 13 13:27:24.875130 kernel: ACPI: Using GIC for interrupt routing
Dec 13 13:27:24.875137 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 13:27:24.875145 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 13:27:24.875152 kernel: printk: console [ttyAMA0] enabled
Dec 13 13:27:24.875159 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:27:24.875294 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:27:24.875368 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 13:27:24.875436 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 13:27:24.875563 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 13:27:24.875642 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 13:27:24.875652 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 13:27:24.875659 kernel: PCI host bridge to bus 0000:00
Dec 13 13:27:24.875733 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 13:27:24.875806 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 13:27:24.875867 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 13:27:24.875923 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:27:24.876003 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 13:27:24.876076 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:27:24.876141 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 13:27:24.876206 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 13:27:24.876279 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 13:27:24.876344 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 13:27:24.876408 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 13:27:24.876489 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 13:27:24.876550 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 13:27:24.876615 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 13:27:24.876674 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 13:27:24.876683 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 13:27:24.876690 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 13:27:24.876698 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 13:27:24.876705 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 13:27:24.876714 kernel: iommu: Default domain type: Translated
Dec 13 13:27:24.876721 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 13:27:24.876728 kernel: efivars: Registered efivars operations
Dec 13 13:27:24.876735 kernel: vgaarb: loaded
Dec 13 13:27:24.876742 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 13:27:24.876756 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:27:24.876764 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:27:24.876771 kernel: pnp: PnP ACPI init
Dec 13 13:27:24.876852 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 13:27:24.876866 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 13:27:24.876873 kernel: NET: Registered PF_INET protocol family
Dec 13 13:27:24.876880 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:27:24.876887 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:27:24.876894 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:27:24.876901 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:27:24.876909 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 13:27:24.876916 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:27:24.876924 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:27:24.876932 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:27:24.876939 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:27:24.876946 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:27:24.876953 kernel: kvm [1]: HYP mode not available
Dec 13 13:27:24.876960 kernel: Initialise system trusted keyrings
Dec 13 13:27:24.876967 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:27:24.876974 kernel: Key type asymmetric registered
Dec 13 13:27:24.876981 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:27:24.876990 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 13:27:24.876997 kernel: io scheduler mq-deadline registered
Dec 13 13:27:24.877004 kernel: io scheduler kyber registered
Dec 13 13:27:24.877011 kernel: io scheduler bfq registered
Dec 13 13:27:24.877018 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 13:27:24.877025 kernel: ACPI: button: Power Button [PWRB]
Dec 13 13:27:24.877033 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 13:27:24.877107 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 13:27:24.877117 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:27:24.877127 kernel: thunder_xcv, ver 1.0
Dec 13 13:27:24.877133 kernel: thunder_bgx, ver 1.0
Dec 13 13:27:24.877140 kernel: nicpf, ver 1.0
Dec 13 13:27:24.877147 kernel: nicvf, ver 1.0
Dec 13 13:27:24.877220 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 13:27:24.877288 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T13:27:24 UTC (1734096444)
Dec 13 13:27:24.877299 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 13:27:24.877306 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 13:27:24.877315 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 13:27:24.877322 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 13:27:24.877329 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:27:24.877336 kernel: Segment Routing with IPv6
Dec 13 13:27:24.877343 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:27:24.877350 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:27:24.877357 kernel: Key type dns_resolver registered
Dec 13 13:27:24.877364 kernel: registered taskstats version 1
Dec 13 13:27:24.877371 kernel: Loading compiled-in X.509 certificates
Dec 13 13:27:24.877380 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78'
Dec 13 13:27:24.877387 kernel: Key type .fscrypt registered
Dec 13 13:27:24.877394 kernel: Key type fscrypt-provisioning registered
Dec 13 13:27:24.877401 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:27:24.877408 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:27:24.877415 kernel: ima: No architecture policies found
Dec 13 13:27:24.877422 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 13:27:24.877429 kernel: clk: Disabling unused clocks
Dec 13 13:27:24.877436 kernel: Freeing unused kernel memory: 39936K
Dec 13 13:27:24.877444 kernel: Run /init as init process
Dec 13 13:27:24.877497 kernel: with arguments:
Dec 13 13:27:24.877504 kernel: /init
Dec 13 13:27:24.877511 kernel: with environment:
Dec 13 13:27:24.877518 kernel: HOME=/
Dec 13 13:27:24.877525 kernel: TERM=linux
Dec 13 13:27:24.877531 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:27:24.877540 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:27:24.877551 systemd[1]: Detected virtualization kvm.
Dec 13 13:27:24.877559 systemd[1]: Detected architecture arm64.
Dec 13 13:27:24.877566 systemd[1]: Running in initrd.
Dec 13 13:27:24.877577 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:27:24.877585 systemd[1]: Hostname set to .
Dec 13 13:27:24.877593 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:27:24.877600 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:27:24.877608 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:27:24.877618 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:27:24.877626 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:27:24.877634 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:27:24.877642 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:27:24.877653 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:27:24.877663 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:27:24.877674 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:27:24.877682 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:27:24.877689 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:27:24.877697 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:27:24.877705 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:27:24.877712 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:27:24.877720 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:27:24.877727 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:27:24.877735 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:27:24.877744 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:27:24.877758 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:27:24.877766 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:27:24.877774 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:27:24.877782 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:27:24.877789 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:27:24.877797 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:27:24.877804 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:27:24.877812 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:27:24.877821 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:27:24.877829 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:27:24.877836 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:27:24.877844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:27:24.877851 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:27:24.877859 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:27:24.877866 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:27:24.877895 systemd-journald[237]: Collecting audit messages is disabled.
Dec 13 13:27:24.877916 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:27:24.877924 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:27:24.877932 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:27:24.877940 systemd-journald[237]: Journal started
Dec 13 13:27:24.877963 systemd-journald[237]: Runtime Journal (/run/log/journal/b9f8eb53281340cd973b1512afd583c1) is 5.9M, max 47.3M, 41.4M free.
Dec 13 13:27:24.864772 systemd-modules-load[239]: Inserted module 'overlay'
Dec 13 13:27:24.881251 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:27:24.881270 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:27:24.883029 systemd-modules-load[239]: Inserted module 'br_netfilter'
Dec 13 13:27:24.883934 kernel: Bridge firewalling registered
Dec 13 13:27:24.884088 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:27:24.891616 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:27:24.893376 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:27:24.895619 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:27:24.899112 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:27:24.905723 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:27:24.907964 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:27:24.909378 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:27:24.920624 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:27:24.921880 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:27:24.926645 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:27:24.938863 dracut-cmdline[280]: dracut-dracut-053
Dec 13 13:27:24.941174 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 13:27:24.945351 systemd-resolved[277]: Positive Trust Anchors:
Dec 13 13:27:24.945370 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:27:24.945401 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:27:24.949954 systemd-resolved[277]: Defaulting to hostname 'linux'.
Dec 13 13:27:24.950871 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:27:24.952349 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:27:25.012480 kernel: SCSI subsystem initialized
Dec 13 13:27:25.016472 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:27:25.023468 kernel: iscsi: registered transport (tcp)
Dec 13 13:27:25.036473 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:27:25.036520 kernel: QLogic iSCSI HBA Driver
Dec 13 13:27:25.079480 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:27:25.087694 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:27:25.103491 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:27:25.103559 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:27:25.104652 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:27:25.151474 kernel: raid6: neonx8 gen() 15788 MB/s
Dec 13 13:27:25.168474 kernel: raid6: neonx4 gen() 15834 MB/s
Dec 13 13:27:25.185464 kernel: raid6: neonx2 gen() 13217 MB/s
Dec 13 13:27:25.202460 kernel: raid6: neonx1 gen() 10549 MB/s
Dec 13 13:27:25.219459 kernel: raid6: int64x8 gen() 6793 MB/s
Dec 13 13:27:25.236459 kernel: raid6: int64x4 gen() 7352 MB/s
Dec 13 13:27:25.253459 kernel: raid6: int64x2 gen() 6112 MB/s
Dec 13 13:27:25.270463 kernel: raid6: int64x1 gen() 5058 MB/s
Dec 13 13:27:25.270484 kernel: raid6: using algorithm neonx4 gen() 15834 MB/s
Dec 13 13:27:25.287469 kernel: raid6: .... xor() 12359 MB/s, rmw enabled
Dec 13 13:27:25.287493 kernel: raid6: using neon recovery algorithm
Dec 13 13:27:25.292461 kernel: xor: measuring software checksum speed
Dec 13 13:27:25.293525 kernel: 8regs : 19985 MB/sec
Dec 13 13:27:25.293538 kernel: 32regs : 21710 MB/sec
Dec 13 13:27:25.294466 kernel: arm64_neon : 27936 MB/sec
Dec 13 13:27:25.294479 kernel: xor: using function: arm64_neon (27936 MB/sec)
Dec 13 13:27:25.345483 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:27:25.357133 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:27:25.367689 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:27:25.378701 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Dec 13 13:27:25.381797 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:27:25.384013 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:27:25.399115 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Dec 13 13:27:25.425666 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:27:25.437611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:27:25.478481 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:27:25.487600 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:27:25.500489 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:27:25.501686 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:27:25.502950 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:27:25.505301 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:27:25.513595 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:27:25.522327 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:27:25.527890 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 13 13:27:25.538055 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 13:27:25.538165 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:27:25.538176 kernel: GPT:9289727 != 19775487
Dec 13 13:27:25.538191 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:27:25.538202 kernel: GPT:9289727 != 19775487
Dec 13 13:27:25.538211 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:27:25.538220 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:27:25.528355 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:27:25.528470 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:27:25.529830 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:27:25.530814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:27:25.530942 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:27:25.532905 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:27:25.543929 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:27:25.553603 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (511)
Dec 13 13:27:25.555289 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 13:27:25.557460 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (512)
Dec 13 13:27:25.562097 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:27:25.569416 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:27:25.573523 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 13:27:25.576981 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 13:27:25.577865 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 13:27:25.591595 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:27:25.593608 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:27:25.598949 disk-uuid[552]: Primary Header is updated.
Dec 13 13:27:25.598949 disk-uuid[552]: Secondary Entries is updated.
Dec 13 13:27:25.598949 disk-uuid[552]: Secondary Header is updated.
Dec 13 13:27:25.606469 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:27:25.618087 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:27:26.615501 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:27:26.615650 disk-uuid[553]: The operation has completed successfully.
Dec 13 13:27:26.636179 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:27:26.636273 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:27:26.662714 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:27:26.665726 sh[572]: Success
Dec 13 13:27:26.679483 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 13:27:26.723039 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:27:26.724549 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:27:26.725262 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:27:26.735934 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614
Dec 13 13:27:26.735967 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:27:26.736779 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:27:26.736795 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:27:26.737809 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:27:26.740910 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:27:26.741997 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 13:27:26.756680 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:27:26.758009 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:27:26.765020 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:27:26.765058 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:27:26.765068 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:27:26.767642 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:27:26.774595 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:27:26.777480 kernel: BTRFS info (device vda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:27:26.783606 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:27:26.787612 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:27:26.864539 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:27:26.882676 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:27:26.907575 systemd-networkd[763]: lo: Link UP
Dec 13 13:27:26.907583 systemd-networkd[763]: lo: Gained carrier
Dec 13 13:27:26.909812 systemd-networkd[763]: Enumeration completed
Dec 13 13:27:26.909910 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:27:26.911071 ignition[663]: Ignition 2.20.0
Dec 13 13:27:26.910423 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:27:26.911078 ignition[663]: Stage: fetch-offline
Dec 13 13:27:26.910426 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:27:26.911116 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:26.910807 systemd[1]: Reached target network.target - Network.
Dec 13 13:27:26.911125 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:27:26.911750 systemd-networkd[763]: eth0: Link UP
Dec 13 13:27:26.911328 ignition[663]: parsed url from cmdline: ""
Dec 13 13:27:26.911754 systemd-networkd[763]: eth0: Gained carrier
Dec 13 13:27:26.911331 ignition[663]: no config URL provided
Dec 13 13:27:26.911762 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:27:26.911336 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:27:26.911342 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:27:26.911370 ignition[663]: op(1): [started] loading QEMU firmware config module
Dec 13 13:27:26.911375 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 13:27:26.917080 ignition[663]: op(1): [finished] loading QEMU firmware config module
Dec 13 13:27:26.941506 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:27:26.965178 ignition[663]: parsing config with SHA512: 5d529bc2e2117a1dc6a8754ab1a8f4ccc2b4e76408894061d42b434f72bd956f8d96ec9febdd455964d45b323a7bcb4ad4e7594bf6b99b7f9dd7221359954a17
Dec 13 13:27:26.971601 unknown[663]: fetched base config from "system"
Dec 13 13:27:26.971611 unknown[663]: fetched user config from "qemu"
Dec 13 13:27:26.972066 ignition[663]: fetch-offline: fetch-offline passed
Dec 13 13:27:26.972140 ignition[663]: Ignition finished successfully
Dec 13 13:27:26.974298 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:27:26.975626 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 13:27:26.985618 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:27:26.996284 ignition[770]: Ignition 2.20.0
Dec 13 13:27:26.996295 ignition[770]: Stage: kargs
Dec 13 13:27:26.996475 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:26.996486 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:27:26.997352 ignition[770]: kargs: kargs passed
Dec 13 13:27:26.997398 ignition[770]: Ignition finished successfully
Dec 13 13:27:27.000278 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:27:27.013639 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:27:27.023588 ignition[779]: Ignition 2.20.0
Dec 13 13:27:27.023603 ignition[779]: Stage: disks
Dec 13 13:27:27.023793 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:27.023803 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:27:27.024718 ignition[779]: disks: disks passed
Dec 13 13:27:27.024772 ignition[779]: Ignition finished successfully
Dec 13 13:27:27.027509 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:27:27.029097 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:27:27.030601 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:27:27.032089 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:27:27.033644 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:27:27.034917 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:27:27.048622 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:27:27.058738 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 13:27:27.063909 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:27:27.075557 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:27:27.120471 kernel: EXT4-fs (vda9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none.
Dec 13 13:27:27.120651 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:27:27.121725 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:27:27.131570 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:27:27.133537 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:27:27.134303 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:27:27.134346 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:27:27.134367 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:27:27.139538 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:27:27.142729 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:27:27.146015 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (799)
Dec 13 13:27:27.146042 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:27:27.146053 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:27:27.147471 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:27:27.149463 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:27:27.150697 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:27:27.182351 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:27:27.185505 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:27:27.188689 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:27:27.191764 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:27:27.264511 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:27:27.273573 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:27:27.275951 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:27:27.280464 kernel: BTRFS info (device vda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:27:27.299031 ignition[914]: INFO : Ignition 2.20.0
Dec 13 13:27:27.299031 ignition[914]: INFO : Stage: mount
Dec 13 13:27:27.302116 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:27.302116 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:27:27.302116 ignition[914]: INFO : mount: mount passed
Dec 13 13:27:27.302116 ignition[914]: INFO : Ignition finished successfully
Dec 13 13:27:27.299071 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:27:27.301883 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:27:27.314562 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:27:27.735360 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:27:27.744659 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:27:27.751467 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
Dec 13 13:27:27.753742 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:27:27.753774 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:27:27.753784 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:27:27.755467 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:27:27.756812 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:27:27.773420 ignition[944]: INFO : Ignition 2.20.0
Dec 13 13:27:27.773420 ignition[944]: INFO : Stage: files
Dec 13 13:27:27.775096 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:27.775096 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:27:27.775096 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:27:27.778435 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:27:27.778435 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:27:27.781343 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:27:27.782638 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:27:27.782638 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:27:27.781970 unknown[944]: wrote ssh authorized keys file for user: core
Dec 13 13:27:27.786290 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 13:27:27.786290 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 13:27:27.846323 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 13:27:28.032804 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 13:27:28.032804 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 13:27:28.036418 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 13 13:27:28.287418 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 13:27:28.336269 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 13:27:28.336269 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:27:28.339831 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:27:28.339831 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:27:28.339831 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:27:28.339831 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:27:28.339831 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:27:28.339831 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:27:28.339831 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:27:28.339831 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:27:28.339831 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:27:28.339831 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 13:27:28.339831 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 13:27:28.339831 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 13:27:28.339831 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 13:27:28.570262 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 13:27:28.681280 systemd-networkd[763]: eth0: Gained IPv6LL
Dec 13 13:27:28.772593 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 13:27:28.772593 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 13:27:28.775284 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:27:28.775284 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:27:28.775284 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 13:27:28.775284 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 13:27:28.775284 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:27:28.775284 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:27:28.775284 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 13:27:28.775284 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:27:28.797354 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:27:28.800587 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:27:28.802519 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:27:28.802519 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 13:27:28.802519 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 13:27:28.802519 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:27:28.802519 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:27:28.802519 ignition[944]: INFO : files: files passed
Dec 13 13:27:28.802519 ignition[944]: INFO : Ignition finished successfully
Dec 13 13:27:28.803200 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:27:28.814598 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:27:28.816251 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:27:28.818630 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:27:28.818707 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:27:28.824243 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 13:27:28.826279 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:27:28.826279 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:27:28.828573 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:27:28.828088 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:27:28.830066 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:27:28.838599 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:27:28.859047 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:27:28.859156 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:27:28.860809 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:27:28.862115 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:27:28.863558 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:27:28.864409 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:27:28.888130 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:27:28.897682 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:27:28.905607 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:27:28.906555 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:27:28.908138 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:27:28.909427 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:27:28.909570 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:27:28.911402 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:27:28.912889 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:27:28.914110 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:27:28.915333 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:27:28.916774 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:27:28.918192 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:27:28.919513 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:27:28.920971 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:27:28.922381 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:27:28.923674 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:27:28.924814 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:27:28.924944 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:27:28.926674 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:27:28.928099 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:27:28.929537 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:27:28.930985 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:27:28.931947 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:27:28.932069 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:27:28.934102 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:27:28.934218 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:27:28.935670 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:27:28.936808 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:27:28.941495 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:27:28.942443 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:27:28.944049 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:27:28.945172 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:27:28.945261 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:27:28.946344 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:27:28.946413 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:27:28.947542 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:27:28.947653 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:27:28.948966 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:27:28.949065 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:27:28.962684 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:27:28.963377 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:27:28.963532 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:28.965803 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:27:28.966950 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:27:28.967076 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:28.968368 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:27:28.968479 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:27:28.974342 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Dec 13 13:27:28.975104 ignition[998]: INFO : Ignition 2.20.0 Dec 13 13:27:28.975104 ignition[998]: INFO : Stage: umount Dec 13 13:27:28.975104 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:28.975104 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:27:28.974440 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:27:28.981785 ignition[998]: INFO : umount: umount passed Dec 13 13:27:28.981785 ignition[998]: INFO : Ignition finished successfully Dec 13 13:27:28.977849 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:27:28.977964 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:27:28.979181 systemd[1]: Stopped target network.target - Network. Dec 13 13:27:28.980899 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:27:28.980958 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:27:28.982460 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:27:28.982502 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:27:28.984008 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:27:28.984054 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:27:28.985154 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:27:28.985193 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:27:28.986541 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:27:28.987805 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:27:28.989995 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:27:28.993364 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:27:28.993481 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:27:28.998286 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:27:28.998340 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:27:29.004531 systemd-networkd[763]: eth0: DHCPv6 lease lost Dec 13 13:27:29.006278 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:27:29.006399 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:27:29.008132 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:27:29.008164 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:27:29.018582 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:27:29.019255 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:27:29.019312 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:27:29.020783 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:27:29.020822 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:29.022261 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:27:29.022302 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:27:29.023875 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:27:29.033585 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:27:29.034532 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Dec 13 13:27:29.044142 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:27:29.044299 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:27:29.046401 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:27:29.046441 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:27:29.047758 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:27:29.047787 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:27:29.049341 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:27:29.049394 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:27:29.051689 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:27:29.051749 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:27:29.053952 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:27:29.053998 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:29.069617 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:27:29.070650 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:27:29.070715 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:29.072570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:27:29.072620 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:29.074530 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:27:29.074616 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:27:29.076831 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:27:29.076914 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:27:29.078920 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:27:29.079725 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:27:29.079798 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:27:29.081782 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:27:29.090875 systemd[1]: Switching root. Dec 13 13:27:29.125236 systemd-journald[237]: Journal stopped Dec 13 13:27:29.841924 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Dec 13 13:27:29.841998 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 13:27:29.842011 kernel: SELinux: policy capability open_perms=1 Dec 13 13:27:29.842024 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 13:27:29.842033 kernel: SELinux: policy capability always_check_network=0 Dec 13 13:27:29.842042 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 13:27:29.842051 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 13:27:29.842061 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 13:27:29.842070 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 13:27:29.842079 kernel: audit: type=1403 audit(1734096449.302:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 13:27:29.842089 systemd[1]: Successfully loaded SELinux policy in 33.702ms. Dec 13 13:27:29.842108 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.151ms. 
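The SIGTERM to systemd-journald from PID 1 marks the initrd-to-real-root handoff; the entries before it were written by the initrd journal and are flushed to persistent storage later in this boot. Once the system is up, the same early records can be pulled back out of the journal. A small sketch (assumes journalctl is available; `_COMM=ignition` is a journal field match, not an invented flag):

```python
import subprocess

# Retrieve this boot's Ignition messages from the journal after the handoff.
out = subprocess.run(
    ["journalctl", "-b", "-o", "short-precise", "_COMM=ignition"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
```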
Dec 13 13:27:29.842121 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:27:29.842131 systemd[1]: Detected virtualization kvm. Dec 13 13:27:29.842143 systemd[1]: Detected architecture arm64. Dec 13 13:27:29.842153 systemd[1]: Detected first boot. Dec 13 13:27:29.842162 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:27:29.842172 zram_generator::config[1043]: No configuration found. Dec 13 13:27:29.842183 systemd[1]: Populated /etc with preset unit settings. Dec 13 13:27:29.842193 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 13:27:29.842204 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 13:27:29.842214 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 13:27:29.842224 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 13:27:29.842234 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 13:27:29.842244 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 13:27:29.842254 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 13:27:29.842264 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 13:27:29.842273 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 13:27:29.842283 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 13:27:29.842295 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 13:27:29.842305 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:27:29.842319 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:27:29.842329 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 13:27:29.842338 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 13:27:29.842348 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 13:27:29.842359 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:27:29.842370 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 13:27:29.842380 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:27:29.842391 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 13:27:29.842401 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 13:27:29.842410 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 13:27:29.842420 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 13:27:29.842430 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:27:29.842440 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:27:29.842484 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:27:29.842497 systemd[1]: Reached target swap.target - Swaps. 
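The long +/- string in the "systemd 255 running in system mode" entry above encodes compile-time features: a leading "+" means built in, "-" means built without. A quick way to split it into enabled/disabled sets (the string below is copied verbatim from the log):

```python
# Parse systemd's compile-time feature string from the log entry above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
            "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP -SYSVINIT").split()
enabled = {f[1:] for f in features if f.startswith("+")}
disabled = {f[1:] for f in features if f.startswith("-")}
print(sorted(disabled))  # APPARMOR, GNUTLS, ACL, FIDO2, ... (built without)
```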
Dec 13 13:27:29.842509 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 13:27:29.842520 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 13:27:29.842530 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:27:29.842540 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:27:29.842549 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:27:29.842559 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 13:27:29.842569 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 13:27:29.842579 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 13:27:29.842589 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 13:27:29.842600 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 13:27:29.842610 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 13:27:29.842620 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 13:27:29.842632 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 13:27:29.842642 systemd[1]: Reached target machines.target - Containers. Dec 13 13:27:29.842652 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 13:27:29.842662 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:29.842672 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:27:29.842684 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 13:27:29.842693 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:27:29.842703 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:27:29.842713 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:27:29.842723 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 13:27:29.842742 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:27:29.842756 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 13:27:29.842766 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 13:27:29.842778 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 13:27:29.842789 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 13:27:29.842799 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 13:27:29.842808 kernel: fuse: init (API version 7.39) Dec 13 13:27:29.842817 kernel: loop: module loaded Dec 13 13:27:29.842826 kernel: ACPI: bus type drm_connector registered Dec 13 13:27:29.842835 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:27:29.842845 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:27:29.842856 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 13:27:29.842866 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Dec 13 13:27:29.842904 systemd-journald[1107]: Collecting audit messages is disabled. Dec 13 13:27:29.842931 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:27:29.842941 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:27:29.842951 systemd[1]: Stopped verity-setup.service. Dec 13 13:27:29.842961 systemd-journald[1107]: Journal started Dec 13 13:27:29.842986 systemd-journald[1107]: Runtime Journal (/run/log/journal/b9f8eb53281340cd973b1512afd583c1) is 5.9M, max 47.3M, 41.4M free. Dec 13 13:27:29.665532 systemd[1]: Queued start job for default target multi-user.target. Dec 13 13:27:29.683228 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 13:27:29.683573 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:27:29.845852 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:27:29.846507 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:27:29.847371 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:27:29.848323 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:27:29.849205 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:27:29.850157 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 13:27:29.851368 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:27:29.852356 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:27:29.853550 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:29.854727 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:27:29.854885 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:27:29.856042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:27:29.856189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:27:29.857260 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:27:29.857395 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:27:29.858521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:27:29.858641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:27:29.859760 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:27:29.859897 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:27:29.860923 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:27:29.861045 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:27:29.862296 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:27:29.863363 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 13:27:29.864581 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 13:27:29.876309 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:27:29.883625 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 13:27:29.885392 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:27:29.886249 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Dec 13 13:27:29.886274 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:27:29.887938 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 13:27:29.889893 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:27:29.892664 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:27:29.893551 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:29.895166 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 13:27:29.896914 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:27:29.897820 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:27:29.899631 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:27:29.900485 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:27:29.904119 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:27:29.906090 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:27:29.911661 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:27:29.914376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:29.915613 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:27:29.916727 systemd-journald[1107]: Time spent on flushing to /var/log/journal/b9f8eb53281340cd973b1512afd583c1 is 18.003ms for 864 entries. Dec 13 13:27:29.916727 systemd-journald[1107]: System Journal (/var/log/journal/b9f8eb53281340cd973b1512afd583c1) is 8.0M, max 195.6M, 187.6M free. Dec 13 13:27:30.009645 systemd-journald[1107]: Received client request to flush runtime journal. Dec 13 13:27:30.009702 kernel: loop0: detected capacity change from 0 to 116784 Dec 13 13:27:30.009722 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:27:30.009752 kernel: loop1: detected capacity change from 0 to 113552 Dec 13 13:27:29.917748 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 13:27:29.918864 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:27:29.938719 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 13:27:29.949585 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:29.951025 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:27:29.956004 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:27:29.968713 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 13:27:29.972857 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:27:29.986833 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:27:29.989565 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
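The journald flush statistics above ("18.003ms for 864 entries") work out to roughly 21 microseconds per entry, using only the numbers the log reports:

```python
# Per-entry flush cost from the journald stats logged above.
flush_ms, entries = 18.003, 864
print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~20.8 us
```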
Dec 13 13:27:30.011920 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Dec 13 13:27:30.011939 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Dec 13 13:27:30.012829 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 13:27:30.016581 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:30.027879 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 13:27:30.028507 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:27:30.034465 kernel: loop2: detected capacity change from 0 to 194512 Dec 13 13:27:30.108290 kernel: loop3: detected capacity change from 0 to 116784 Dec 13 13:27:30.120489 kernel: loop4: detected capacity change from 0 to 113552 Dec 13 13:27:30.126488 kernel: loop5: detected capacity change from 0 to 194512 Dec 13 13:27:30.131769 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 13:27:30.132166 (sd-merge)[1178]: Merged extensions into '/usr'. Dec 13 13:27:30.136207 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:27:30.136225 systemd[1]: Reloading... Dec 13 13:27:30.189618 zram_generator::config[1204]: No configuration found. Dec 13 13:27:30.222206 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 13:27:30.284295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:27:30.319081 systemd[1]: Reloading finished in 182 ms. Dec 13 13:27:30.356886 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:27:30.358064 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 13:27:30.370667 systemd[1]: Starting ensure-sysext.service... Dec 13 13:27:30.372477 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:27:30.381553 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:27:30.381570 systemd[1]: Reloading... Dec 13 13:27:30.390119 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:27:30.390317 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:27:30.390989 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 13:27:30.391215 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Dec 13 13:27:30.391264 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Dec 13 13:27:30.393897 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:27:30.393909 systemd-tmpfiles[1240]: Skipping /boot Dec 13 13:27:30.401942 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:27:30.401959 systemd-tmpfiles[1240]: Skipping /boot Dec 13 13:27:30.423473 zram_generator::config[1265]: No configuration found. 
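The (sd-merge) entries above show systemd-sysext discovering the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images (the loop devices) and overlaying them onto /usr. A simplified sketch of the discovery step, assuming the standard search hierarchy (the real tool also validates each image's embedded extension-release metadata before merging):

```python
from pathlib import Path

# Simplified sketch of where systemd-sysext looks for extension images.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extensions():
    for d in SEARCH_DIRS:
        for image in sorted(Path(d).glob("*.raw")):
            # /etc/extensions/kubernetes.raw in this boot is a symlink into
            # /opt/extensions, written by Ignition earlier in the log.
            print(f"{image} -> {image.resolve()}")

if __name__ == "__main__":
    list_extensions()
```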
Dec 13 13:27:30.506487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:27:30.541393 systemd[1]: Reloading finished in 159 ms. Dec 13 13:27:30.560259 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:27:30.572845 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:27:30.580862 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:27:30.583380 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:27:30.585432 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:27:30.588729 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:27:30.595763 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:27:30.598616 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:27:30.601328 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:30.603217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:27:30.605621 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:27:30.613753 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:27:30.614805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:30.615554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:27:30.615684 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:27:30.631836 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 13:27:30.633226 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 13:27:30.636926 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:27:30.637063 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:27:30.638564 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:27:30.638692 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:27:30.646011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:30.648860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:27:30.649981 systemd-udevd[1308]: Using default interface naming scheme 'v255'. Dec 13 13:27:30.652039 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:27:30.655713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:27:30.658679 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:30.662773 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:27:30.665262 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:27:30.666854 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Dec 13 13:27:30.668929 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:27:30.670458 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:27:30.670598 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:27:30.671836 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:27:30.672400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:27:30.673952 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:27:30.674069 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:27:30.691275 augenrules[1364]: No rules Dec 13 13:27:30.692066 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:27:30.693093 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:27:30.694407 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 13:27:30.698762 systemd[1]: Finished ensure-sysext.service. Dec 13 13:27:30.700856 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:27:30.706474 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1357) Dec 13 13:27:30.715033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:30.717165 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:27:30.720468 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1361) Dec 13 13:27:30.723706 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:27:30.729765 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:27:30.732469 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1361) Dec 13 13:27:30.736914 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:27:30.737818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:30.740686 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:27:30.745649 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 13:27:30.747802 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:27:30.748236 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:27:30.749478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:27:30.751950 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:27:30.752115 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:27:30.753313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:27:30.753472 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:27:30.772236 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:27:30.772683 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:27:30.777074 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
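The "augenrules[1364]: No rules" entry above means no audit rules were found under /etc/audit/rules.d, so audit-rules.service deactivated cleanly. For illustration only, a hedged sketch of dropping in a watch rule (auditctl syntax; the watched path and key are hypothetical choices, and applying it requires root plus a running audit daemon):

```python
from pathlib import Path

# augenrules compiles /etc/audit/rules.d/*.rules into /etc/audit/audit.rules.
# "-w PATH -p wa -k KEY" watches PATH for writes/attribute changes.
rule = "-w /etc/flatcar/update.conf -p wa -k flatcar-update\n"
Path("/etc/audit/rules.d/50-flatcar-update.rules").write_text(rule)
```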
Dec 13 13:27:30.777923 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:27:30.777983 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:27:30.784848 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 13:27:30.791649 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 13:27:30.796364 systemd-resolved[1307]: Positive Trust Anchors: Dec 13 13:27:30.796375 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:27:30.796406 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:27:30.805942 systemd-resolved[1307]: Defaulting to hostname 'linux'. Dec 13 13:27:30.809184 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:27:30.810119 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:27:30.820215 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 13:27:30.828090 systemd-networkd[1388]: lo: Link UP Dec 13 13:27:30.828340 systemd-networkd[1388]: lo: Gained carrier Dec 13 13:27:30.829353 systemd-networkd[1388]: Enumeration completed Dec 13 13:27:30.830199 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:27:30.833647 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:30.833729 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:27:30.834087 systemd[1]: Reached target network.target - Network. Dec 13 13:27:30.834532 systemd-networkd[1388]: eth0: Link UP Dec 13 13:27:30.834614 systemd-networkd[1388]: eth0: Gained carrier Dec 13 13:27:30.834665 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:30.847740 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:27:30.848945 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 13:27:30.850504 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:27:30.852426 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:30.853899 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 13:27:30.855259 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Dec 13 13:27:30.856613 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Dec 13 13:27:30.857254 systemd-timesyncd[1389]: Initial clock synchronization to Fri 2024-12-13 13:27:31.048014 UTC. Dec 13 13:27:30.859898 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:27:30.862722 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 13:27:30.887077 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:27:30.905576 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:30.918539 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:27:30.919668 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:27:30.920515 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:27:30.921348 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:27:30.922285 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:27:30.923366 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:27:30.924291 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:27:30.925238 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:27:30.926241 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:27:30.926277 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:27:30.926952 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:27:30.928526 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:27:30.930698 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:27:30.943427 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:27:30.945545 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 13:27:30.946823 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:27:30.947687 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:27:30.948361 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:27:30.949106 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:27:30.949137 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:27:30.950046 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:27:30.951704 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:27:30.954577 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:27:30.954590 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:27:30.956682 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:27:30.957485 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:27:30.958612 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:27:30.963431 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
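Note the timesyncd step above: the journal stamped the message at 13:27:30.857254, while the daemon reports synchronizing the clock to 13:27:31.048014 UTC, i.e. a forward adjustment of roughly 191 ms. A quick check using only the two timestamps from the log:

```python
from datetime import datetime

# Clock step implied by the two timesyncd timestamps logged above.
logged = datetime.fromisoformat("2024-12-13 13:27:30.857254")
synced = datetime.fromisoformat("2024-12-13 13:27:31.048014")
print(synced - logged)  # -> 0:00:00.190760
```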
Dec 13 13:27:30.965134 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:27:30.968690 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:27:30.972515 jq[1416]: false Dec 13 13:27:30.977650 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:27:30.979521 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:27:30.979964 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:27:30.981624 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:27:30.984076 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:27:30.985843 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:27:30.987777 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:27:30.987863 extend-filesystems[1417]: Found loop3 Dec 13 13:27:30.987863 extend-filesystems[1417]: Found loop4 Dec 13 13:27:30.987863 extend-filesystems[1417]: Found loop5 Dec 13 13:27:30.987863 extend-filesystems[1417]: Found vda Dec 13 13:27:30.992089 extend-filesystems[1417]: Found vda1 Dec 13 13:27:30.992089 extend-filesystems[1417]: Found vda2 Dec 13 13:27:30.992089 extend-filesystems[1417]: Found vda3 Dec 13 13:27:30.992089 extend-filesystems[1417]: Found usr Dec 13 13:27:30.992089 extend-filesystems[1417]: Found vda4 Dec 13 13:27:30.992089 extend-filesystems[1417]: Found vda6 Dec 13 13:27:30.992089 extend-filesystems[1417]: Found vda7 Dec 13 13:27:30.992089 extend-filesystems[1417]: Found vda9 Dec 13 13:27:30.992089 extend-filesystems[1417]: Checking size of /dev/vda9 Dec 13 13:27:30.988100 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 13:27:31.005802 dbus-daemon[1415]: [system] SELinux support is enabled Dec 13 13:27:30.990303 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:27:31.022702 jq[1428]: true Dec 13 13:27:30.990443 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:27:30.996994 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:27:30.997299 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:27:31.006414 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:27:31.023158 jq[1436]: true Dec 13 13:27:31.013072 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:27:31.013102 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 13:27:31.014208 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:27:31.014224 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Dec 13 13:27:31.028370 tar[1434]: linux-arm64/helm Dec 13 13:27:31.028780 (ntainerd)[1446]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:27:31.034451 extend-filesystems[1417]: Resized partition /dev/vda9 Dec 13 13:27:31.044483 update_engine[1425]: I20241213 13:27:31.042881 1425 main.cc:92] Flatcar Update Engine starting Dec 13 13:27:31.050791 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) Dec 13 13:27:31.053663 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1354) Dec 13 13:27:31.051227 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:27:31.053758 update_engine[1425]: I20241213 13:27:31.051944 1425 update_check_scheduler.cc:74] Next update check in 5m12s Dec 13 13:27:31.057571 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 13:27:31.067776 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:27:31.090519 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 13:27:31.114804 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 13:27:31.116744 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 13:27:31.116744 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 13:27:31.116744 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 13:27:31.124332 extend-filesystems[1417]: Resized filesystem in /dev/vda9 Dec 13 13:27:31.127168 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:27:31.116787 systemd-logind[1423]: New seat seat0. Dec 13 13:27:31.120910 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 13:27:31.123783 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:27:31.123961 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:27:31.129419 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 13:27:31.133036 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 13:27:31.148496 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:27:31.269871 containerd[1446]: time="2024-12-13T13:27:31.269790331Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:27:31.296211 containerd[1446]: time="2024-12-13T13:27:31.296106500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:31.298084 containerd[1446]: time="2024-12-13T13:27:31.297534411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:31.298084 containerd[1446]: time="2024-12-13T13:27:31.297584299Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:27:31.298084 containerd[1446]: time="2024-12-13T13:27:31.297601830Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Dec 13 13:27:31.298084 containerd[1446]: time="2024-12-13T13:27:31.297765461Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:27:31.298084 containerd[1446]: time="2024-12-13T13:27:31.297782541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:31.298084 containerd[1446]: time="2024-12-13T13:27:31.297836361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:31.298084 containerd[1446]: time="2024-12-13T13:27:31.297848771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:31.298084 containerd[1446]: time="2024-12-13T13:27:31.298005112Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:31.298084 containerd[1446]: time="2024-12-13T13:27:31.298018915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:31.298084 containerd[1446]: time="2024-12-13T13:27:31.298031448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:31.298084 containerd[1446]: time="2024-12-13T13:27:31.298040418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:31.298328 containerd[1446]: time="2024-12-13T13:27:31.298107427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:31.298328 containerd[1446]: time="2024-12-13T13:27:31.298284206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:31.298419 containerd[1446]: time="2024-12-13T13:27:31.298390576Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:31.298419 containerd[1446]: time="2024-12-13T13:27:31.298416503Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:27:31.298649 containerd[1446]: time="2024-12-13T13:27:31.298614458Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:27:31.298689 containerd[1446]: time="2024-12-13T13:27:31.298677248Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:27:31.302996 containerd[1446]: time="2024-12-13T13:27:31.302516331Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:27:31.302996 containerd[1446]: time="2024-12-13T13:27:31.302569373Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:27:31.302996 containerd[1446]: time="2024-12-13T13:27:31.302585428Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Dec 13 13:27:31.302996 containerd[1446]: time="2024-12-13T13:27:31.302607464Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:27:31.302996 containerd[1446]: time="2024-12-13T13:27:31.302622947Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:27:31.302996 containerd[1446]: time="2024-12-13T13:27:31.302755408Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:27:31.302996 containerd[1446]: time="2024-12-13T13:27:31.302986499Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:27:31.303165 containerd[1446]: time="2024-12-13T13:27:31.303085619Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:27:31.303165 containerd[1446]: time="2024-12-13T13:27:31.303102085Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:27:31.303165 containerd[1446]: time="2024-12-13T13:27:31.303115355Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:27:31.303165 containerd[1446]: time="2024-12-13T13:27:31.303128094Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:27:31.303165 containerd[1446]: time="2024-12-13T13:27:31.303139972Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:27:31.303165 containerd[1446]: time="2024-12-13T13:27:31.303151399Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:27:31.303165 containerd[1446]: time="2024-12-13T13:27:31.303163769Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:27:31.303279 containerd[1446]: time="2024-12-13T13:27:31.303179989Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:27:31.303279 containerd[1446]: time="2024-12-13T13:27:31.303192235Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:27:31.303279 containerd[1446]: time="2024-12-13T13:27:31.303204318Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:27:31.303279 containerd[1446]: time="2024-12-13T13:27:31.303215500Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:27:31.303279 containerd[1446]: time="2024-12-13T13:27:31.303236143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303279 containerd[1446]: time="2024-12-13T13:27:31.303249086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303279 containerd[1446]: time="2024-12-13T13:27:31.303268050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303279 containerd[1446]: time="2024-12-13T13:27:31.303279765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Dec 13 13:27:31.303409 containerd[1446]: time="2024-12-13T13:27:31.303291725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303409 containerd[1446]: time="2024-12-13T13:27:31.303304545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303409 containerd[1446]: time="2024-12-13T13:27:31.303315890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303409 containerd[1446]: time="2024-12-13T13:27:31.303328383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303409 containerd[1446]: time="2024-12-13T13:27:31.303342145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303409 containerd[1446]: time="2024-12-13T13:27:31.303358529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303409 containerd[1446]: time="2024-12-13T13:27:31.303370243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303409 containerd[1446]: time="2024-12-13T13:27:31.303381589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303409 containerd[1446]: time="2024-12-13T13:27:31.303393794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303409 containerd[1446]: time="2024-12-13T13:27:31.303408130Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:27:31.303588 containerd[1446]: time="2024-12-13T13:27:31.303428896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303588 containerd[1446]: time="2024-12-13T13:27:31.303441634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.303588 containerd[1446]: time="2024-12-13T13:27:31.303451751Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:27:31.306238 containerd[1446]: time="2024-12-13T13:27:31.304578164Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:27:31.306238 containerd[1446]: time="2024-12-13T13:27:31.304615068Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:27:31.306238 containerd[1446]: time="2024-12-13T13:27:31.304629035Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:27:31.306238 containerd[1446]: time="2024-12-13T13:27:31.304641404Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:27:31.306238 containerd[1446]: time="2024-12-13T13:27:31.304651398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.306238 containerd[1446]: time="2024-12-13T13:27:31.304664587Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Dec 13 13:27:31.306238 containerd[1446]: time="2024-12-13T13:27:31.304675728Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:27:31.306238 containerd[1446]: time="2024-12-13T13:27:31.304744457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:27:31.306417 containerd[1446]: time="2024-12-13T13:27:31.305192302Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:27:31.306417 containerd[1446]: time="2024-12-13T13:27:31.305243787Z" level=info msg="Connect containerd service" Dec 13 13:27:31.306417 containerd[1446]: time="2024-12-13T13:27:31.305274343Z" level=info msg="using legacy CRI server" Dec 13 13:27:31.306417 containerd[1446]: time="2024-12-13T13:27:31.305280978Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:27:31.306417 containerd[1446]: time="2024-12-13T13:27:31.305620610Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:27:31.306613 
containerd[1446]: time="2024-12-13T13:27:31.306531045Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:27:31.306892 containerd[1446]: time="2024-12-13T13:27:31.306850648Z" level=info msg="Start subscribing containerd event" Dec 13 13:27:31.306981 containerd[1446]: time="2024-12-13T13:27:31.306965743Z" level=info msg="Start recovering state" Dec 13 13:27:31.307107 containerd[1446]: time="2024-12-13T13:27:31.307091733Z" level=info msg="Start event monitor" Dec 13 13:27:31.307164 containerd[1446]: time="2024-12-13T13:27:31.307151574Z" level=info msg="Start snapshots syncer" Dec 13 13:27:31.307222 containerd[1446]: time="2024-12-13T13:27:31.307209531Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:27:31.307267 containerd[1446]: time="2024-12-13T13:27:31.307257084Z" level=info msg="Start streaming server" Dec 13 13:27:31.307750 containerd[1446]: time="2024-12-13T13:27:31.307719388Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:27:31.307821 containerd[1446]: time="2024-12-13T13:27:31.307798971Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:27:31.308028 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:27:31.310060 containerd[1446]: time="2024-12-13T13:27:31.309963652Z" level=info msg="containerd successfully booted in 0.042470s" Dec 13 13:27:31.414968 tar[1434]: linux-arm64/LICENSE Dec 13 13:27:31.415073 tar[1434]: linux-arm64/README.md Dec 13 13:27:31.427525 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 13:27:31.945716 systemd-networkd[1388]: eth0: Gained IPv6LL Dec 13 13:27:31.948674 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:27:31.950073 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:27:31.963781 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 13:27:31.966231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:27:31.966864 sshd_keygen[1441]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:27:31.968683 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:27:31.986330 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:27:31.989499 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:27:31.990735 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 13:27:31.992539 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 13:27:31.995095 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:27:31.998255 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 13:27:31.999278 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:27:31.999447 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:27:32.013294 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:27:32.023401 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:27:32.026605 systemd[1]: Started getty@tty1.service - Getty on tty1. 
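[annotation] The "failed to load cni during init" error near the top of this stretch is expected on a first boot: the CRI config dumped just above points NetworkPluginConfDir at /etc/cni/net.d, which stays empty until a network add-on installs a conflist. A minimal Go sketch of a file that would satisfy the loader; the network name, bridge name, subnet, and file name are illustrative assumptions, not values taken from this host:

package main

import (
	"log"
	"os"
	"path/filepath"
)

// Writes a minimal CNI bridge conflist into the directory the CRI plugin
// watches (NetworkPluginConfDir in the config dump above). "bridge" and
// "host-local" are standard CNI reference plugins; the name "demo-net" and
// subnet 10.85.0.0/16 are made-up example values.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.85.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	// With NetworkPluginMaxConfNum:1 the lexically first file wins, hence the 10- prefix.
	if err := os.WriteFile(filepath.Join(dir, "10-demo.conflist"), []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}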
Dec 13 13:27:32.028926 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 13:27:32.030501 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:27:32.461211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:27:32.462526 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:27:32.464959 (kubelet)[1528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:27:32.467539 systemd[1]: Startup finished in 557ms (kernel) + 4.601s (initrd) + 3.202s (userspace) = 8.362s. Dec 13 13:27:32.484371 agetty[1522]: failed to open credentials directory Dec 13 13:27:32.484760 agetty[1521]: failed to open credentials directory Dec 13 13:27:32.991788 kubelet[1528]: E1213 13:27:32.991646 1528 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:27:32.994384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:27:32.994555 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:27:37.304027 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:27:37.305133 systemd[1]: Started sshd@0-10.0.0.129:22-10.0.0.1:58418.service - OpenSSH per-connection server daemon (10.0.0.1:58418). Dec 13 13:27:37.382116 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 58418 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:37.383912 sshd-session[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:37.396448 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:27:37.405777 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:27:37.407919 systemd-logind[1423]: New session 1 of user core. Dec 13 13:27:37.417512 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:27:37.419898 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:27:37.426425 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:27:37.497342 systemd[1545]: Queued start job for default target default.target. Dec 13 13:27:37.507434 systemd[1545]: Created slice app.slice - User Application Slice. Dec 13 13:27:37.507499 systemd[1545]: Reached target paths.target - Paths. Dec 13 13:27:37.507512 systemd[1545]: Reached target timers.target - Timers. Dec 13 13:27:37.508766 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:27:37.519156 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:27:37.519226 systemd[1545]: Reached target sockets.target - Sockets. Dec 13 13:27:37.519239 systemd[1545]: Reached target basic.target - Basic System. Dec 13 13:27:37.519281 systemd[1545]: Reached target default.target - Main User Target. Dec 13 13:27:37.519311 systemd[1545]: Startup finished in 87ms. Dec 13 13:27:37.519605 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:27:37.520991 systemd[1]: Started session-1.scope - Session 1 of User core. 
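[annotation] The kubelet exit above (open /var/lib/kubelet/config.yaml: no such file or directory) is the usual kubeadm bootstrap ordering: kubelet.service starts before anything has written its config file, fails, and is restarted by systemd until the file appears. A hedged sketch of a minimal KubeletConfiguration that would let it start; kubeadm generates the real file, and only cgroupDriver, staticPodPath, and the CRI endpoint shown here are corroborated elsewhere in this log:

package main

import (
	"log"
	"os"
)

// Minimal KubeletConfiguration (kubelet.config.k8s.io/v1beta1) as a sketch.
// cgroupDriver: systemd matches SystemdCgroup:true in the containerd CRI
// config dumped earlier; staticPodPath matches the "Adding static pod path"
// entry the kubelet logs further down once it does start.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}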
Dec 13 13:27:37.583876 systemd[1]: Started sshd@1-10.0.0.129:22-10.0.0.1:58434.service - OpenSSH per-connection server daemon (10.0.0.1:58434). Dec 13 13:27:37.624732 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 58434 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:37.625932 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:37.629733 systemd-logind[1423]: New session 2 of user core. Dec 13 13:27:37.643627 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:27:37.697116 sshd[1558]: Connection closed by 10.0.0.1 port 58434 Dec 13 13:27:37.697599 sshd-session[1556]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:37.708069 systemd[1]: sshd@1-10.0.0.129:22-10.0.0.1:58434.service: Deactivated successfully. Dec 13 13:27:37.709565 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:27:37.711620 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:27:37.712802 systemd[1]: Started sshd@2-10.0.0.129:22-10.0.0.1:58440.service - OpenSSH per-connection server daemon (10.0.0.1:58440). Dec 13 13:27:37.713597 systemd-logind[1423]: Removed session 2. Dec 13 13:27:37.754658 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 58440 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:37.755897 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:37.759539 systemd-logind[1423]: New session 3 of user core. Dec 13 13:27:37.771615 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 13:27:37.822401 sshd[1565]: Connection closed by 10.0.0.1 port 58440 Dec 13 13:27:37.822751 sshd-session[1563]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:37.836865 systemd[1]: sshd@2-10.0.0.129:22-10.0.0.1:58440.service: Deactivated successfully. Dec 13 13:27:37.838216 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:27:37.840539 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit. Dec 13 13:27:37.853766 systemd[1]: Started sshd@3-10.0.0.129:22-10.0.0.1:58456.service - OpenSSH per-connection server daemon (10.0.0.1:58456). Dec 13 13:27:37.854942 systemd-logind[1423]: Removed session 3. Dec 13 13:27:37.890079 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 58456 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:37.891192 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:37.894590 systemd-logind[1423]: New session 4 of user core. Dec 13 13:27:37.905622 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:27:37.956990 sshd[1572]: Connection closed by 10.0.0.1 port 58456 Dec 13 13:27:37.957349 sshd-session[1570]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:37.971650 systemd[1]: sshd@3-10.0.0.129:22-10.0.0.1:58456.service: Deactivated successfully. Dec 13 13:27:37.973053 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:27:37.975534 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit. Dec 13 13:27:37.983704 systemd[1]: Started sshd@4-10.0.0.129:22-10.0.0.1:58464.service - OpenSSH per-connection server daemon (10.0.0.1:58464). Dec 13 13:27:37.984877 systemd-logind[1423]: Removed session 4. 
Dec 13 13:27:38.019924 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 58464 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:38.021038 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:38.024486 systemd-logind[1423]: New session 5 of user core. Dec 13 13:27:38.036599 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 13:27:38.101248 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:27:38.101553 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:27:38.123223 sudo[1580]: pam_unix(sudo:session): session closed for user root Dec 13 13:27:38.124566 sshd[1579]: Connection closed by 10.0.0.1 port 58464 Dec 13 13:27:38.125114 sshd-session[1577]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:38.141858 systemd[1]: sshd@4-10.0.0.129:22-10.0.0.1:58464.service: Deactivated successfully. Dec 13 13:27:38.144717 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:27:38.145917 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:27:38.147276 systemd[1]: Started sshd@5-10.0.0.129:22-10.0.0.1:58474.service - OpenSSH per-connection server daemon (10.0.0.1:58474). Dec 13 13:27:38.147957 systemd-logind[1423]: Removed session 5. Dec 13 13:27:38.187015 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 58474 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:38.188249 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:38.191829 systemd-logind[1423]: New session 6 of user core. Dec 13 13:27:38.201636 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:27:38.253420 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:27:38.254053 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:27:38.257126 sudo[1589]: pam_unix(sudo:session): session closed for user root Dec 13 13:27:38.261784 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:27:38.262048 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:27:38.279782 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:27:38.302717 augenrules[1611]: No rules Dec 13 13:27:38.303832 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:27:38.305505 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:27:38.306896 sudo[1588]: pam_unix(sudo:session): session closed for user root Dec 13 13:27:38.308244 sshd[1587]: Connection closed by 10.0.0.1 port 58474 Dec 13 13:27:38.308675 sshd-session[1585]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:38.318875 systemd[1]: sshd@5-10.0.0.129:22-10.0.0.1:58474.service: Deactivated successfully. Dec 13 13:27:38.320326 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:27:38.321633 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:27:38.323756 systemd[1]: Started sshd@6-10.0.0.129:22-10.0.0.1:58488.service - OpenSSH per-connection server daemon (10.0.0.1:58488). Dec 13 13:27:38.324719 systemd-logind[1423]: Removed session 6. 
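[annotation] Each sshd@N-...service block above follows the same round trip: accept publickey for core, open a PAM session, run one privileged command via sudo, close the session, tear down the per-connection unit. A rough Go equivalent of one such round trip using golang.org/x/crypto/ssh; the key path and the command are placeholders, and host-key verification is skipped only because this is a sketch:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path; the log only shows the key's server-side fingerprint.
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify in real use
	}
	client, err := ssh.Dial("tcp", "10.0.0.129:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Mirrors the logged pattern: one sudo command per short-lived session.
	out, err := session.CombinedOutput("sudo systemctl restart audit-rules")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}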
Dec 13 13:27:38.363263 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 58488 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:38.364361 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:38.368166 systemd-logind[1423]: New session 7 of user core. Dec 13 13:27:38.378605 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:27:38.430770 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:27:38.431051 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:27:38.761726 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 13:27:38.761868 (dockerd)[1644]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 13:27:39.002779 dockerd[1644]: time="2024-12-13T13:27:39.002723936Z" level=info msg="Starting up" Dec 13 13:27:39.170265 dockerd[1644]: time="2024-12-13T13:27:39.170116306Z" level=info msg="Loading containers: start." Dec 13 13:27:39.332517 kernel: Initializing XFRM netlink socket Dec 13 13:27:39.411401 systemd-networkd[1388]: docker0: Link UP Dec 13 13:27:39.446784 dockerd[1644]: time="2024-12-13T13:27:39.446657780Z" level=info msg="Loading containers: done." Dec 13 13:27:39.459564 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2854975363-merged.mount: Deactivated successfully. Dec 13 13:27:39.464140 dockerd[1644]: time="2024-12-13T13:27:39.464080610Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 13:27:39.464235 dockerd[1644]: time="2024-12-13T13:27:39.464187726Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 13:27:39.464391 dockerd[1644]: time="2024-12-13T13:27:39.464358844Z" level=info msg="Daemon has completed initialization" Dec 13 13:27:39.491921 dockerd[1644]: time="2024-12-13T13:27:39.491843724Z" level=info msg="API listen on /run/docker.sock" Dec 13 13:27:39.492344 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 13:27:40.252334 containerd[1446]: time="2024-12-13T13:27:40.252266159Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 13:27:40.971838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432012458.mount: Deactivated successfully. 
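[annotation] Once dockerd logs "API listen on /run/docker.sock", the daemon is reachable over its Unix socket. A small sketch using the official Go client (github.com/docker/docker/client) to confirm that, negotiating the API version rather than assuming the 27.3.1 daemon's default:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to unix:///var/run/docker.sock when DOCKER_HOST is unset.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("daemon up, API version %s\n", ping.APIVersion)
}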
Dec 13 13:27:42.177594 containerd[1446]: time="2024-12-13T13:27:42.177546142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:42.178587 containerd[1446]: time="2024-12-13T13:27:42.178530424Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252" Dec 13 13:27:42.179275 containerd[1446]: time="2024-12-13T13:27:42.179235131Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:42.183256 containerd[1446]: time="2024-12-13T13:27:42.183206005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:42.183932 containerd[1446]: time="2024-12-13T13:27:42.183898043Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 1.931590602s" Dec 13 13:27:42.183932 containerd[1446]: time="2024-12-13T13:27:42.183930702Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 13:27:42.202012 containerd[1446]: time="2024-12-13T13:27:42.201906639Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 13:27:43.245238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:27:43.254711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:27:43.349422 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:27:43.353611 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:27:43.431960 kubelet[1917]: E1213 13:27:43.431908 1917 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:27:43.436091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:27:43.436434 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 13:27:43.906695 containerd[1446]: time="2024-12-13T13:27:43.906634730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:43.907812 containerd[1446]: time="2024-12-13T13:27:43.907723241Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299" Dec 13 13:27:43.908565 containerd[1446]: time="2024-12-13T13:27:43.908507489Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:43.911560 containerd[1446]: time="2024-12-13T13:27:43.911526276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:43.912741 containerd[1446]: time="2024-12-13T13:27:43.912661532Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.710717615s" Dec 13 13:27:43.912741 containerd[1446]: time="2024-12-13T13:27:43.912695616Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 13:27:43.931405 containerd[1446]: time="2024-12-13T13:27:43.931372006Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 13:27:44.898643 containerd[1446]: time="2024-12-13T13:27:44.898589075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:44.899377 containerd[1446]: time="2024-12-13T13:27:44.899161885Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642" Dec 13 13:27:44.900204 containerd[1446]: time="2024-12-13T13:27:44.900149681Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:44.905884 containerd[1446]: time="2024-12-13T13:27:44.905812788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:44.907081 containerd[1446]: time="2024-12-13T13:27:44.906845734Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 975.438002ms" Dec 13 13:27:44.907081 containerd[1446]: time="2024-12-13T13:27:44.906878874Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 13:27:44.926231 
containerd[1446]: time="2024-12-13T13:27:44.926150838Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 13:27:45.960802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748717252.mount: Deactivated successfully. Dec 13 13:27:46.147043 containerd[1446]: time="2024-12-13T13:27:46.146815772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:46.147839 containerd[1446]: time="2024-12-13T13:27:46.147514104Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979" Dec 13 13:27:46.148540 containerd[1446]: time="2024-12-13T13:27:46.148503855Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:46.151610 containerd[1446]: time="2024-12-13T13:27:46.151577406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:46.152270 containerd[1446]: time="2024-12-13T13:27:46.152175374Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.225985901s" Dec 13 13:27:46.152270 containerd[1446]: time="2024-12-13T13:27:46.152210166Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 13:27:46.170739 containerd[1446]: time="2024-12-13T13:27:46.170697939Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:27:46.738248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3719897426.mount: Deactivated successfully. 
Dec 13 13:27:47.357160 containerd[1446]: time="2024-12-13T13:27:47.356973229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:47.358052 containerd[1446]: time="2024-12-13T13:27:47.357813762Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Dec 13 13:27:47.358993 containerd[1446]: time="2024-12-13T13:27:47.358936892Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:47.362408 containerd[1446]: time="2024-12-13T13:27:47.362362601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:47.363079 containerd[1446]: time="2024-12-13T13:27:47.363045970Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.192313522s" Dec 13 13:27:47.363079 containerd[1446]: time="2024-12-13T13:27:47.363077098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 13:27:47.380626 containerd[1446]: time="2024-12-13T13:27:47.380599235Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 13:27:47.818638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount321811647.mount: Deactivated successfully. 
Dec 13 13:27:47.823044 containerd[1446]: time="2024-12-13T13:27:47.822811313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:47.823726 containerd[1446]: time="2024-12-13T13:27:47.823590753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Dec 13 13:27:47.824720 containerd[1446]: time="2024-12-13T13:27:47.824665105Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:47.827673 containerd[1446]: time="2024-12-13T13:27:47.827625862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:47.828369 containerd[1446]: time="2024-12-13T13:27:47.828230048Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 447.59784ms" Dec 13 13:27:47.828369 containerd[1446]: time="2024-12-13T13:27:47.828262459Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 13:27:47.845854 containerd[1446]: time="2024-12-13T13:27:47.845677333Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 13:27:48.429341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1718599108.mount: Deactivated successfully. Dec 13 13:27:50.165960 containerd[1446]: time="2024-12-13T13:27:50.165899190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:50.166336 containerd[1446]: time="2024-12-13T13:27:50.166294098Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Dec 13 13:27:50.167124 containerd[1446]: time="2024-12-13T13:27:50.167080306Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:50.170295 containerd[1446]: time="2024-12-13T13:27:50.170259162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:50.171942 containerd[1446]: time="2024-12-13T13:27:50.171905718Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.326198507s" Dec 13 13:27:50.171972 containerd[1446]: time="2024-12-13T13:27:50.171941546Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 13:27:53.686992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
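[annotation] Each PullImage / "returns image reference" pair in the sequence above is the CRI plugin resolving a tag to a digest and unpacking the layers into the overlayfs snapshotter, which is what the interleaved ImageCreate events record. The same pull can be driven directly with the containerd Go client; "k8s.io" is the namespace the CRI plugin keeps its images under:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin stores its images under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// WithPullUnpack unpacks layers into the default (overlayfs) snapshotter,
	// mirroring the ImageCreate events in the log above.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}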
Dec 13 13:27:53.693879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:27:53.807068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:27:53.810585 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:27:53.847677 kubelet[2150]: E1213 13:27:53.847618 2150 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:27:53.850477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:27:53.850619 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:27:54.350913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:27:54.359866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:27:54.377556 systemd[1]: Reloading requested from client PID 2166 ('systemctl') (unit session-7.scope)... Dec 13 13:27:54.377574 systemd[1]: Reloading... Dec 13 13:27:54.440393 zram_generator::config[2205]: No configuration found. Dec 13 13:27:54.589610 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:27:54.640360 systemd[1]: Reloading finished in 262 ms. Dec 13 13:27:54.689934 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:27:54.693533 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:27:54.693722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:27:54.695286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:27:54.787574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:27:54.790978 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:27:54.833361 kubelet[2252]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:27:54.833361 kubelet[2252]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:27:54.833361 kubelet[2252]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 13:27:54.834505 kubelet[2252]: I1213 13:27:54.834256 2252 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:27:56.077482 kubelet[2252]: I1213 13:27:56.077427 2252 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:27:56.077482 kubelet[2252]: I1213 13:27:56.077478 2252 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:27:56.077980 kubelet[2252]: I1213 13:27:56.077692 2252 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:27:56.116792 kubelet[2252]: I1213 13:27:56.116676 2252 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:27:56.120865 kubelet[2252]: E1213 13:27:56.119833 2252 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.129:6443: connect: connection refused Dec 13 13:27:56.126310 kubelet[2252]: I1213 13:27:56.126284 2252 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:27:56.126671 kubelet[2252]: I1213 13:27:56.126652 2252 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:27:56.126909 kubelet[2252]: I1213 13:27:56.126890 2252 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:27:56.127026 kubelet[2252]: I1213 13:27:56.127014 2252 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:27:56.127078 kubelet[2252]: I1213 13:27:56.127070 2252 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:27:56.127240 kubelet[2252]: I1213 13:27:56.127223 2252 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:27:56.129603 kubelet[2252]: I1213 13:27:56.129582 2252 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:27:56.129698 kubelet[2252]: 
I1213 13:27:56.129687 2252 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:27:56.129764 kubelet[2252]: I1213 13:27:56.129756 2252 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:27:56.129826 kubelet[2252]: I1213 13:27:56.129818 2252 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:27:56.130327 kubelet[2252]: W1213 13:27:56.130273 2252 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Dec 13 13:27:56.130327 kubelet[2252]: E1213 13:27:56.130327 2252 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Dec 13 13:27:56.131911 kubelet[2252]: W1213 13:27:56.131863 2252 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Dec 13 13:27:56.131911 kubelet[2252]: E1213 13:27:56.131909 2252 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Dec 13 13:27:56.132260 kubelet[2252]: I1213 13:27:56.132215 2252 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:27:56.133036 kubelet[2252]: I1213 13:27:56.133015 2252 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:27:56.133317 kubelet[2252]: W1213 13:27:56.133306 2252 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 13:27:56.134290 kubelet[2252]: I1213 13:27:56.134269 2252 server.go:1256] "Started kubelet" Dec 13 13:27:56.134446 kubelet[2252]: I1213 13:27:56.134408 2252 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:27:56.134830 kubelet[2252]: I1213 13:27:56.134808 2252 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:27:56.135161 kubelet[2252]: I1213 13:27:56.135142 2252 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:27:56.135707 kubelet[2252]: I1213 13:27:56.135660 2252 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:27:56.138725 kubelet[2252]: I1213 13:27:56.137810 2252 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:27:56.138725 kubelet[2252]: I1213 13:27:56.137992 2252 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:27:56.138725 kubelet[2252]: I1213 13:27:56.138088 2252 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:27:56.138725 kubelet[2252]: I1213 13:27:56.138132 2252 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:27:56.138725 kubelet[2252]: W1213 13:27:56.138370 2252 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Dec 13 13:27:56.138725 kubelet[2252]: E1213 13:27:56.138408 2252 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Dec 13 13:27:56.138725 kubelet[2252]: E1213 13:27:56.138470 2252 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:27:56.138725 kubelet[2252]: E1213 13:27:56.138667 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="200ms" Dec 13 13:27:56.139610 kubelet[2252]: E1213 13:27:56.139440 2252 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:27:56.141398 kubelet[2252]: I1213 13:27:56.140856 2252 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:27:56.141398 kubelet[2252]: I1213 13:27:56.140874 2252 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:27:56.141398 kubelet[2252]: I1213 13:27:56.140942 2252 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:27:56.146045 kubelet[2252]: E1213 13:27:56.146012 2252 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.129:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.129:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bf8eb23cfc2d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:27:56.134243373 +0000 UTC m=+1.340142822,LastTimestamp:2024-12-13 13:27:56.134243373 +0000 UTC m=+1.340142822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:27:56.152552 kubelet[2252]: I1213 13:27:56.152520 2252 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:27:56.152552 kubelet[2252]: I1213 13:27:56.152540 2252 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:27:56.152644 kubelet[2252]: I1213 13:27:56.152560 2252 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:27:56.154079 kubelet[2252]: I1213 13:27:56.154044 2252 policy_none.go:49] "None policy: Start" Dec 13 13:27:56.154629 kubelet[2252]: I1213 13:27:56.154606 2252 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:27:56.154688 kubelet[2252]: I1213 13:27:56.154649 2252 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:27:56.158324 kubelet[2252]: I1213 13:27:56.158296 2252 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:27:56.160357 kubelet[2252]: I1213 13:27:56.160253 2252 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:27:56.160357 kubelet[2252]: I1213 13:27:56.160283 2252 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:27:56.160357 kubelet[2252]: I1213 13:27:56.160308 2252 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:27:56.160357 kubelet[2252]: E1213 13:27:56.160355 2252 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:27:56.161775 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 13:27:56.162184 kubelet[2252]: W1213 13:27:56.162117 2252 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Dec 13 13:27:56.162184 kubelet[2252]: E1213 13:27:56.162166 2252 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Dec 13 13:27:56.175308 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:27:56.178144 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 13:27:56.190237 kubelet[2252]: I1213 13:27:56.190190 2252 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:27:56.190572 kubelet[2252]: I1213 13:27:56.190470 2252 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:27:56.191526 kubelet[2252]: E1213 13:27:56.191505 2252 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 13:27:56.239664 kubelet[2252]: I1213 13:27:56.239637 2252 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:27:56.241944 kubelet[2252]: E1213 13:27:56.241911 2252 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Dec 13 13:27:56.261128 kubelet[2252]: I1213 13:27:56.261092 2252 topology_manager.go:215] "Topology Admit Handler" podUID="475c5447ac86147a1ce7bb90158e2860" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:27:56.262345 kubelet[2252]: I1213 13:27:56.262318 2252 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:27:56.263441 kubelet[2252]: I1213 13:27:56.263342 2252 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:27:56.268629 systemd[1]: Created slice kubepods-burstable-pod475c5447ac86147a1ce7bb90158e2860.slice - libcontainer container kubepods-burstable-pod475c5447ac86147a1ce7bb90158e2860.slice. Dec 13 13:27:56.280764 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 13:27:56.294599 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
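[annotation] Every "connection refused" to 10.0.0.129:6443 in this stretch is the expected chicken-and-egg of static-pod bootstrap: the kubelet's reflectors, lease updates, node registration, and event posts all need the API server, but the API server is one of the three static pods (the Topology Admit Handler entries above) that this same kubelet is only now starting, so everything retries until it comes up. A trivial probe showing exactly what the kubelet's clients see at this point:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the kubelet's reflectors are hitting in the log above.
	conn, err := net.DialTimeout("tcp", "10.0.0.129:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not up yet:", err) // "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}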
Dec 13 13:27:56.339471 kubelet[2252]: E1213 13:27:56.339357 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="400ms" Dec 13 13:27:56.439782 kubelet[2252]: I1213 13:27:56.439728 2252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:27:56.439868 kubelet[2252]: I1213 13:27:56.439794 2252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/475c5447ac86147a1ce7bb90158e2860-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"475c5447ac86147a1ce7bb90158e2860\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:27:56.439868 kubelet[2252]: I1213 13:27:56.439819 2252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/475c5447ac86147a1ce7bb90158e2860-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"475c5447ac86147a1ce7bb90158e2860\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:27:56.439868 kubelet[2252]: I1213 13:27:56.439839 2252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/475c5447ac86147a1ce7bb90158e2860-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"475c5447ac86147a1ce7bb90158e2860\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:27:56.439868 kubelet[2252]: I1213 13:27:56.439859 2252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:27:56.439957 kubelet[2252]: I1213 13:27:56.439877 2252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:27:56.439957 kubelet[2252]: I1213 13:27:56.439897 2252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:27:56.439957 kubelet[2252]: I1213 13:27:56.439915 2252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:27:56.439957 
kubelet[2252]: I1213 13:27:56.439934 2252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:27:56.443751 kubelet[2252]: I1213 13:27:56.443717 2252 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:27:56.444125 kubelet[2252]: E1213 13:27:56.444098 2252 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Dec 13 13:27:56.581547 kubelet[2252]: E1213 13:27:56.581510 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:56.582238 containerd[1446]: time="2024-12-13T13:27:56.582203632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:475c5447ac86147a1ce7bb90158e2860,Namespace:kube-system,Attempt:0,}" Dec 13 13:27:56.593500 kubelet[2252]: E1213 13:27:56.593422 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:56.593820 containerd[1446]: time="2024-12-13T13:27:56.593781516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 13:27:56.597331 kubelet[2252]: E1213 13:27:56.597240 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:56.597567 containerd[1446]: time="2024-12-13T13:27:56.597537710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 13:27:56.739985 kubelet[2252]: E1213 13:27:56.739952 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="800ms" Dec 13 13:27:56.845764 kubelet[2252]: I1213 13:27:56.845672 2252 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:27:56.846058 kubelet[2252]: E1213 13:27:56.846016 2252 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Dec 13 13:27:57.079018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515737844.mount: Deactivated successfully. 
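[annotation] The "Nameserver limits exceeded" warnings above come from the kubelet trimming the host's /etc/resolv.conf before building pod DNS config: resolver tradition (glibc) honours at most three nameserver entries, so with more than three configured the kubelet keeps the first three and logs the applied line. A quick sketch of the same check:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// Reimplements the check behind the kubelet's "Nameserver limits exceeded"
// warning: at most three nameserver entries are honoured, anything beyond
// the third is dropped from the applied configuration.
func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	if len(servers) > 3 {
		fmt.Printf("limits exceeded: keeping %v, dropping %v\n", servers[:3], servers[3:])
	} else {
		fmt.Printf("ok: %v\n", servers)
	}
}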
Dec 13 13:27:57.083652 containerd[1446]: time="2024-12-13T13:27:57.083606997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:27:57.085852 containerd[1446]: time="2024-12-13T13:27:57.085815120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Dec 13 13:27:57.087704 containerd[1446]: time="2024-12-13T13:27:57.087650766Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:27:57.088967 containerd[1446]: time="2024-12-13T13:27:57.088920711Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:27:57.089697 containerd[1446]: time="2024-12-13T13:27:57.089667226Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:27:57.090202 containerd[1446]: time="2024-12-13T13:27:57.090173083Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 13:27:57.090921 containerd[1446]: time="2024-12-13T13:27:57.090764923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 13:27:57.092779 containerd[1446]: time="2024-12-13T13:27:57.092746758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:27:57.093693 containerd[1446]: time="2024-12-13T13:27:57.093656955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 511.377019ms"
Dec 13 13:27:57.094943 containerd[1446]: time="2024-12-13T13:27:57.094906244Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 497.312888ms"
Dec 13 13:27:57.097095 containerd[1446]: time="2024-12-13T13:27:57.096972262Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 503.129013ms"
Dec 13 13:27:57.228011 containerd[1446]: time="2024-12-13T13:27:57.227819664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:27:57.228011 containerd[1446]: time="2024-12-13T13:27:57.227890636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:27:57.228011 containerd[1446]: time="2024-12-13T13:27:57.227901565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:27:57.228011 containerd[1446]: time="2024-12-13T13:27:57.227980063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:27:57.229598 containerd[1446]: time="2024-12-13T13:27:57.229329107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:27:57.229598 containerd[1446]: time="2024-12-13T13:27:57.229377503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:27:57.229598 containerd[1446]: time="2024-12-13T13:27:57.229402281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:27:57.229750 containerd[1446]: time="2024-12-13T13:27:57.229581455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:27:57.229750 containerd[1446]: time="2024-12-13T13:27:57.229654029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:27:57.229832 containerd[1446]: time="2024-12-13T13:27:57.229512243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:27:57.231625 containerd[1446]: time="2024-12-13T13:27:57.229758226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:27:57.231625 containerd[1446]: time="2024-12-13T13:27:57.231509609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:27:57.250607 systemd[1]: Started cri-containerd-4249db3cb843d2f9836a1b3827bd7c5bbc9d2af4c72bd1b4969b297b268f5def.scope - libcontainer container 4249db3cb843d2f9836a1b3827bd7c5bbc9d2af4c72bd1b4969b297b268f5def.
Dec 13 13:27:57.251962 systemd[1]: Started cri-containerd-ea8c8006a1a31d3ec35f96911c8b1391b8a16bfc32eee7e4fef1bf788e8733c1.scope - libcontainer container ea8c8006a1a31d3ec35f96911c8b1391b8a16bfc32eee7e4fef1bf788e8733c1.
Dec 13 13:27:57.254715 systemd[1]: Started cri-containerd-ecb6d9ec26c785431b14a07ace019bb14347f1909a8e2be98b7eb84dff140e8d.scope - libcontainer container ecb6d9ec26c785431b14a07ace019bb14347f1909a8e2be98b7eb84dff140e8d.
Dec 13 13:27:57.281863 containerd[1446]: time="2024-12-13T13:27:57.281765564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:475c5447ac86147a1ce7bb90158e2860,Namespace:kube-system,Attempt:0,} returns sandbox id \"4249db3cb843d2f9836a1b3827bd7c5bbc9d2af4c72bd1b4969b297b268f5def\""
Dec 13 13:27:57.282862 kubelet[2252]: E1213 13:27:57.282841 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:27:57.285539 containerd[1446]: time="2024-12-13T13:27:57.285506828Z" level=info msg="CreateContainer within sandbox \"4249db3cb843d2f9836a1b3827bd7c5bbc9d2af4c72bd1b4969b297b268f5def\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 13:27:57.289342 containerd[1446]: time="2024-12-13T13:27:57.289279595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea8c8006a1a31d3ec35f96911c8b1391b8a16bfc32eee7e4fef1bf788e8733c1\""
Dec 13 13:27:57.290004 kubelet[2252]: E1213 13:27:57.289983 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:27:57.291863 containerd[1446]: time="2024-12-13T13:27:57.291834136Z" level=info msg="CreateContainer within sandbox \"ea8c8006a1a31d3ec35f96911c8b1391b8a16bfc32eee7e4fef1bf788e8733c1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 13:27:57.294182 containerd[1446]: time="2024-12-13T13:27:57.294155343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecb6d9ec26c785431b14a07ace019bb14347f1909a8e2be98b7eb84dff140e8d\""
Dec 13 13:27:57.294855 kubelet[2252]: E1213 13:27:57.294831 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:27:57.296621 containerd[1446]: time="2024-12-13T13:27:57.296578906Z" level=info msg="CreateContainer within sandbox \"ecb6d9ec26c785431b14a07ace019bb14347f1909a8e2be98b7eb84dff140e8d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 13:27:57.300593 containerd[1446]: time="2024-12-13T13:27:57.300512634Z" level=info msg="CreateContainer within sandbox \"4249db3cb843d2f9836a1b3827bd7c5bbc9d2af4c72bd1b4969b297b268f5def\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"18c7df8312e686514ab5d972955e9572c96e34ad52770e3a32ea3b17f932059e\""
Dec 13 13:27:57.301049 containerd[1446]: time="2024-12-13T13:27:57.301019651Z" level=info msg="StartContainer for \"18c7df8312e686514ab5d972955e9572c96e34ad52770e3a32ea3b17f932059e\""
Dec 13 13:27:57.306391 containerd[1446]: time="2024-12-13T13:27:57.306332844Z" level=info msg="CreateContainer within sandbox \"ea8c8006a1a31d3ec35f96911c8b1391b8a16bfc32eee7e4fef1bf788e8733c1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7bdf9ae207311310b561cb159a3c9a0a58e06d44fc37644f14e03f2bdb6699f5\""
Dec 13 13:27:57.306853 containerd[1446]: time="2024-12-13T13:27:57.306827893Z" level=info msg="StartContainer for \"7bdf9ae207311310b561cb159a3c9a0a58e06d44fc37644f14e03f2bdb6699f5\""
Dec 13 13:27:57.312411 containerd[1446]: time="2024-12-13T13:27:57.312319859Z" level=info msg="CreateContainer within sandbox \"ecb6d9ec26c785431b14a07ace019bb14347f1909a8e2be98b7eb84dff140e8d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6623d1d04a145de7004745228f10e8855b7f40cd1f0f84ae9eb04aa1eca3c666\""
Dec 13 13:27:57.312839 containerd[1446]: time="2024-12-13T13:27:57.312812946Z" level=info msg="StartContainer for \"6623d1d04a145de7004745228f10e8855b7f40cd1f0f84ae9eb04aa1eca3c666\""
Dec 13 13:27:57.324597 systemd[1]: Started cri-containerd-18c7df8312e686514ab5d972955e9572c96e34ad52770e3a32ea3b17f932059e.scope - libcontainer container 18c7df8312e686514ab5d972955e9572c96e34ad52770e3a32ea3b17f932059e.
Dec 13 13:27:57.326821 systemd[1]: Started cri-containerd-7bdf9ae207311310b561cb159a3c9a0a58e06d44fc37644f14e03f2bdb6699f5.scope - libcontainer container 7bdf9ae207311310b561cb159a3c9a0a58e06d44fc37644f14e03f2bdb6699f5.
Dec 13 13:27:57.334855 systemd[1]: Started cri-containerd-6623d1d04a145de7004745228f10e8855b7f40cd1f0f84ae9eb04aa1eca3c666.scope - libcontainer container 6623d1d04a145de7004745228f10e8855b7f40cd1f0f84ae9eb04aa1eca3c666.
Dec 13 13:27:57.345308 kubelet[2252]: W1213 13:27:57.345257 2252 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused
Dec 13 13:27:57.345374 kubelet[2252]: E1213 13:27:57.345315 2252 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused
Dec 13 13:27:57.359085 containerd[1446]: time="2024-12-13T13:27:57.358997912Z" level=info msg="StartContainer for \"18c7df8312e686514ab5d972955e9572c96e34ad52770e3a32ea3b17f932059e\" returns successfully"
Dec 13 13:27:57.365593 containerd[1446]: time="2024-12-13T13:27:57.365537578Z" level=info msg="StartContainer for \"7bdf9ae207311310b561cb159a3c9a0a58e06d44fc37644f14e03f2bdb6699f5\" returns successfully"
Dec 13 13:27:57.411798 containerd[1446]: time="2024-12-13T13:27:57.410614439Z" level=info msg="StartContainer for \"6623d1d04a145de7004745228f10e8855b7f40cd1f0f84ae9eb04aa1eca3c666\" returns successfully"
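Each static pod above goes through the same three CRI steps in order: RunPodSandbox returns a sandbox ID, CreateContainer inside that sandbox returns a container ID, and StartContainer runs it (with systemd wrapping each in a cri-containerd-<id>.scope transient unit). The Go sketch below mirrors only that sequencing against a deliberately narrow, hypothetical interface; the real CRI is a gRPC API with much larger request and response types.

```go
package main

import "fmt"

// runtime is a hypothetical stand-in for the CRI runtime service,
// reduced to the three calls visible in the log.
type runtime interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// fakeRuntime fabricates IDs so the sequencing can be exercised locally.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}

func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	f.n++
	return fmt.Sprintf("container-%d", f.n), nil
}

func (f *fakeRuntime) StartContainer(id string) error { return nil }

// startStaticPod mirrors the ordering in the log: sandbox first, then
// create, then start, aborting on the first failure.
func startStaticPod(r runtime, pod string) error {
	sb, err := r.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox %s: %w", pod, err)
	}
	ctr, err := r.CreateContainer(sb, pod)
	if err != nil {
		return fmt.Errorf("CreateContainer in %s: %w", sb, err)
	}
	if err := r.StartContainer(ctr); err != nil {
		return fmt.Errorf("StartContainer %s: %w", ctr, err)
	}
	fmt.Printf("%s: sandbox %s, container %s started\n", pod, sb, ctr)
	return nil
}

func main() {
	r := &fakeRuntime{}
	for _, pod := range []string{"kube-apiserver-localhost", "kube-controller-manager-localhost", "kube-scheduler-localhost"} {
		if err := startStaticPod(r, pod); err != nil {
			fmt.Println(err)
		}
	}
}
```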
*v1.Node: Get "https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Dec 13 13:27:57.542226 kubelet[2252]: E1213 13:27:57.542181 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="1.6s" Dec 13 13:27:57.647722 kubelet[2252]: I1213 13:27:57.647105 2252 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:27:58.171501 kubelet[2252]: E1213 13:27:58.171320 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:58.172329 kubelet[2252]: E1213 13:27:58.172281 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:58.173098 kubelet[2252]: E1213 13:27:58.173013 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:59.172815 kubelet[2252]: E1213 13:27:59.172791 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:59.187803 kubelet[2252]: E1213 13:27:59.187758 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:59.681010 kubelet[2252]: I1213 13:27:59.680975 2252 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:28:00.133284 kubelet[2252]: I1213 13:28:00.133162 2252 apiserver.go:52] "Watching apiserver" Dec 13 13:28:00.138266 kubelet[2252]: I1213 13:28:00.138222 2252 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:28:01.921602 kubelet[2252]: E1213 13:28:01.921542 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:02.069401 kubelet[2252]: E1213 13:28:02.069358 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:02.175232 kubelet[2252]: E1213 13:28:02.175092 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:02.175232 kubelet[2252]: E1213 13:28:02.175215 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:02.409776 systemd[1]: Reloading requested from client PID 2532 ('systemctl') (unit session-7.scope)... Dec 13 13:28:02.409792 systemd[1]: Reloading... Dec 13 13:28:02.458581 zram_generator::config[2571]: No configuration found. 
Dec 13 13:28:02.537180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:28:02.598871 systemd[1]: Reloading finished in 188 ms. Dec 13 13:28:02.632656 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:02.637394 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:28:02.637712 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:02.637764 systemd[1]: kubelet.service: Consumed 1.738s CPU time, 114.7M memory peak, 0B memory swap peak. Dec 13 13:28:02.645751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:02.731323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:02.735548 (kubelet)[2613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:28:02.782032 kubelet[2613]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:28:02.782032 kubelet[2613]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:28:02.782032 kubelet[2613]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:28:02.782431 kubelet[2613]: I1213 13:28:02.782079 2613 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:28:02.785841 kubelet[2613]: I1213 13:28:02.785803 2613 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:28:02.785841 kubelet[2613]: I1213 13:28:02.785830 2613 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:28:02.786075 kubelet[2613]: I1213 13:28:02.786051 2613 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:28:02.787742 kubelet[2613]: I1213 13:28:02.787580 2613 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 13:28:02.791115 kubelet[2613]: I1213 13:28:02.790137 2613 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:28:02.796869 kubelet[2613]: I1213 13:28:02.796832 2613 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:28:02.797030 kubelet[2613]: I1213 13:28:02.797007 2613 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:28:02.797211 kubelet[2613]: I1213 13:28:02.797182 2613 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:28:02.797211 kubelet[2613]: I1213 13:28:02.797206 2613 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:28:02.797211 kubelet[2613]: I1213 13:28:02.797214 2613 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:28:02.798671 kubelet[2613]: I1213 13:28:02.797242 2613 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:28:02.798671 kubelet[2613]: I1213 13:28:02.797334 2613 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:28:02.798671 kubelet[2613]: I1213 13:28:02.797346 2613 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:28:02.798671 kubelet[2613]: I1213 13:28:02.797366 2613 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:28:02.798671 kubelet[2613]: I1213 13:28:02.797376 2613 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:28:02.798671 kubelet[2613]: I1213 13:28:02.798337 2613 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:28:02.798671 kubelet[2613]: I1213 13:28:02.798532 2613 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:28:02.801477 kubelet[2613]: I1213 13:28:02.798904 2613 server.go:1256] "Started kubelet" Dec 13 13:28:02.801477 kubelet[2613]: I1213 13:28:02.799868 2613 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:28:02.801477 kubelet[2613]: I1213 13:28:02.800024 2613 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:28:02.801477 kubelet[2613]: I1213 13:28:02.800072 2613 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:28:02.801477 kubelet[2613]: I1213 
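The NodeConfig dump above carries the kubelet's hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, and imagefs.available < 15%. The Go sketch below models those four signals and evaluates them against sample numbers; the struct and field names are illustrative, not the kubelet's actual types.

```go
package main

import "fmt"

// threshold loosely mirrors one entry of HardEvictionThresholds from the
// NodeConfig dump above; Percentage applies when QuantityMi is zero.
type threshold struct {
	Signal     string
	QuantityMi int64   // e.g. 100Mi for memory.available
	Percentage float64 // e.g. 0.1 for nodefs.available
}

var hardEviction = []threshold{
	{Signal: "memory.available", QuantityMi: 100},
	{Signal: "nodefs.available", Percentage: 0.10},
	{Signal: "nodefs.inodesFree", Percentage: 0.05},
	{Signal: "imagefs.available", Percentage: 0.15},
}

// breached reports whether availability has fallen under the threshold,
// given current availability and capacity for the signal.
func breached(t threshold, availMi, capacityMi int64) bool {
	if t.QuantityMi > 0 {
		return availMi < t.QuantityMi
	}
	return float64(availMi) < t.Percentage*float64(capacityMi)
}

func main() {
	// Sample numbers for illustration only, not taken from the log.
	fmt.Println(breached(hardEviction[0], 80, 2048))    // true: under 100Mi
	fmt.Println(breached(hardEviction[1], 3000, 20480)) // false: ~14.6% free
}
```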
Dec 13 13:28:02.801477 kubelet[2613]: I1213 13:28:02.800120 2613 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 13:28:02.801477 kubelet[2613]: I1213 13:28:02.800861 2613 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 13:28:02.801477 kubelet[2613]: I1213 13:28:02.801274 2613 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 13:28:02.801477 kubelet[2613]: I1213 13:28:02.801379 2613 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 13:28:02.801689 kubelet[2613]: I1213 13:28:02.801526 2613 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 13:28:02.812996 kubelet[2613]: E1213 13:28:02.812963 2613 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 13:28:02.817853 sudo[2630]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 13:28:02.818130 sudo[2630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 13 13:28:02.818913 kubelet[2613]: I1213 13:28:02.818744 2613 factory.go:221] Registration of the systemd container factory successfully
Dec 13 13:28:02.818913 kubelet[2613]: I1213 13:28:02.818830 2613 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 13:28:02.825146 kubelet[2613]: I1213 13:28:02.824527 2613 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 13:28:02.825786 kubelet[2613]: I1213 13:28:02.825763 2613 factory.go:221] Registration of the containerd container factory successfully
Dec 13 13:28:02.826080 kubelet[2613]: I1213 13:28:02.826048 2613 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 13:28:02.826080 kubelet[2613]: I1213 13:28:02.826078 2613 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 13:28:02.826150 kubelet[2613]: I1213 13:28:02.826097 2613 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 13:28:02.826150 kubelet[2613]: E1213 13:28:02.826146 2613 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 13:28:02.858071 kubelet[2613]: I1213 13:28:02.858047 2613 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 13:28:02.858071 kubelet[2613]: I1213 13:28:02.858067 2613 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 13:28:02.858199 kubelet[2613]: I1213 13:28:02.858084 2613 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:28:02.858250 kubelet[2613]: I1213 13:28:02.858234 2613 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 13:28:02.858278 kubelet[2613]: I1213 13:28:02.858259 2613 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 13:28:02.858278 kubelet[2613]: I1213 13:28:02.858267 2613 policy_none.go:49] "None policy: Start"
Dec 13 13:28:02.858949 kubelet[2613]: I1213 13:28:02.858850 2613 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 13:28:02.858949 kubelet[2613]: I1213 13:28:02.858874 2613 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 13:28:02.859410 kubelet[2613]: I1213 13:28:02.859276 2613 state_mem.go:75] "Updated machine memory state"
Dec 13 13:28:02.867178 kubelet[2613]: I1213 13:28:02.867156 2613 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 13:28:02.867880 kubelet[2613]: I1213 13:28:02.867365 2613 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 13:28:02.905612 kubelet[2613]: I1213 13:28:02.905399 2613 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 13:28:02.912512 kubelet[2613]: I1213 13:28:02.912488 2613 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Dec 13 13:28:02.912610 kubelet[2613]: I1213 13:28:02.912562 2613 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 13:28:02.926764 kubelet[2613]: I1213 13:28:02.926733 2613 topology_manager.go:215] "Topology Admit Handler" podUID="475c5447ac86147a1ce7bb90158e2860" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 13:28:02.926881 kubelet[2613]: I1213 13:28:02.926816 2613 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 13:28:02.926881 kubelet[2613]: I1213 13:28:02.926866 2613 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 13:28:02.932386 kubelet[2613]: E1213 13:28:02.932297 2613 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 13 13:28:02.932386 kubelet[2613]: E1213 13:28:02.932333 2613 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:28:03.102724 kubelet[2613]: I1213 13:28:03.102614 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:28:03.102724 kubelet[2613]: I1213 13:28:03.102662 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:28:03.102724 kubelet[2613]: I1213 13:28:03.102683 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:28:03.102724 kubelet[2613]: I1213 13:28:03.102715 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:28:03.102890 kubelet[2613]: I1213 13:28:03.102741 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:28:03.102890 kubelet[2613]: I1213 13:28:03.102763 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 13:28:03.102890 kubelet[2613]: I1213 13:28:03.102782 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/475c5447ac86147a1ce7bb90158e2860-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"475c5447ac86147a1ce7bb90158e2860\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:28:03.102890 kubelet[2613]: I1213 13:28:03.102801 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/475c5447ac86147a1ce7bb90158e2860-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"475c5447ac86147a1ce7bb90158e2860\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:28:03.102890 kubelet[2613]: I1213 13:28:03.102823 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/475c5447ac86147a1ce7bb90158e2860-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"475c5447ac86147a1ce7bb90158e2860\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:28:03.232184 kubelet[2613]: E1213 13:28:03.232149 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:03.232923 kubelet[2613]: E1213 13:28:03.232808 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:03.232923 kubelet[2613]: E1213 13:28:03.232813 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:03.265974 sudo[2630]: pam_unix(sudo:session): session closed for user root
Dec 13 13:28:03.798808 kubelet[2613]: I1213 13:28:03.798759 2613 apiserver.go:52] "Watching apiserver"
Dec 13 13:28:03.802182 kubelet[2613]: I1213 13:28:03.802153 2613 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 13:28:03.845814 kubelet[2613]: E1213 13:28:03.845042 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:03.845814 kubelet[2613]: E1213 13:28:03.845107 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:03.848364 kubelet[2613]: E1213 13:28:03.848325 2613 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 13 13:28:03.848751 kubelet[2613]: E1213 13:28:03.848734 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:03.861479 kubelet[2613]: I1213 13:28:03.861421 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.861372445 podStartE2EDuration="1.861372445s" podCreationTimestamp="2024-12-13 13:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:03.86125172 +0000 UTC m=+1.122618443" watchObservedRunningTime="2024-12-13 13:28:03.861372445 +0000 UTC m=+1.122739168"
Dec 13 13:28:03.878471 kubelet[2613]: I1213 13:28:03.876124 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.876084847 podStartE2EDuration="1.876084847s" podCreationTimestamp="2024-12-13 13:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:03.867753743 +0000 UTC m=+1.129120466" watchObservedRunningTime="2024-12-13 13:28:03.876084847 +0000 UTC m=+1.137451530"
Dec 13 13:28:03.893992 kubelet[2613]: I1213 13:28:03.893940 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.893901286 podStartE2EDuration="2.893901286s" podCreationTimestamp="2024-12-13 13:28:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:03.876392242 +0000 UTC m=+1.137758965" watchObservedRunningTime="2024-12-13 13:28:03.893901286 +0000 UTC m=+1.155268009"
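The podStartSLOduration values above appear to be simple differences between the running timestamp and podCreationTimestamp: for kube-scheduler-localhost, 13:28:03.861372445 minus 13:28:02 gives exactly the logged 1.861372445s (creation timestamps are stored at whole-second precision). A quick check in Go:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// podCreationTimestamp for kube-scheduler-localhost, from the log.
	created := time.Date(2024, 12, 13, 13, 28, 2, 0, time.UTC)
	// watchObservedRunningTime for the same pod, from the log.
	running := time.Date(2024, 12, 13, 13, 28, 3, 861372445, time.UTC)
	fmt.Println(running.Sub(created)) // 1.861372445s, the logged podStartSLOduration
}
```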
Dec 13 13:28:04.846131 kubelet[2613]: E1213 13:28:04.846077 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:05.727340 sudo[1622]: pam_unix(sudo:session): session closed for user root
Dec 13 13:28:05.728738 sshd[1621]: Connection closed by 10.0.0.1 port 58488
Dec 13 13:28:05.729216 sshd-session[1619]: pam_unix(sshd:session): session closed for user core
Dec 13 13:28:05.732358 systemd[1]: sshd@6-10.0.0.129:22-10.0.0.1:58488.service: Deactivated successfully.
Dec 13 13:28:05.733841 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 13:28:05.734570 systemd[1]: session-7.scope: Consumed 7.183s CPU time, 191.2M memory peak, 0B memory swap peak.
Dec 13 13:28:05.735188 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit.
Dec 13 13:28:05.736470 systemd-logind[1423]: Removed session 7.
Dec 13 13:28:07.202342 kubelet[2613]: E1213 13:28:07.202297 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:07.849868 kubelet[2613]: E1213 13:28:07.849766 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:10.696225 kubelet[2613]: E1213 13:28:10.696197 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:10.853371 kubelet[2613]: E1213 13:28:10.853054 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:11.897183 kubelet[2613]: E1213 13:28:11.897106 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:12.855295 kubelet[2613]: E1213 13:28:12.855252 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:16.111919 update_engine[1425]: I20241213 13:28:16.111840 1425 update_attempter.cc:509] Updating boot flags...
Dec 13 13:28:16.137563 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2694)
Dec 13 13:28:16.162479 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2693)
Dec 13 13:28:17.118200 kubelet[2613]: I1213 13:28:17.118163 2613 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 13:28:17.126847 containerd[1446]: time="2024-12-13T13:28:17.126736002Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
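The runtime config update above hands containerd the pod CIDR 192.168.0.0/24, and the next kubelet entry records the same transition from an empty originalPodCIDR. For reference, what that range provides can be inspected with the stdlib (the 254-usable-address figure is standard /24 arithmetic, not something the log states):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("192.168.0.0/24") // pod CIDR from the log
	fmt.Println(prefix.Addr(), prefix.Bits())         // 192.168.0.0 24
	// A /24 leaves 8 host bits: 256 addresses, 254 usable pod IPs once
	// the network and broadcast addresses are excluded.
	fmt.Println(1<<(32-prefix.Bits()) - 2) // 254
}
```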
Dec 13 13:28:17.127239 kubelet[2613]: I1213 13:28:17.126997 2613 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 13:28:18.021779 kubelet[2613]: I1213 13:28:18.019726 2613 topology_manager.go:215] "Topology Admit Handler" podUID="3e6a3350-ee08-4a34-9bc9-2bc7facbccb8" podNamespace="kube-system" podName="kube-proxy-rdgph"
Dec 13 13:28:18.022925 kubelet[2613]: I1213 13:28:18.022898 2613 topology_manager.go:215] "Topology Admit Handler" podUID="8b4cc30c-e86e-42b9-a44a-84d99adf757f" podNamespace="kube-system" podName="cilium-6mb22"
Dec 13 13:28:18.034668 systemd[1]: Created slice kubepods-besteffort-pod3e6a3350_ee08_4a34_9bc9_2bc7facbccb8.slice - libcontainer container kubepods-besteffort-pod3e6a3350_ee08_4a34_9bc9_2bc7facbccb8.slice.
Dec 13 13:28:18.055125 systemd[1]: Created slice kubepods-burstable-pod8b4cc30c_e86e_42b9_a44a_84d99adf757f.slice - libcontainer container kubepods-burstable-pod8b4cc30c_e86e_42b9_a44a_84d99adf757f.slice.
Dec 13 13:28:18.108253 kubelet[2613]: I1213 13:28:18.108193 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8f6x\" (UniqueName: \"kubernetes.io/projected/8b4cc30c-e86e-42b9-a44a-84d99adf757f-kube-api-access-s8f6x\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108253 kubelet[2613]: I1213 13:28:18.108240 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-etc-cni-netd\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108253 kubelet[2613]: I1213 13:28:18.108264 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-host-proc-sys-kernel\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108607 kubelet[2613]: I1213 13:28:18.108285 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-hostproc\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108607 kubelet[2613]: I1213 13:28:18.108350 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8b4cc30c-e86e-42b9-a44a-84d99adf757f-clustermesh-secrets\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108607 kubelet[2613]: I1213 13:28:18.108424 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-bpf-maps\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108607 kubelet[2613]: I1213 13:28:18.108464 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b4cc30c-e86e-42b9-a44a-84d99adf757f-hubble-tls\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108607 kubelet[2613]: I1213 13:28:18.108493 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e6a3350-ee08-4a34-9bc9-2bc7facbccb8-kube-proxy\") pod \"kube-proxy-rdgph\" (UID: \"3e6a3350-ee08-4a34-9bc9-2bc7facbccb8\") " pod="kube-system/kube-proxy-rdgph"
Dec 13 13:28:18.108607 kubelet[2613]: I1213 13:28:18.108515 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cilium-cgroup\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108746 kubelet[2613]: I1213 13:28:18.108534 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cilium-config-path\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108746 kubelet[2613]: I1213 13:28:18.108553 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e6a3350-ee08-4a34-9bc9-2bc7facbccb8-lib-modules\") pod \"kube-proxy-rdgph\" (UID: \"3e6a3350-ee08-4a34-9bc9-2bc7facbccb8\") " pod="kube-system/kube-proxy-rdgph"
Dec 13 13:28:18.108746 kubelet[2613]: I1213 13:28:18.108588 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cilium-run\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108746 kubelet[2613]: I1213 13:28:18.108617 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cni-path\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108746 kubelet[2613]: I1213 13:28:18.108639 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-host-proc-sys-net\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108746 kubelet[2613]: I1213 13:28:18.108667 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-lib-modules\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108868 kubelet[2613]: I1213 13:28:18.108691 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-xtables-lock\") pod \"cilium-6mb22\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " pod="kube-system/cilium-6mb22"
Dec 13 13:28:18.108868 kubelet[2613]: I1213 13:28:18.108725 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e6a3350-ee08-4a34-9bc9-2bc7facbccb8-xtables-lock\") pod \"kube-proxy-rdgph\" (UID: \"3e6a3350-ee08-4a34-9bc9-2bc7facbccb8\") " pod="kube-system/kube-proxy-rdgph"
Dec 13 13:28:18.108868 kubelet[2613]: I1213 13:28:18.108788 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc5p7\" (UniqueName: \"kubernetes.io/projected/3e6a3350-ee08-4a34-9bc9-2bc7facbccb8-kube-api-access-pc5p7\") pod \"kube-proxy-rdgph\" (UID: \"3e6a3350-ee08-4a34-9bc9-2bc7facbccb8\") " pod="kube-system/kube-proxy-rdgph"
Dec 13 13:28:18.163535 kubelet[2613]: I1213 13:28:18.163464 2613 topology_manager.go:215] "Topology Admit Handler" podUID="5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08" podNamespace="kube-system" podName="cilium-operator-5cc964979-djmz4"
Dec 13 13:28:18.175690 systemd[1]: Created slice kubepods-besteffort-pod5c9eb5d5_05bc_49fb_a3f7_6b4b98e2fd08.slice - libcontainer container kubepods-besteffort-pod5c9eb5d5_05bc_49fb_a3f7_6b4b98e2fd08.slice.
Dec 13 13:28:18.209865 kubelet[2613]: I1213 13:28:18.209686 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08-cilium-config-path\") pod \"cilium-operator-5cc964979-djmz4\" (UID: \"5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08\") " pod="kube-system/cilium-operator-5cc964979-djmz4"
Dec 13 13:28:18.209865 kubelet[2613]: I1213 13:28:18.209785 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztc8t\" (UniqueName: \"kubernetes.io/projected/5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08-kube-api-access-ztc8t\") pod \"cilium-operator-5cc964979-djmz4\" (UID: \"5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08\") " pod="kube-system/cilium-operator-5cc964979-djmz4"
Dec 13 13:28:18.353908 kubelet[2613]: E1213 13:28:18.353593 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:18.354923 containerd[1446]: time="2024-12-13T13:28:18.354142992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rdgph,Uid:3e6a3350-ee08-4a34-9bc9-2bc7facbccb8,Namespace:kube-system,Attempt:0,}"
Dec 13 13:28:18.359499 kubelet[2613]: E1213 13:28:18.359437 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:18.360583 containerd[1446]: time="2024-12-13T13:28:18.360533837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mb22,Uid:8b4cc30c-e86e-42b9-a44a-84d99adf757f,Namespace:kube-system,Attempt:0,}"
Dec 13 13:28:18.382352 containerd[1446]: time="2024-12-13T13:28:18.381946180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:28:18.382352 containerd[1446]: time="2024-12-13T13:28:18.382022112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:28:18.382352 containerd[1446]: time="2024-12-13T13:28:18.382034074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:18.382352 containerd[1446]: time="2024-12-13T13:28:18.382131610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:18.384042 containerd[1446]: time="2024-12-13T13:28:18.383931385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:28:18.384042 containerd[1446]: time="2024-12-13T13:28:18.383979312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:28:18.384042 containerd[1446]: time="2024-12-13T13:28:18.383997555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:18.384189 containerd[1446]: time="2024-12-13T13:28:18.384081969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:18.405636 systemd[1]: Started cri-containerd-bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256.scope - libcontainer container bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256.
Dec 13 13:28:18.406967 systemd[1]: Started cri-containerd-c4c8c3500cd37a7c788f1487f946437ea6abb3cc7cb1077288bf0a0811278898.scope - libcontainer container c4c8c3500cd37a7c788f1487f946437ea6abb3cc7cb1077288bf0a0811278898.
Dec 13 13:28:18.427791 containerd[1446]: time="2024-12-13T13:28:18.427723548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mb22,Uid:8b4cc30c-e86e-42b9-a44a-84d99adf757f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\""
Dec 13 13:28:18.428736 kubelet[2613]: E1213 13:28:18.428425 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:18.433928 containerd[1446]: time="2024-12-13T13:28:18.433828907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rdgph,Uid:3e6a3350-ee08-4a34-9bc9-2bc7facbccb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4c8c3500cd37a7c788f1487f946437ea6abb3cc7cb1077288bf0a0811278898\""
Dec 13 13:28:18.434444 kubelet[2613]: E1213 13:28:18.434407 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:18.434895 containerd[1446]: time="2024-12-13T13:28:18.434838192Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 13:28:18.437303 containerd[1446]: time="2024-12-13T13:28:18.437263789Z" level=info msg="CreateContainer within sandbox \"c4c8c3500cd37a7c788f1487f946437ea6abb3cc7cb1077288bf0a0811278898\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 13:28:18.467925 containerd[1446]: time="2024-12-13T13:28:18.467877717Z" level=info msg="CreateContainer within sandbox \"c4c8c3500cd37a7c788f1487f946437ea6abb3cc7cb1077288bf0a0811278898\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"57e695d21a1f3a7d48b83578ca312c778ebb588f629666ef25ea8ff588f4f2ee\""
Dec 13 13:28:18.468763 containerd[1446]: time="2024-12-13T13:28:18.468736017Z" level=info msg="StartContainer for \"57e695d21a1f3a7d48b83578ca312c778ebb588f629666ef25ea8ff588f4f2ee\""
Dec 13 13:28:18.479641 kubelet[2613]: E1213 13:28:18.479616 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:18.480151 containerd[1446]: time="2024-12-13T13:28:18.480118719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-djmz4,Uid:5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08,Namespace:kube-system,Attempt:0,}"
Dec 13 13:28:18.491612 systemd[1]: Started cri-containerd-57e695d21a1f3a7d48b83578ca312c778ebb588f629666ef25ea8ff588f4f2ee.scope - libcontainer container 57e695d21a1f3a7d48b83578ca312c778ebb588f629666ef25ea8ff588f4f2ee.
Dec 13 13:28:18.503957 containerd[1446]: time="2024-12-13T13:28:18.503576316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:28:18.503957 containerd[1446]: time="2024-12-13T13:28:18.503650528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:28:18.503957 containerd[1446]: time="2024-12-13T13:28:18.503666931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:18.503957 containerd[1446]: time="2024-12-13T13:28:18.503773389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:18.526610 systemd[1]: Started cri-containerd-814764b30af197be82bf68a3b1963a22a683cab96a6777bc8c6afbe05f553329.scope - libcontainer container 814764b30af197be82bf68a3b1963a22a683cab96a6777bc8c6afbe05f553329.
Dec 13 13:28:18.533211 containerd[1446]: time="2024-12-13T13:28:18.532593703Z" level=info msg="StartContainer for \"57e695d21a1f3a7d48b83578ca312c778ebb588f629666ef25ea8ff588f4f2ee\" returns successfully"
Dec 13 13:28:18.557521 containerd[1446]: time="2024-12-13T13:28:18.557476253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-djmz4,Uid:5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08,Namespace:kube-system,Attempt:0,} returns sandbox id \"814764b30af197be82bf68a3b1963a22a683cab96a6777bc8c6afbe05f553329\""
Dec 13 13:28:18.558496 kubelet[2613]: E1213 13:28:18.558404 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:18.871702 kubelet[2613]: E1213 13:28:18.871671 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:22.820651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount743457901.mount: Deactivated successfully.
Dec 13 13:28:24.026745 containerd[1446]: time="2024-12-13T13:28:24.026696894Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:24.027228 containerd[1446]: time="2024-12-13T13:28:24.027179833Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651594"
Dec 13 13:28:24.027883 containerd[1446]: time="2024-12-13T13:28:24.027858677Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:24.029344 containerd[1446]: time="2024-12-13T13:28:24.029281413Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.594413096s"
Dec 13 13:28:24.029344 containerd[1446]: time="2024-12-13T13:28:24.029311336Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Dec 13 13:28:24.034187 containerd[1446]: time="2024-12-13T13:28:24.033999875Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 13:28:24.035522 containerd[1446]: time="2024-12-13T13:28:24.035433932Z" level=info msg="CreateContainer within sandbox \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 13:28:24.049314 containerd[1446]: time="2024-12-13T13:28:24.049272000Z" level=info msg="CreateContainer within sandbox \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601\""
Dec 13 13:28:24.049866 containerd[1446]: time="2024-12-13T13:28:24.049842751Z" level=info msg="StartContainer for \"2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601\""
Dec 13 13:28:24.087669 systemd[1]: Started cri-containerd-2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601.scope - libcontainer container 2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601.
Dec 13 13:28:24.149433 containerd[1446]: time="2024-12-13T13:28:24.149369958Z" level=info msg="StartContainer for \"2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601\" returns successfully"
Dec 13 13:28:24.152480 systemd[1]: cri-containerd-2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601.scope: Deactivated successfully.
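The pull above reports 157,651,594 bytes read in 5.594413096s for the cilium image, which works out to roughly 28 MB/s from quay.io. The arithmetic, using the two figures from the log:

```go
package main

import "fmt"

func main() {
	const bytesRead = 157651594 // "bytes read" from the log
	const seconds = 5.594413096 // pull duration from the log
	mbps := bytesRead / seconds / 1e6
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", mbps, bytesRead/seconds/(1<<20))
	// Prints approximately: 28.2 MB/s (26.9 MiB/s)
}
```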
Dec 13 13:28:24.239042 containerd[1446]: time="2024-12-13T13:28:24.233942998Z" level=info msg="shim disconnected" id=2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601 namespace=k8s.io Dec 13 13:28:24.239042 containerd[1446]: time="2024-12-13T13:28:24.238918892Z" level=warning msg="cleaning up after shim disconnected" id=2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601 namespace=k8s.io Dec 13 13:28:24.239042 containerd[1446]: time="2024-12-13T13:28:24.238930374Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:28:24.894315 kubelet[2613]: E1213 13:28:24.894219 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:24.897949 containerd[1446]: time="2024-12-13T13:28:24.897901924Z" level=info msg="CreateContainer within sandbox \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 13:28:24.910678 containerd[1446]: time="2024-12-13T13:28:24.910608493Z" level=info msg="CreateContainer within sandbox \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858\"" Dec 13 13:28:24.911167 containerd[1446]: time="2024-12-13T13:28:24.911105314Z" level=info msg="StartContainer for \"0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858\"" Dec 13 13:28:24.915137 kubelet[2613]: I1213 13:28:24.915029 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rdgph" podStartSLOduration=6.914966951 podStartE2EDuration="6.914966951s" podCreationTimestamp="2024-12-13 13:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:18.882511783 +0000 UTC m=+16.143878506" watchObservedRunningTime="2024-12-13 13:28:24.914966951 +0000 UTC m=+22.176333674" Dec 13 13:28:24.942396 systemd[1]: Started cri-containerd-0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858.scope - libcontainer container 0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858. Dec 13 13:28:24.962316 containerd[1446]: time="2024-12-13T13:28:24.962220864Z" level=info msg="StartContainer for \"0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858\" returns successfully" Dec 13 13:28:24.977724 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:28:24.977928 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:28:24.978002 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:28:24.987194 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:28:24.987384 systemd[1]: cri-containerd-0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858.scope: Deactivated successfully. 
Dec 13 13:28:25.003683 containerd[1446]: time="2024-12-13T13:28:25.003621244Z" level=info msg="shim disconnected" id=0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858 namespace=k8s.io Dec 13 13:28:25.003855 containerd[1446]: time="2024-12-13T13:28:25.003688691Z" level=warning msg="cleaning up after shim disconnected" id=0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858 namespace=k8s.io Dec 13 13:28:25.003855 containerd[1446]: time="2024-12-13T13:28:25.003698053Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:28:25.011696 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:28:25.014164 containerd[1446]: time="2024-12-13T13:28:25.014117964Z" level=warning msg="cleanup warnings time=\"2024-12-13T13:28:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 13:28:25.046221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601-rootfs.mount: Deactivated successfully. Dec 13 13:28:25.670821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3592414414.mount: Deactivated successfully. Dec 13 13:28:25.894991 kubelet[2613]: E1213 13:28:25.894944 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:25.900080 containerd[1446]: time="2024-12-13T13:28:25.899968463Z" level=info msg="CreateContainer within sandbox \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 13:28:25.920158 containerd[1446]: time="2024-12-13T13:28:25.920114523Z" level=info msg="CreateContainer within sandbox \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75\"" Dec 13 13:28:25.920975 containerd[1446]: time="2024-12-13T13:28:25.920720315Z" level=info msg="StartContainer for \"cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75\"" Dec 13 13:28:25.944632 systemd[1]: Started cri-containerd-cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75.scope - libcontainer container cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75. Dec 13 13:28:25.976768 containerd[1446]: time="2024-12-13T13:28:25.976705569Z" level=info msg="StartContainer for \"cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75\" returns successfully" Dec 13 13:28:25.988734 systemd[1]: cri-containerd-cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75.scope: Deactivated successfully. 
Dec 13 13:28:26.067531 containerd[1446]: time="2024-12-13T13:28:26.067288544Z" level=info msg="shim disconnected" id=cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75 namespace=k8s.io Dec 13 13:28:26.067531 containerd[1446]: time="2024-12-13T13:28:26.067343670Z" level=warning msg="cleaning up after shim disconnected" id=cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75 namespace=k8s.io Dec 13 13:28:26.067531 containerd[1446]: time="2024-12-13T13:28:26.067351871Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:28:26.111595 containerd[1446]: time="2024-12-13T13:28:26.111542992Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:26.112086 containerd[1446]: time="2024-12-13T13:28:26.112040249Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138318" Dec 13 13:28:26.112962 containerd[1446]: time="2024-12-13T13:28:26.112937470Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:26.114439 containerd[1446]: time="2024-12-13T13:28:26.114348310Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.080311151s" Dec 13 13:28:26.114439 containerd[1446]: time="2024-12-13T13:28:26.114380314Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 13:28:26.118237 containerd[1446]: time="2024-12-13T13:28:26.118066931Z" level=info msg="CreateContainer within sandbox \"814764b30af197be82bf68a3b1963a22a683cab96a6777bc8c6afbe05f553329\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 13:28:26.126562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3358800699.mount: Deactivated successfully. Dec 13 13:28:26.127273 containerd[1446]: time="2024-12-13T13:28:26.127241289Z" level=info msg="CreateContainer within sandbox \"814764b30af197be82bf68a3b1963a22a683cab96a6777bc8c6afbe05f553329\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\"" Dec 13 13:28:26.128056 containerd[1446]: time="2024-12-13T13:28:26.127826155Z" level=info msg="StartContainer for \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\"" Dec 13 13:28:26.152610 systemd[1]: Started cri-containerd-90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398.scope - libcontainer container 90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398. 
Dec 13 13:28:26.172594 containerd[1446]: time="2024-12-13T13:28:26.172491330Z" level=info msg="StartContainer for \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\" returns successfully" Dec 13 13:28:26.903316 kubelet[2613]: E1213 13:28:26.902821 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:26.916355 kubelet[2613]: E1213 13:28:26.916312 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:26.916392 containerd[1446]: time="2024-12-13T13:28:26.915730445Z" level=info msg="CreateContainer within sandbox \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:28:26.929928 containerd[1446]: time="2024-12-13T13:28:26.929818719Z" level=info msg="CreateContainer within sandbox \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17\"" Dec 13 13:28:26.930471 containerd[1446]: time="2024-12-13T13:28:26.930429788Z" level=info msg="StartContainer for \"12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17\"" Dec 13 13:28:26.934107 kubelet[2613]: I1213 13:28:26.933751 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-djmz4" podStartSLOduration=1.378919704 podStartE2EDuration="8.933695598s" podCreationTimestamp="2024-12-13 13:28:18 +0000 UTC" firstStartedPulling="2024-12-13 13:28:18.559976222 +0000 UTC m=+15.821342905" lastFinishedPulling="2024-12-13 13:28:26.114752076 +0000 UTC m=+23.376118799" observedRunningTime="2024-12-13 13:28:26.933287872 +0000 UTC m=+24.194654595" watchObservedRunningTime="2024-12-13 13:28:26.933695598 +0000 UTC m=+24.195062321" Dec 13 13:28:26.969650 systemd[1]: Started cri-containerd-12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17.scope - libcontainer container 12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17. Dec 13 13:28:26.989044 systemd[1]: cri-containerd-12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17.scope: Deactivated successfully. 
Dec 13 13:28:26.990543 containerd[1446]: time="2024-12-13T13:28:26.990509868Z" level=info msg="StartContainer for \"12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17\" returns successfully" Dec 13 13:28:27.017507 containerd[1446]: time="2024-12-13T13:28:27.016315679Z" level=info msg="shim disconnected" id=12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17 namespace=k8s.io Dec 13 13:28:27.017507 containerd[1446]: time="2024-12-13T13:28:27.016386647Z" level=warning msg="cleaning up after shim disconnected" id=12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17 namespace=k8s.io Dec 13 13:28:27.017507 containerd[1446]: time="2024-12-13T13:28:27.016396248Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:28:27.923635 kubelet[2613]: E1213 13:28:27.922879 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:27.923635 kubelet[2613]: E1213 13:28:27.923267 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:27.925712 containerd[1446]: time="2024-12-13T13:28:27.925625390Z" level=info msg="CreateContainer within sandbox \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 13:28:27.946853 containerd[1446]: time="2024-12-13T13:28:27.946803688Z" level=info msg="CreateContainer within sandbox \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\"" Dec 13 13:28:27.947480 containerd[1446]: time="2024-12-13T13:28:27.947438036Z" level=info msg="StartContainer for \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\"" Dec 13 13:28:27.978636 systemd[1]: Started cri-containerd-3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2.scope - libcontainer container 3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2. Dec 13 13:28:28.001910 containerd[1446]: time="2024-12-13T13:28:28.001827892Z" level=info msg="StartContainer for \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\" returns successfully" Dec 13 13:28:28.097403 kubelet[2613]: I1213 13:28:28.097372 2613 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:28:28.121853 kubelet[2613]: I1213 13:28:28.121795 2613 topology_manager.go:215] "Topology Admit Handler" podUID="566b63f4-0edd-43f3-bab2-8d114d1e185a" podNamespace="kube-system" podName="coredns-76f75df574-6sdzx" Dec 13 13:28:28.124375 kubelet[2613]: I1213 13:28:28.124333 2613 topology_manager.go:215] "Topology Admit Handler" podUID="42486364-45bd-4d33-8116-51af068948ad" podNamespace="kube-system" podName="coredns-76f75df574-fz4tr" Dec 13 13:28:28.133231 systemd[1]: Created slice kubepods-burstable-pod566b63f4_0edd_43f3_bab2_8d114d1e185a.slice - libcontainer container kubepods-burstable-pod566b63f4_0edd_43f3_bab2_8d114d1e185a.slice. Dec 13 13:28:28.139753 systemd[1]: Created slice kubepods-burstable-pod42486364_45bd_4d33_8116_51af068948ad.slice - libcontainer container kubepods-burstable-pod42486364_45bd_4d33_8116_51af068948ad.slice. 
Dec 13 13:28:28.178715 kubelet[2613]: I1213 13:28:28.178633 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsr7s\" (UniqueName: \"kubernetes.io/projected/566b63f4-0edd-43f3-bab2-8d114d1e185a-kube-api-access-rsr7s\") pod \"coredns-76f75df574-6sdzx\" (UID: \"566b63f4-0edd-43f3-bab2-8d114d1e185a\") " pod="kube-system/coredns-76f75df574-6sdzx" Dec 13 13:28:28.179069 kubelet[2613]: I1213 13:28:28.178845 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42486364-45bd-4d33-8116-51af068948ad-config-volume\") pod \"coredns-76f75df574-fz4tr\" (UID: \"42486364-45bd-4d33-8116-51af068948ad\") " pod="kube-system/coredns-76f75df574-fz4tr" Dec 13 13:28:28.179215 kubelet[2613]: I1213 13:28:28.179147 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/566b63f4-0edd-43f3-bab2-8d114d1e185a-config-volume\") pod \"coredns-76f75df574-6sdzx\" (UID: \"566b63f4-0edd-43f3-bab2-8d114d1e185a\") " pod="kube-system/coredns-76f75df574-6sdzx" Dec 13 13:28:28.179215 kubelet[2613]: I1213 13:28:28.179179 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmpgd\" (UniqueName: \"kubernetes.io/projected/42486364-45bd-4d33-8116-51af068948ad-kube-api-access-bmpgd\") pod \"coredns-76f75df574-fz4tr\" (UID: \"42486364-45bd-4d33-8116-51af068948ad\") " pod="kube-system/coredns-76f75df574-fz4tr" Dec 13 13:28:28.439752 kubelet[2613]: E1213 13:28:28.438120 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:28.443147 kubelet[2613]: E1213 13:28:28.442897 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:28.443507 containerd[1446]: time="2024-12-13T13:28:28.443471765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fz4tr,Uid:42486364-45bd-4d33-8116-51af068948ad,Namespace:kube-system,Attempt:0,}" Dec 13 13:28:28.454674 containerd[1446]: time="2024-12-13T13:28:28.454614406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6sdzx,Uid:566b63f4-0edd-43f3-bab2-8d114d1e185a,Namespace:kube-system,Attempt:0,}" Dec 13 13:28:28.927529 kubelet[2613]: E1213 13:28:28.927291 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:28.940138 kubelet[2613]: I1213 13:28:28.940092 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6mb22" podStartSLOduration=5.34186432 podStartE2EDuration="10.940046319s" podCreationTimestamp="2024-12-13 13:28:18 +0000 UTC" firstStartedPulling="2024-12-13 13:28:18.432923679 +0000 UTC m=+15.694290362" lastFinishedPulling="2024-12-13 13:28:24.031105678 +0000 UTC m=+21.292472361" observedRunningTime="2024-12-13 13:28:28.938730622 +0000 UTC m=+26.200097345" watchObservedRunningTime="2024-12-13 13:28:28.940046319 +0000 UTC m=+26.201413042" Dec 13 13:28:29.378977 systemd[1]: Started sshd@7-10.0.0.129:22-10.0.0.1:39960.service - OpenSSH per-connection server daemon (10.0.0.1:39960). 
Dec 13 13:28:29.423675 sshd[3460]: Accepted publickey for core from 10.0.0.1 port 39960 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:28:29.424848 sshd-session[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:29.432990 systemd-logind[1423]: New session 8 of user core. Dec 13 13:28:29.442580 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 13:28:29.560697 sshd[3462]: Connection closed by 10.0.0.1 port 39960 Dec 13 13:28:29.561006 sshd-session[3460]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:29.564230 systemd[1]: sshd@7-10.0.0.129:22-10.0.0.1:39960.service: Deactivated successfully. Dec 13 13:28:29.565895 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:28:29.566438 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:28:29.567183 systemd-logind[1423]: Removed session 8. Dec 13 13:28:29.928823 kubelet[2613]: E1213 13:28:29.928791 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:30.146993 systemd-networkd[1388]: cilium_host: Link UP Dec 13 13:28:30.147164 systemd-networkd[1388]: cilium_net: Link UP Dec 13 13:28:30.147307 systemd-networkd[1388]: cilium_net: Gained carrier Dec 13 13:28:30.147442 systemd-networkd[1388]: cilium_host: Gained carrier Dec 13 13:28:30.228582 systemd-networkd[1388]: cilium_vxlan: Link UP Dec 13 13:28:30.228589 systemd-networkd[1388]: cilium_vxlan: Gained carrier Dec 13 13:28:30.432642 systemd-networkd[1388]: cilium_host: Gained IPv6LL Dec 13 13:28:30.502537 kernel: NET: Registered PF_ALG protocol family Dec 13 13:28:30.930016 kubelet[2613]: E1213 13:28:30.929930 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:30.952622 systemd-networkd[1388]: cilium_net: Gained IPv6LL Dec 13 13:28:31.049093 systemd-networkd[1388]: lxc_health: Link UP Dec 13 13:28:31.057540 systemd-networkd[1388]: lxc_health: Gained carrier Dec 13 13:28:31.555651 systemd-networkd[1388]: lxc343f3d149491: Link UP Dec 13 13:28:31.581469 kernel: eth0: renamed from tmp02adc Dec 13 13:28:31.586419 systemd-networkd[1388]: lxc6f3f8201bbad: Link UP Dec 13 13:28:31.587586 kernel: eth0: renamed from tmpc68fb Dec 13 13:28:31.591664 systemd-networkd[1388]: lxc343f3d149491: Gained carrier Dec 13 13:28:31.592631 systemd-networkd[1388]: lxc6f3f8201bbad: Gained carrier Dec 13 13:28:32.040592 systemd-networkd[1388]: cilium_vxlan: Gained IPv6LL Dec 13 13:28:32.367113 kubelet[2613]: E1213 13:28:32.366851 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:32.808587 systemd-networkd[1388]: lxc_health: Gained IPv6LL Dec 13 13:28:32.933214 kubelet[2613]: E1213 13:28:32.933184 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:32.936649 systemd-networkd[1388]: lxc343f3d149491: Gained IPv6LL Dec 13 13:28:33.320635 systemd-networkd[1388]: lxc6f3f8201bbad: Gained IPv6LL Dec 13 13:28:33.934470 kubelet[2613]: E1213 13:28:33.934435 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:34.577323 systemd[1]: Started sshd@8-10.0.0.129:22-10.0.0.1:57150.service - OpenSSH per-connection server daemon (10.0.0.1:57150). Dec 13 13:28:34.620531 sshd[3855]: Accepted publickey for core from 10.0.0.1 port 57150 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:28:34.621723 sshd-session[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:34.625305 systemd-logind[1423]: New session 9 of user core. Dec 13 13:28:34.635594 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:28:34.748887 sshd[3857]: Connection closed by 10.0.0.1 port 57150 Dec 13 13:28:34.749480 sshd-session[3855]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:34.754128 systemd[1]: sshd@8-10.0.0.129:22-10.0.0.1:57150.service: Deactivated successfully. Dec 13 13:28:34.756006 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:28:34.756905 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:28:34.760916 systemd-logind[1423]: Removed session 9. Dec 13 13:28:35.025911 containerd[1446]: time="2024-12-13T13:28:35.025594096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:35.025911 containerd[1446]: time="2024-12-13T13:28:35.025685063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:35.025911 containerd[1446]: time="2024-12-13T13:28:35.025715465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:35.025911 containerd[1446]: time="2024-12-13T13:28:35.025810033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:35.026817 containerd[1446]: time="2024-12-13T13:28:35.026116338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:35.026817 containerd[1446]: time="2024-12-13T13:28:35.026161461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:35.026817 containerd[1446]: time="2024-12-13T13:28:35.026184543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:35.026817 containerd[1446]: time="2024-12-13T13:28:35.026268870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:35.040716 systemd[1]: run-containerd-runc-k8s.io-02adcbe6cef9370c2cdf10d997937dfb0b7049108fde5c0a2b9d25aa0b48b0d0-runc.VxBJrp.mount: Deactivated successfully. Dec 13 13:28:35.050732 systemd[1]: Started cri-containerd-02adcbe6cef9370c2cdf10d997937dfb0b7049108fde5c0a2b9d25aa0b48b0d0.scope - libcontainer container 02adcbe6cef9370c2cdf10d997937dfb0b7049108fde5c0a2b9d25aa0b48b0d0. Dec 13 13:28:35.052018 systemd[1]: Started cri-containerd-c68fb05b15a45dcf836547a8fdc9270a21c83eb5f33392bc37fd25a72bb224af.scope - libcontainer container c68fb05b15a45dcf836547a8fdc9270a21c83eb5f33392bc37fd25a72bb224af. 
Dec 13 13:28:35.061066 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:28:35.064385 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:28:35.079392 containerd[1446]: time="2024-12-13T13:28:35.079354373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fz4tr,Uid:42486364-45bd-4d33-8116-51af068948ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"02adcbe6cef9370c2cdf10d997937dfb0b7049108fde5c0a2b9d25aa0b48b0d0\"" Dec 13 13:28:35.080441 kubelet[2613]: E1213 13:28:35.080418 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:35.082980 containerd[1446]: time="2024-12-13T13:28:35.082894537Z" level=info msg="CreateContainer within sandbox \"02adcbe6cef9370c2cdf10d997937dfb0b7049108fde5c0a2b9d25aa0b48b0d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:28:35.083562 containerd[1446]: time="2024-12-13T13:28:35.083478264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6sdzx,Uid:566b63f4-0edd-43f3-bab2-8d114d1e185a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c68fb05b15a45dcf836547a8fdc9270a21c83eb5f33392bc37fd25a72bb224af\"" Dec 13 13:28:35.084702 kubelet[2613]: E1213 13:28:35.084677 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:35.086719 containerd[1446]: time="2024-12-13T13:28:35.086649599Z" level=info msg="CreateContainer within sandbox \"c68fb05b15a45dcf836547a8fdc9270a21c83eb5f33392bc37fd25a72bb224af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:28:35.107891 containerd[1446]: time="2024-12-13T13:28:35.107835900Z" level=info msg="CreateContainer within sandbox \"02adcbe6cef9370c2cdf10d997937dfb0b7049108fde5c0a2b9d25aa0b48b0d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"129e3485cb2b5105d3bf3e69559c5956479d88fa658d2bb02e22356ee6055ced\"" Dec 13 13:28:35.108351 containerd[1446]: time="2024-12-13T13:28:35.108317019Z" level=info msg="StartContainer for \"129e3485cb2b5105d3bf3e69559c5956479d88fa658d2bb02e22356ee6055ced\"" Dec 13 13:28:35.132620 systemd[1]: Started cri-containerd-129e3485cb2b5105d3bf3e69559c5956479d88fa658d2bb02e22356ee6055ced.scope - libcontainer container 129e3485cb2b5105d3bf3e69559c5956479d88fa658d2bb02e22356ee6055ced. 
Dec 13 13:28:35.143012 containerd[1446]: time="2024-12-13T13:28:35.142896356Z" level=info msg="CreateContainer within sandbox \"c68fb05b15a45dcf836547a8fdc9270a21c83eb5f33392bc37fd25a72bb224af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb2aa6d76e61d755123790941aec3a0d0f9dcaea8a04082c48af810aa3c33749\"" Dec 13 13:28:35.143661 containerd[1446]: time="2024-12-13T13:28:35.143565530Z" level=info msg="StartContainer for \"bb2aa6d76e61d755123790941aec3a0d0f9dcaea8a04082c48af810aa3c33749\"" Dec 13 13:28:35.158815 containerd[1446]: time="2024-12-13T13:28:35.158721987Z" level=info msg="StartContainer for \"129e3485cb2b5105d3bf3e69559c5956479d88fa658d2bb02e22356ee6055ced\" returns successfully" Dec 13 13:28:35.169662 systemd[1]: Started cri-containerd-bb2aa6d76e61d755123790941aec3a0d0f9dcaea8a04082c48af810aa3c33749.scope - libcontainer container bb2aa6d76e61d755123790941aec3a0d0f9dcaea8a04082c48af810aa3c33749. Dec 13 13:28:35.209944 containerd[1446]: time="2024-12-13T13:28:35.209898177Z" level=info msg="StartContainer for \"bb2aa6d76e61d755123790941aec3a0d0f9dcaea8a04082c48af810aa3c33749\" returns successfully" Dec 13 13:28:35.939222 kubelet[2613]: E1213 13:28:35.939141 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:35.942557 kubelet[2613]: E1213 13:28:35.942538 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:35.947737 kubelet[2613]: I1213 13:28:35.947541 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fz4tr" podStartSLOduration=17.947508053 podStartE2EDuration="17.947508053s" podCreationTimestamp="2024-12-13 13:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:35.947171946 +0000 UTC m=+33.208538669" watchObservedRunningTime="2024-12-13 13:28:35.947508053 +0000 UTC m=+33.208874776" Dec 13 13:28:35.957392 kubelet[2613]: I1213 13:28:35.957307 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-6sdzx" podStartSLOduration=17.957257236 podStartE2EDuration="17.957257236s" podCreationTimestamp="2024-12-13 13:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:35.956978574 +0000 UTC m=+33.218345297" watchObservedRunningTime="2024-12-13 13:28:35.957257236 +0000 UTC m=+33.218623959" Dec 13 13:28:36.944433 kubelet[2613]: E1213 13:28:36.944352 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:36.944433 kubelet[2613]: E1213 13:28:36.944418 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:37.946094 kubelet[2613]: E1213 13:28:37.946001 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:37.946094 kubelet[2613]: E1213 13:28:37.946065 2613 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:39.764298 systemd[1]: Started sshd@9-10.0.0.129:22-10.0.0.1:57162.service - OpenSSH per-connection server daemon (10.0.0.1:57162). Dec 13 13:28:39.815563 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 57162 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:28:39.817516 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:39.821466 systemd-logind[1423]: New session 10 of user core. Dec 13 13:28:39.832682 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 13:28:39.970536 sshd[4041]: Connection closed by 10.0.0.1 port 57162 Dec 13 13:28:39.971196 sshd-session[4039]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:39.974026 systemd[1]: sshd@9-10.0.0.129:22-10.0.0.1:57162.service: Deactivated successfully. Dec 13 13:28:39.976262 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:28:39.977971 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:28:39.979902 systemd-logind[1423]: Removed session 10. Dec 13 13:28:44.983098 systemd[1]: Started sshd@10-10.0.0.129:22-10.0.0.1:46530.service - OpenSSH per-connection server daemon (10.0.0.1:46530). Dec 13 13:28:45.024523 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 46530 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:28:45.025672 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:45.029320 systemd-logind[1423]: New session 11 of user core. Dec 13 13:28:45.037610 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 13:28:45.145483 sshd[4057]: Connection closed by 10.0.0.1 port 46530 Dec 13 13:28:45.145940 sshd-session[4055]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:45.159938 systemd[1]: sshd@10-10.0.0.129:22-10.0.0.1:46530.service: Deactivated successfully. Dec 13 13:28:45.161285 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:28:45.162518 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:28:45.163737 systemd[1]: Started sshd@11-10.0.0.129:22-10.0.0.1:46540.service - OpenSSH per-connection server daemon (10.0.0.1:46540). Dec 13 13:28:45.164739 systemd-logind[1423]: Removed session 11. Dec 13 13:28:45.202265 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 46540 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:28:45.203317 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:45.206935 systemd-logind[1423]: New session 12 of user core. Dec 13 13:28:45.218637 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 13:28:45.362046 sshd[4072]: Connection closed by 10.0.0.1 port 46540 Dec 13 13:28:45.362745 sshd-session[4070]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:45.371130 systemd[1]: sshd@11-10.0.0.129:22-10.0.0.1:46540.service: Deactivated successfully. Dec 13 13:28:45.374966 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 13:28:45.376297 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit. Dec 13 13:28:45.386741 systemd[1]: Started sshd@12-10.0.0.129:22-10.0.0.1:46546.service - OpenSSH per-connection server daemon (10.0.0.1:46546). 
Dec 13 13:28:45.387626 systemd-logind[1423]: Removed session 12. Dec 13 13:28:45.424238 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 46546 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:28:45.425348 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:45.428734 systemd-logind[1423]: New session 13 of user core. Dec 13 13:28:45.443578 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 13:28:45.554788 sshd[4087]: Connection closed by 10.0.0.1 port 46546 Dec 13 13:28:45.555134 sshd-session[4085]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:45.558107 systemd[1]: sshd@12-10.0.0.129:22-10.0.0.1:46546.service: Deactivated successfully. Dec 13 13:28:45.559694 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 13:28:45.560301 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit. Dec 13 13:28:45.561414 systemd-logind[1423]: Removed session 13. Dec 13 13:28:50.568921 systemd[1]: Started sshd@13-10.0.0.129:22-10.0.0.1:46552.service - OpenSSH per-connection server daemon (10.0.0.1:46552). Dec 13 13:28:50.608999 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 46552 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:28:50.610383 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:50.614502 systemd-logind[1423]: New session 14 of user core. Dec 13 13:28:50.620688 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 13:28:50.746487 sshd[4106]: Connection closed by 10.0.0.1 port 46552 Dec 13 13:28:50.745513 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:50.748365 systemd[1]: sshd@13-10.0.0.129:22-10.0.0.1:46552.service: Deactivated successfully. Dec 13 13:28:50.749916 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 13:28:50.751796 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit. Dec 13 13:28:50.752713 systemd-logind[1423]: Removed session 14. Dec 13 13:28:55.761160 systemd[1]: Started sshd@14-10.0.0.129:22-10.0.0.1:53942.service - OpenSSH per-connection server daemon (10.0.0.1:53942). Dec 13 13:28:55.806044 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 53942 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:28:55.807198 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:55.810854 systemd-logind[1423]: New session 15 of user core. Dec 13 13:28:55.822625 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 13:28:55.953561 sshd[4121]: Connection closed by 10.0.0.1 port 53942 Dec 13 13:28:55.954679 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:55.961911 systemd[1]: sshd@14-10.0.0.129:22-10.0.0.1:53942.service: Deactivated successfully. Dec 13 13:28:55.964062 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 13:28:55.966253 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit. Dec 13 13:28:55.976784 systemd[1]: Started sshd@15-10.0.0.129:22-10.0.0.1:53954.service - OpenSSH per-connection server daemon (10.0.0.1:53954). Dec 13 13:28:55.978651 systemd-logind[1423]: Removed session 15. 
Dec 13 13:28:56.013064 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 53954 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:28:56.014091 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:56.017962 systemd-logind[1423]: New session 16 of user core. Dec 13 13:28:56.030619 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 13:28:56.297900 sshd[4136]: Connection closed by 10.0.0.1 port 53954 Dec 13 13:28:56.298971 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:56.311854 systemd[1]: sshd@15-10.0.0.129:22-10.0.0.1:53954.service: Deactivated successfully. Dec 13 13:28:56.313396 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 13:28:56.315408 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit. Dec 13 13:28:56.329725 systemd[1]: Started sshd@16-10.0.0.129:22-10.0.0.1:53960.service - OpenSSH per-connection server daemon (10.0.0.1:53960). Dec 13 13:28:56.331416 systemd-logind[1423]: Removed session 16. Dec 13 13:28:56.381051 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 53960 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:28:56.382354 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:56.385901 systemd-logind[1423]: New session 17 of user core. Dec 13 13:28:56.395604 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 13:28:57.594102 sshd[4148]: Connection closed by 10.0.0.1 port 53960 Dec 13 13:28:57.596119 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:57.600958 systemd[1]: sshd@16-10.0.0.129:22-10.0.0.1:53960.service: Deactivated successfully. Dec 13 13:28:57.604438 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 13:28:57.606193 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit. Dec 13 13:28:57.620041 systemd[1]: Started sshd@17-10.0.0.129:22-10.0.0.1:53966.service - OpenSSH per-connection server daemon (10.0.0.1:53966). Dec 13 13:28:57.624813 systemd-logind[1423]: Removed session 17. Dec 13 13:28:57.658386 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 53966 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:28:57.659312 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:57.663381 systemd-logind[1423]: New session 18 of user core. Dec 13 13:28:57.672609 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 13:28:57.887604 sshd[4179]: Connection closed by 10.0.0.1 port 53966 Dec 13 13:28:57.888297 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:57.897315 systemd[1]: sshd@17-10.0.0.129:22-10.0.0.1:53966.service: Deactivated successfully. Dec 13 13:28:57.899191 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 13:28:57.901639 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit. Dec 13 13:28:57.909716 systemd[1]: Started sshd@18-10.0.0.129:22-10.0.0.1:53980.service - OpenSSH per-connection server daemon (10.0.0.1:53980). Dec 13 13:28:57.911126 systemd-logind[1423]: Removed session 18. 
Dec 13 13:28:57.951706 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 53980 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:28:57.953711 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:57.957429 systemd-logind[1423]: New session 19 of user core. Dec 13 13:28:57.973670 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 13:28:58.092626 sshd[4191]: Connection closed by 10.0.0.1 port 53980 Dec 13 13:28:58.092695 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:58.096224 systemd[1]: sshd@18-10.0.0.129:22-10.0.0.1:53980.service: Deactivated successfully. Dec 13 13:28:58.100516 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 13:28:58.101320 systemd-logind[1423]: Session 19 logged out. Waiting for processes to exit. Dec 13 13:28:58.102397 systemd-logind[1423]: Removed session 19. Dec 13 13:29:03.103203 systemd[1]: Started sshd@19-10.0.0.129:22-10.0.0.1:32844.service - OpenSSH per-connection server daemon (10.0.0.1:32844). Dec 13 13:29:03.141822 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 32844 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:29:03.142986 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:03.146417 systemd-logind[1423]: New session 20 of user core. Dec 13 13:29:03.154629 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 13:29:03.274073 sshd[4210]: Connection closed by 10.0.0.1 port 32844 Dec 13 13:29:03.274474 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:03.277561 systemd[1]: sshd@19-10.0.0.129:22-10.0.0.1:32844.service: Deactivated successfully. Dec 13 13:29:03.279191 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 13:29:03.281302 systemd-logind[1423]: Session 20 logged out. Waiting for processes to exit. Dec 13 13:29:03.283496 systemd-logind[1423]: Removed session 20. Dec 13 13:29:08.293532 systemd[1]: Started sshd@20-10.0.0.129:22-10.0.0.1:32856.service - OpenSSH per-connection server daemon (10.0.0.1:32856). Dec 13 13:29:08.332362 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 32856 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:29:08.333585 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:08.337402 systemd-logind[1423]: New session 21 of user core. Dec 13 13:29:08.346614 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 13:29:08.470526 sshd[4225]: Connection closed by 10.0.0.1 port 32856 Dec 13 13:29:08.470865 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:08.474327 systemd[1]: sshd@20-10.0.0.129:22-10.0.0.1:32856.service: Deactivated successfully. Dec 13 13:29:08.476098 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 13:29:08.477423 systemd-logind[1423]: Session 21 logged out. Waiting for processes to exit. Dec 13 13:29:08.478417 systemd-logind[1423]: Removed session 21. Dec 13 13:29:13.482027 systemd[1]: Started sshd@21-10.0.0.129:22-10.0.0.1:55464.service - OpenSSH per-connection server daemon (10.0.0.1:55464). 
Dec 13 13:29:13.521610 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 55464 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:29:13.522724 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:13.525981 systemd-logind[1423]: New session 22 of user core. Dec 13 13:29:13.535679 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 13:29:13.644700 sshd[4239]: Connection closed by 10.0.0.1 port 55464 Dec 13 13:29:13.645064 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:13.654814 systemd[1]: sshd@21-10.0.0.129:22-10.0.0.1:55464.service: Deactivated successfully. Dec 13 13:29:13.657720 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 13:29:13.658827 systemd-logind[1423]: Session 22 logged out. Waiting for processes to exit. Dec 13 13:29:13.666773 systemd[1]: Started sshd@22-10.0.0.129:22-10.0.0.1:55478.service - OpenSSH per-connection server daemon (10.0.0.1:55478). Dec 13 13:29:13.667751 systemd-logind[1423]: Removed session 22. Dec 13 13:29:13.702674 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 55478 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:29:13.703742 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:13.707024 systemd-logind[1423]: New session 23 of user core. Dec 13 13:29:13.718639 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 13:29:15.487816 containerd[1446]: time="2024-12-13T13:29:15.487686756Z" level=info msg="StopContainer for \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\" with timeout 30 (s)" Dec 13 13:29:15.489105 containerd[1446]: time="2024-12-13T13:29:15.488421611Z" level=info msg="Stop container \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\" with signal terminated" Dec 13 13:29:15.504910 systemd[1]: cri-containerd-90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398.scope: Deactivated successfully. Dec 13 13:29:15.511109 systemd[1]: run-containerd-runc-k8s.io-3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2-runc.gvW3nb.mount: Deactivated successfully. Dec 13 13:29:15.523395 containerd[1446]: time="2024-12-13T13:29:15.523315696Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:29:15.529103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398-rootfs.mount: Deactivated successfully. 
Dec 13 13:29:15.535775 containerd[1446]: time="2024-12-13T13:29:15.535655488Z" level=info msg="shim disconnected" id=90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398 namespace=k8s.io Dec 13 13:29:15.535775 containerd[1446]: time="2024-12-13T13:29:15.535770524Z" level=warning msg="cleaning up after shim disconnected" id=90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398 namespace=k8s.io Dec 13 13:29:15.535775 containerd[1446]: time="2024-12-13T13:29:15.535780084Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:29:15.540162 containerd[1446]: time="2024-12-13T13:29:15.540102381Z" level=info msg="StopContainer for \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\" with timeout 2 (s)" Dec 13 13:29:15.540417 containerd[1446]: time="2024-12-13T13:29:15.540389011Z" level=info msg="Stop container \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\" with signal terminated" Dec 13 13:29:15.546648 systemd-networkd[1388]: lxc_health: Link DOWN Dec 13 13:29:15.546990 systemd-networkd[1388]: lxc_health: Lost carrier Dec 13 13:29:15.571142 systemd[1]: cri-containerd-3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2.scope: Deactivated successfully. Dec 13 13:29:15.572389 systemd[1]: cri-containerd-3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2.scope: Consumed 6.232s CPU time. Dec 13 13:29:15.583023 containerd[1446]: time="2024-12-13T13:29:15.582972042Z" level=info msg="StopContainer for \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\" returns successfully" Dec 13 13:29:15.586965 containerd[1446]: time="2024-12-13T13:29:15.586825874Z" level=info msg="StopPodSandbox for \"814764b30af197be82bf68a3b1963a22a683cab96a6777bc8c6afbe05f553329\"" Dec 13 13:29:15.586965 containerd[1446]: time="2024-12-13T13:29:15.586872553Z" level=info msg="Container to stop \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:29:15.588420 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-814764b30af197be82bf68a3b1963a22a683cab96a6777bc8c6afbe05f553329-shm.mount: Deactivated successfully. Dec 13 13:29:15.590887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2-rootfs.mount: Deactivated successfully. Dec 13 13:29:15.594232 systemd[1]: cri-containerd-814764b30af197be82bf68a3b1963a22a683cab96a6777bc8c6afbe05f553329.scope: Deactivated successfully. 
Dec 13 13:29:15.597077 containerd[1446]: time="2024-12-13T13:29:15.596886461Z" level=info msg="shim disconnected" id=3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2 namespace=k8s.io Dec 13 13:29:15.597077 containerd[1446]: time="2024-12-13T13:29:15.596935900Z" level=warning msg="cleaning up after shim disconnected" id=3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2 namespace=k8s.io Dec 13 13:29:15.597077 containerd[1446]: time="2024-12-13T13:29:15.596946659Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:29:15.612896 containerd[1446]: time="2024-12-13T13:29:15.612858373Z" level=info msg="StopContainer for \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\" returns successfully" Dec 13 13:29:15.613351 containerd[1446]: time="2024-12-13T13:29:15.613322157Z" level=info msg="StopPodSandbox for \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\"" Dec 13 13:29:15.613749 containerd[1446]: time="2024-12-13T13:29:15.613446153Z" level=info msg="Container to stop \"0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:29:15.613749 containerd[1446]: time="2024-12-13T13:29:15.613482832Z" level=info msg="Container to stop \"12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:29:15.613749 containerd[1446]: time="2024-12-13T13:29:15.613492472Z" level=info msg="Container to stop \"2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:29:15.613749 containerd[1446]: time="2024-12-13T13:29:15.613500711Z" level=info msg="Container to stop \"cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:29:15.613749 containerd[1446]: time="2024-12-13T13:29:15.613508791Z" level=info msg="Container to stop \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:29:15.617837 containerd[1446]: time="2024-12-13T13:29:15.617671733Z" level=info msg="shim disconnected" id=814764b30af197be82bf68a3b1963a22a683cab96a6777bc8c6afbe05f553329 namespace=k8s.io Dec 13 13:29:15.617837 containerd[1446]: time="2024-12-13T13:29:15.617715172Z" level=warning msg="cleaning up after shim disconnected" id=814764b30af197be82bf68a3b1963a22a683cab96a6777bc8c6afbe05f553329 namespace=k8s.io Dec 13 13:29:15.617837 containerd[1446]: time="2024-12-13T13:29:15.617723092Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:29:15.619914 systemd[1]: cri-containerd-bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256.scope: Deactivated successfully. 
Dec 13 13:29:15.636954 containerd[1446]: time="2024-12-13T13:29:15.636909217Z" level=info msg="TearDown network for sandbox \"814764b30af197be82bf68a3b1963a22a683cab96a6777bc8c6afbe05f553329\" successfully" Dec 13 13:29:15.636954 containerd[1446]: time="2024-12-13T13:29:15.636943335Z" level=info msg="StopPodSandbox for \"814764b30af197be82bf68a3b1963a22a683cab96a6777bc8c6afbe05f553329\" returns successfully" Dec 13 13:29:15.658112 containerd[1446]: time="2024-12-13T13:29:15.658049677Z" level=info msg="shim disconnected" id=bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256 namespace=k8s.io Dec 13 13:29:15.658112 containerd[1446]: time="2024-12-13T13:29:15.658103595Z" level=warning msg="cleaning up after shim disconnected" id=bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256 namespace=k8s.io Dec 13 13:29:15.658112 containerd[1446]: time="2024-12-13T13:29:15.658112395Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:29:15.669855 containerd[1446]: time="2024-12-13T13:29:15.669736450Z" level=info msg="TearDown network for sandbox \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\" successfully" Dec 13 13:29:15.669855 containerd[1446]: time="2024-12-13T13:29:15.669770969Z" level=info msg="StopPodSandbox for \"bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256\" returns successfully" Dec 13 13:29:15.742348 kubelet[2613]: I1213 13:29:15.741491 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08-cilium-config-path\") pod \"5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08\" (UID: \"5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08\") " Dec 13 13:29:15.742348 kubelet[2613]: I1213 13:29:15.741539 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztc8t\" (UniqueName: \"kubernetes.io/projected/5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08-kube-api-access-ztc8t\") pod \"5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08\" (UID: \"5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08\") " Dec 13 13:29:15.746987 kubelet[2613]: I1213 13:29:15.746946 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08" (UID: "5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:29:15.751971 kubelet[2613]: I1213 13:29:15.751921 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08-kube-api-access-ztc8t" (OuterVolumeSpecName: "kube-api-access-ztc8t") pod "5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08" (UID: "5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08"). InnerVolumeSpecName "kube-api-access-ztc8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:29:15.843091 kubelet[2613]: I1213 13:29:15.843038 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:29:15.843176 kubelet[2613]: I1213 13:29:15.843110 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-xtables-lock\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 13 13:29:15.843176 kubelet[2613]: I1213 13:29:15.843147 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-bpf-maps\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 13 13:29:15.843176 kubelet[2613]: I1213 13:29:15.843172 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cilium-config-path\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 13 13:29:15.843254 kubelet[2613]: I1213 13:29:15.843189 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cni-path\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 13 13:29:15.843254 kubelet[2613]: I1213 13:29:15.843207 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-host-proc-sys-net\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 13 13:29:15.843254 kubelet[2613]: I1213 13:29:15.843253 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-host-proc-sys-kernel\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 13 13:29:15.843317 kubelet[2613]: I1213 13:29:15.843277 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8b4cc30c-e86e-42b9-a44a-84d99adf757f-clustermesh-secrets\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 13 13:29:15.843317 kubelet[2613]: I1213 13:29:15.843296 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b4cc30c-e86e-42b9-a44a-84d99adf757f-hubble-tls\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 13 13:29:15.843317 kubelet[2613]: I1213 13:29:15.843313 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-hostproc\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 13 13:29:15.843377 kubelet[2613]: I1213 13:29:15.843330 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cilium-cgroup\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 
13 13:29:15.843377 kubelet[2613]: I1213 13:29:15.843349 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-lib-modules\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 13 13:29:15.843377 kubelet[2613]: I1213 13:29:15.843367 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-etc-cni-netd\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 13 13:29:15.843437 kubelet[2613]: I1213 13:29:15.843388 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8f6x\" (UniqueName: \"kubernetes.io/projected/8b4cc30c-e86e-42b9-a44a-84d99adf757f-kube-api-access-s8f6x\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 13 13:29:15.843437 kubelet[2613]: I1213 13:29:15.843406 2613 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cilium-run\") pod \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\" (UID: \"8b4cc30c-e86e-42b9-a44a-84d99adf757f\") " Dec 13 13:29:15.843523 kubelet[2613]: I1213 13:29:15.843441 2613 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.843523 kubelet[2613]: I1213 13:29:15.843480 2613 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ztc8t\" (UniqueName: \"kubernetes.io/projected/5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08-kube-api-access-ztc8t\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.843523 kubelet[2613]: I1213 13:29:15.843491 2613 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.843588 kubelet[2613]: I1213 13:29:15.843516 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cni-path" (OuterVolumeSpecName: "cni-path") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:29:15.843588 kubelet[2613]: I1213 13:29:15.843546 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:29:15.843588 kubelet[2613]: I1213 13:29:15.843568 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:29:15.843588 kubelet[2613]: I1213 13:29:15.843571 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:29:15.844159 kubelet[2613]: I1213 13:29:15.843840 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:29:15.844159 kubelet[2613]: I1213 13:29:15.843878 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:29:15.844159 kubelet[2613]: I1213 13:29:15.843894 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:29:15.845565 kubelet[2613]: I1213 13:29:15.845531 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:29:15.846035 kubelet[2613]: I1213 13:29:15.845931 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-hostproc" (OuterVolumeSpecName: "hostproc") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:29:15.846163 kubelet[2613]: I1213 13:29:15.846132 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b4cc30c-e86e-42b9-a44a-84d99adf757f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:29:15.846470 kubelet[2613]: I1213 13:29:15.846390 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b4cc30c-e86e-42b9-a44a-84d99adf757f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 13:29:15.847345 kubelet[2613]: I1213 13:29:15.847298 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:29:15.848306 kubelet[2613]: I1213 13:29:15.848277 2613 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b4cc30c-e86e-42b9-a44a-84d99adf757f-kube-api-access-s8f6x" (OuterVolumeSpecName: "kube-api-access-s8f6x") pod "8b4cc30c-e86e-42b9-a44a-84d99adf757f" (UID: "8b4cc30c-e86e-42b9-a44a-84d99adf757f"). InnerVolumeSpecName "kube-api-access-s8f6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:29:15.944611 kubelet[2613]: I1213 13:29:15.944583 2613 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.944709 kubelet[2613]: I1213 13:29:15.944633 2613 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.944709 kubelet[2613]: I1213 13:29:15.944656 2613 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.944709 kubelet[2613]: I1213 13:29:15.944673 2613 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.944709 kubelet[2613]: I1213 13:29:15.944691 2613 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.944709 kubelet[2613]: I1213 13:29:15.944708 2613 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.944807 kubelet[2613]: I1213 13:29:15.944725 2613 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8b4cc30c-e86e-42b9-a44a-84d99adf757f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.944807 kubelet[2613]: I1213 13:29:15.944742 2613 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b4cc30c-e86e-42b9-a44a-84d99adf757f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.944807 kubelet[2613]: I1213 13:29:15.944759 2613 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.944807 kubelet[2613]: I1213 13:29:15.944775 2613 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.944807 kubelet[2613]: I1213 13:29:15.944793 2613 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-s8f6x\" (UniqueName: \"kubernetes.io/projected/8b4cc30c-e86e-42b9-a44a-84d99adf757f-kube-api-access-s8f6x\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.944807 kubelet[2613]: I1213 13:29:15.944810 2613 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:15.944921 kubelet[2613]: I1213 13:29:15.944819 2613 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b4cc30c-e86e-42b9-a44a-84d99adf757f-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 13:29:16.019931 systemd[1]: Removed slice kubepods-burstable-pod8b4cc30c_e86e_42b9_a44a_84d99adf757f.slice - libcontainer container kubepods-burstable-pod8b4cc30c_e86e_42b9_a44a_84d99adf757f.slice. Dec 13 13:29:16.020434 systemd[1]: kubepods-burstable-pod8b4cc30c_e86e_42b9_a44a_84d99adf757f.slice: Consumed 6.361s CPU time. Dec 13 13:29:16.022899 kubelet[2613]: I1213 13:29:16.022773 2613 scope.go:117] "RemoveContainer" containerID="3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2" Dec 13 13:29:16.024867 containerd[1446]: time="2024-12-13T13:29:16.024630710Z" level=info msg="RemoveContainer for \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\"" Dec 13 13:29:16.026132 systemd[1]: Removed slice kubepods-besteffort-pod5c9eb5d5_05bc_49fb_a3f7_6b4b98e2fd08.slice - libcontainer container kubepods-besteffort-pod5c9eb5d5_05bc_49fb_a3f7_6b4b98e2fd08.slice. 
Dec 13 13:29:16.034493 containerd[1446]: time="2024-12-13T13:29:16.034429925Z" level=info msg="RemoveContainer for \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\" returns successfully" Dec 13 13:29:16.034969 kubelet[2613]: I1213 13:29:16.034941 2613 scope.go:117] "RemoveContainer" containerID="12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17" Dec 13 13:29:16.037244 containerd[1446]: time="2024-12-13T13:29:16.037200958Z" level=info msg="RemoveContainer for \"12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17\"" Dec 13 13:29:16.040053 containerd[1446]: time="2024-12-13T13:29:16.040018631Z" level=info msg="RemoveContainer for \"12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17\" returns successfully" Dec 13 13:29:16.040318 kubelet[2613]: I1213 13:29:16.040194 2613 scope.go:117] "RemoveContainer" containerID="cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75" Dec 13 13:29:16.074483 containerd[1446]: time="2024-12-13T13:29:16.074439158Z" level=info msg="RemoveContainer for \"cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75\"" Dec 13 13:29:16.076683 containerd[1446]: time="2024-12-13T13:29:16.076649769Z" level=info msg="RemoveContainer for \"cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75\" returns successfully" Dec 13 13:29:16.076941 kubelet[2613]: I1213 13:29:16.076831 2613 scope.go:117] "RemoveContainer" containerID="0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858" Dec 13 13:29:16.077830 containerd[1446]: time="2024-12-13T13:29:16.077807413Z" level=info msg="RemoveContainer for \"0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858\"" Dec 13 13:29:16.081013 containerd[1446]: time="2024-12-13T13:29:16.080976474Z" level=info msg="RemoveContainer for \"0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858\" returns successfully" Dec 13 13:29:16.081210 kubelet[2613]: I1213 13:29:16.081167 2613 scope.go:117] "RemoveContainer" containerID="2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601" Dec 13 13:29:16.082194 containerd[1446]: time="2024-12-13T13:29:16.082146438Z" level=info msg="RemoveContainer for \"2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601\"" Dec 13 13:29:16.091504 containerd[1446]: time="2024-12-13T13:29:16.091440628Z" level=info msg="RemoveContainer for \"2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601\" returns successfully" Dec 13 13:29:16.091676 kubelet[2613]: I1213 13:29:16.091642 2613 scope.go:117] "RemoveContainer" containerID="3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2" Dec 13 13:29:16.091883 containerd[1446]: time="2024-12-13T13:29:16.091839736Z" level=error msg="ContainerStatus for \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\": not found" Dec 13 13:29:16.098917 kubelet[2613]: E1213 13:29:16.098881 2613 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\": not found" containerID="3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2" Dec 13 13:29:16.102454 kubelet[2613]: I1213 13:29:16.102421 2613 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2"} err="failed to get container status \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f049790dac67a668481a54da77d9222e9f1ad855fc2d5777f65928a22f322a2\": not found" Dec 13 13:29:16.102510 kubelet[2613]: I1213 13:29:16.102471 2613 scope.go:117] "RemoveContainer" containerID="12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17" Dec 13 13:29:16.102735 containerd[1446]: time="2024-12-13T13:29:16.102701077Z" level=error msg="ContainerStatus for \"12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17\": not found" Dec 13 13:29:16.102964 kubelet[2613]: E1213 13:29:16.102851 2613 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17\": not found" containerID="12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17" Dec 13 13:29:16.102964 kubelet[2613]: I1213 13:29:16.102884 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17"} err="failed to get container status \"12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17\": rpc error: code = NotFound desc = an error occurred when try to find container \"12e2240e95efc9eefd74c6283f4cbf572acdda52e77d562eb4f973f039a97d17\": not found" Dec 13 13:29:16.102964 kubelet[2613]: I1213 13:29:16.102894 2613 scope.go:117] "RemoveContainer" containerID="cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75" Dec 13 13:29:16.103110 containerd[1446]: time="2024-12-13T13:29:16.103052466Z" level=error msg="ContainerStatus for \"cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75\": not found" Dec 13 13:29:16.103208 kubelet[2613]: E1213 13:29:16.103183 2613 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75\": not found" containerID="cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75" Dec 13 13:29:16.103241 kubelet[2613]: I1213 13:29:16.103220 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75"} err="failed to get container status \"cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75\": rpc error: code = NotFound desc = an error occurred when try to find container \"cef788618bef82bd58618df1abc2613d4e626cbd70cc55c4b2b149126fb16a75\": not found" Dec 13 13:29:16.103241 kubelet[2613]: I1213 13:29:16.103231 2613 scope.go:117] "RemoveContainer" containerID="0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858" Dec 13 13:29:16.103428 containerd[1446]: time="2024-12-13T13:29:16.103400176Z" level=error msg="ContainerStatus for 
\"0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858\": not found" Dec 13 13:29:16.103567 kubelet[2613]: E1213 13:29:16.103547 2613 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858\": not found" containerID="0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858" Dec 13 13:29:16.103595 kubelet[2613]: I1213 13:29:16.103578 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858"} err="failed to get container status \"0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c72fcfdcd7cb888aa36f0a35cc02540eb015accb09668f1d43b2e96ae9c9858\": not found" Dec 13 13:29:16.103595 kubelet[2613]: I1213 13:29:16.103588 2613 scope.go:117] "RemoveContainer" containerID="2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601" Dec 13 13:29:16.103798 containerd[1446]: time="2024-12-13T13:29:16.103771364Z" level=error msg="ContainerStatus for \"2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601\": not found" Dec 13 13:29:16.103932 kubelet[2613]: E1213 13:29:16.103917 2613 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601\": not found" containerID="2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601" Dec 13 13:29:16.103969 kubelet[2613]: I1213 13:29:16.103942 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601"} err="failed to get container status \"2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601\": rpc error: code = NotFound desc = an error occurred when try to find container \"2aedec996ed94c7596c26f2997043d98a30062984fde3b08805f2e3b14b18601\": not found" Dec 13 13:29:16.103969 kubelet[2613]: I1213 13:29:16.103953 2613 scope.go:117] "RemoveContainer" containerID="90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398" Dec 13 13:29:16.105032 containerd[1446]: time="2024-12-13T13:29:16.104957487Z" level=info msg="RemoveContainer for \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\"" Dec 13 13:29:16.107097 containerd[1446]: time="2024-12-13T13:29:16.107064621Z" level=info msg="RemoveContainer for \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\" returns successfully" Dec 13 13:29:16.107282 kubelet[2613]: I1213 13:29:16.107254 2613 scope.go:117] "RemoveContainer" containerID="90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398" Dec 13 13:29:16.107557 containerd[1446]: time="2024-12-13T13:29:16.107440010Z" level=error msg="ContainerStatus for \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\": not found" Dec 13 13:29:16.107656 kubelet[2613]: E1213 13:29:16.107637 2613 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\": not found" containerID="90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398" Dec 13 13:29:16.107686 kubelet[2613]: I1213 13:29:16.107671 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398"} err="failed to get container status \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\": rpc error: code = NotFound desc = an error occurred when try to find container \"90d1f8e3142cbae9cc79f1bacda30add603e6d72a2e3560c6eb0790c92d6f398\": not found" Dec 13 13:29:16.506752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-814764b30af197be82bf68a3b1963a22a683cab96a6777bc8c6afbe05f553329-rootfs.mount: Deactivated successfully. Dec 13 13:29:16.506844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256-rootfs.mount: Deactivated successfully. Dec 13 13:29:16.506894 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdd3ddf750d3726634cb41aaf8eec6c4d9bf3e5bc24cb0db5f49193a7f83c256-shm.mount: Deactivated successfully. Dec 13 13:29:16.506943 systemd[1]: var-lib-kubelet-pods-5c9eb5d5\x2d05bc\x2d49fb\x2da3f7\x2d6b4b98e2fd08-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dztc8t.mount: Deactivated successfully. Dec 13 13:29:16.506993 systemd[1]: var-lib-kubelet-pods-8b4cc30c\x2de86e\x2d42b9\x2da44a\x2d84d99adf757f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds8f6x.mount: Deactivated successfully. Dec 13 13:29:16.507037 systemd[1]: var-lib-kubelet-pods-8b4cc30c\x2de86e\x2d42b9\x2da44a\x2d84d99adf757f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 13:29:16.507083 systemd[1]: var-lib-kubelet-pods-8b4cc30c\x2de86e\x2d42b9\x2da44a\x2d84d99adf757f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 13:29:16.829670 kubelet[2613]: I1213 13:29:16.829578 2613 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08" path="/var/lib/kubelet/pods/5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08/volumes" Dec 13 13:29:16.830003 kubelet[2613]: I1213 13:29:16.829983 2613 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8b4cc30c-e86e-42b9-a44a-84d99adf757f" path="/var/lib/kubelet/pods/8b4cc30c-e86e-42b9-a44a-84d99adf757f/volumes" Dec 13 13:29:17.430590 sshd[4253]: Connection closed by 10.0.0.1 port 55478 Dec 13 13:29:17.430938 sshd-session[4251]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:17.445858 systemd[1]: sshd@22-10.0.0.129:22-10.0.0.1:55478.service: Deactivated successfully. Dec 13 13:29:17.447287 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 13:29:17.447479 systemd[1]: session-23.scope: Consumed 1.090s CPU time. Dec 13 13:29:17.448507 systemd-logind[1423]: Session 23 logged out. Waiting for processes to exit. Dec 13 13:29:17.449848 systemd[1]: Started sshd@23-10.0.0.129:22-10.0.0.1:55494.service - OpenSSH per-connection server daemon (10.0.0.1:55494). 
Dec 13 13:29:17.450792 systemd-logind[1423]: Removed session 23. Dec 13 13:29:17.500887 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 55494 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:29:17.501935 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:17.505770 systemd-logind[1423]: New session 24 of user core. Dec 13 13:29:17.511659 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 13:29:17.886199 kubelet[2613]: E1213 13:29:17.886158 2613 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 13:29:18.894919 sshd[4417]: Connection closed by 10.0.0.1 port 55494 Dec 13 13:29:18.895192 sshd-session[4415]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:18.906788 systemd[1]: sshd@23-10.0.0.129:22-10.0.0.1:55494.service: Deactivated successfully. Dec 13 13:29:18.910953 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 13:29:18.911631 systemd[1]: session-24.scope: Consumed 1.305s CPU time. Dec 13 13:29:18.914165 systemd-logind[1423]: Session 24 logged out. Waiting for processes to exit. Dec 13 13:29:18.915041 kubelet[2613]: I1213 13:29:18.913863 2613 topology_manager.go:215] "Topology Admit Handler" podUID="a1307b14-c341-49af-8f3c-d65a54f1b794" podNamespace="kube-system" podName="cilium-57x5p" Dec 13 13:29:18.916538 kubelet[2613]: E1213 13:29:18.914869 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b4cc30c-e86e-42b9-a44a-84d99adf757f" containerName="mount-cgroup" Dec 13 13:29:18.916538 kubelet[2613]: E1213 13:29:18.916210 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b4cc30c-e86e-42b9-a44a-84d99adf757f" containerName="apply-sysctl-overwrites" Dec 13 13:29:18.916538 kubelet[2613]: E1213 13:29:18.916219 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b4cc30c-e86e-42b9-a44a-84d99adf757f" containerName="mount-bpf-fs" Dec 13 13:29:18.916538 kubelet[2613]: E1213 13:29:18.916226 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08" containerName="cilium-operator" Dec 13 13:29:18.916538 kubelet[2613]: E1213 13:29:18.916266 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b4cc30c-e86e-42b9-a44a-84d99adf757f" containerName="clean-cilium-state" Dec 13 13:29:18.916538 kubelet[2613]: E1213 13:29:18.916277 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b4cc30c-e86e-42b9-a44a-84d99adf757f" containerName="cilium-agent" Dec 13 13:29:18.920893 systemd[1]: Started sshd@24-10.0.0.129:22-10.0.0.1:55502.service - OpenSSH per-connection server daemon (10.0.0.1:55502). Dec 13 13:29:18.923360 kubelet[2613]: I1213 13:29:18.923338 2613 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b4cc30c-e86e-42b9-a44a-84d99adf757f" containerName="cilium-agent" Dec 13 13:29:18.923444 kubelet[2613]: I1213 13:29:18.923434 2613 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9eb5d5-05bc-49fb-a3f7-6b4b98e2fd08" containerName="cilium-operator" Dec 13 13:29:18.926675 systemd-logind[1423]: Removed session 24. Dec 13 13:29:18.935428 systemd[1]: Created slice kubepods-burstable-poda1307b14_c341_49af_8f3c_d65a54f1b794.slice - libcontainer container kubepods-burstable-poda1307b14_c341_49af_8f3c_d65a54f1b794.slice. 
Dec 13 13:29:18.962840 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 55502 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:29:18.964046 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:18.967542 systemd-logind[1423]: New session 25 of user core. Dec 13 13:29:18.976583 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 13:29:19.025370 sshd[4433]: Connection closed by 10.0.0.1 port 55502 Dec 13 13:29:19.024657 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:19.037602 systemd[1]: sshd@24-10.0.0.129:22-10.0.0.1:55502.service: Deactivated successfully. Dec 13 13:29:19.039078 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 13:29:19.040293 systemd-logind[1423]: Session 25 logged out. Waiting for processes to exit. Dec 13 13:29:19.041545 systemd[1]: Started sshd@25-10.0.0.129:22-10.0.0.1:55514.service - OpenSSH per-connection server daemon (10.0.0.1:55514). Dec 13 13:29:19.042367 systemd-logind[1423]: Removed session 25. Dec 13 13:29:19.062394 kubelet[2613]: I1213 13:29:19.062278 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1307b14-c341-49af-8f3c-d65a54f1b794-hubble-tls\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062394 kubelet[2613]: I1213 13:29:19.062341 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dmpt\" (UniqueName: \"kubernetes.io/projected/a1307b14-c341-49af-8f3c-d65a54f1b794-kube-api-access-2dmpt\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062620 kubelet[2613]: I1213 13:29:19.062547 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1307b14-c341-49af-8f3c-d65a54f1b794-host-proc-sys-net\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062620 kubelet[2613]: I1213 13:29:19.062585 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1307b14-c341-49af-8f3c-d65a54f1b794-host-proc-sys-kernel\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062620 kubelet[2613]: I1213 13:29:19.062605 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1307b14-c341-49af-8f3c-d65a54f1b794-lib-modules\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062694 kubelet[2613]: I1213 13:29:19.062627 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1307b14-c341-49af-8f3c-d65a54f1b794-etc-cni-netd\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062694 kubelet[2613]: I1213 13:29:19.062647 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" 
(UniqueName: \"kubernetes.io/secret/a1307b14-c341-49af-8f3c-d65a54f1b794-cilium-ipsec-secrets\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062694 kubelet[2613]: I1213 13:29:19.062667 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1307b14-c341-49af-8f3c-d65a54f1b794-bpf-maps\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062694 kubelet[2613]: I1213 13:29:19.062686 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1307b14-c341-49af-8f3c-d65a54f1b794-cilium-cgroup\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062774 kubelet[2613]: I1213 13:29:19.062708 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1307b14-c341-49af-8f3c-d65a54f1b794-cilium-config-path\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062774 kubelet[2613]: I1213 13:29:19.062726 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1307b14-c341-49af-8f3c-d65a54f1b794-clustermesh-secrets\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062774 kubelet[2613]: I1213 13:29:19.062744 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1307b14-c341-49af-8f3c-d65a54f1b794-cni-path\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062774 kubelet[2613]: I1213 13:29:19.062762 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1307b14-c341-49af-8f3c-d65a54f1b794-cilium-run\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062855 kubelet[2613]: I1213 13:29:19.062782 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1307b14-c341-49af-8f3c-d65a54f1b794-hostproc\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.062855 kubelet[2613]: I1213 13:29:19.062801 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1307b14-c341-49af-8f3c-d65a54f1b794-xtables-lock\") pod \"cilium-57x5p\" (UID: \"a1307b14-c341-49af-8f3c-d65a54f1b794\") " pod="kube-system/cilium-57x5p" Dec 13 13:29:19.079937 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 55514 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:29:19.081092 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:19.084465 systemd-logind[1423]: New session 26 of user core. 
Dec 13 13:29:19.093588 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 13:29:19.244227 kubelet[2613]: E1213 13:29:19.244130 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:19.245020 containerd[1446]: time="2024-12-13T13:29:19.244718752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57x5p,Uid:a1307b14-c341-49af-8f3c-d65a54f1b794,Namespace:kube-system,Attempt:0,}" Dec 13 13:29:19.262832 containerd[1446]: time="2024-12-13T13:29:19.262683250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:29:19.262832 containerd[1446]: time="2024-12-13T13:29:19.262760728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:29:19.262832 containerd[1446]: time="2024-12-13T13:29:19.262772168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:19.262985 containerd[1446]: time="2024-12-13T13:29:19.262935484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:19.280625 systemd[1]: Started cri-containerd-43804bdc0f5e93eabf30d2044147e1c419f03e86609d52341801d13c6106d969.scope - libcontainer container 43804bdc0f5e93eabf30d2044147e1c419f03e86609d52341801d13c6106d969. Dec 13 13:29:19.299641 containerd[1446]: time="2024-12-13T13:29:19.299599781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57x5p,Uid:a1307b14-c341-49af-8f3c-d65a54f1b794,Namespace:kube-system,Attempt:0,} returns sandbox id \"43804bdc0f5e93eabf30d2044147e1c419f03e86609d52341801d13c6106d969\"" Dec 13 13:29:19.300268 kubelet[2613]: E1213 13:29:19.300248 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:19.302501 containerd[1446]: time="2024-12-13T13:29:19.302469347Z" level=info msg="CreateContainer within sandbox \"43804bdc0f5e93eabf30d2044147e1c419f03e86609d52341801d13c6106d969\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 13:29:19.313436 containerd[1446]: time="2024-12-13T13:29:19.313390867Z" level=info msg="CreateContainer within sandbox \"43804bdc0f5e93eabf30d2044147e1c419f03e86609d52341801d13c6106d969\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0b26dbb196da09fea90008e3d5f1048bb571d4ad9ff22fec385568ac48977b1e\"" Dec 13 13:29:19.313878 containerd[1446]: time="2024-12-13T13:29:19.313836895Z" level=info msg="StartContainer for \"0b26dbb196da09fea90008e3d5f1048bb571d4ad9ff22fec385568ac48977b1e\"" Dec 13 13:29:19.344655 systemd[1]: Started cri-containerd-0b26dbb196da09fea90008e3d5f1048bb571d4ad9ff22fec385568ac48977b1e.scope - libcontainer container 0b26dbb196da09fea90008e3d5f1048bb571d4ad9ff22fec385568ac48977b1e. Dec 13 13:29:19.367005 containerd[1446]: time="2024-12-13T13:29:19.366956530Z" level=info msg="StartContainer for \"0b26dbb196da09fea90008e3d5f1048bb571d4ad9ff22fec385568ac48977b1e\" returns successfully" Dec 13 13:29:19.379295 systemd[1]: cri-containerd-0b26dbb196da09fea90008e3d5f1048bb571d4ad9ff22fec385568ac48977b1e.scope: Deactivated successfully. 
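The start-up above follows the standard CRI flow: RunPodSandbox returns a sandbox ID (43804bdc...), CreateContainer registers the mount-cgroup init container inside it, and StartContainer launches it, after which the short-lived container exits and its scope deactivates. A condensed sketch of the same three calls, reusing the runtime client from the first example; the image reference and most config fields are placeholders:

```go
package crutil

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// startInitContainer sketches the RunPodSandbox -> CreateContainer ->
// StartContainer sequence from the log. The config is pared down to a
// minimum; real kubelet requests also carry labels, mounts for every
// volume listed above, and Linux security options.
func startInitContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-57x5p",
			Namespace: "kube-system",
			Uid:       "a1307b14-c341-49af-8f3c-d65a54f1b794",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "example.invalid/cilium:placeholder"}, // placeholder image
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
	return err
}
```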
Dec 13 13:29:19.408792 containerd[1446]: time="2024-12-13T13:29:19.408739736Z" level=info msg="shim disconnected" id=0b26dbb196da09fea90008e3d5f1048bb571d4ad9ff22fec385568ac48977b1e namespace=k8s.io Dec 13 13:29:19.408792 containerd[1446]: time="2024-12-13T13:29:19.408790694Z" level=warning msg="cleaning up after shim disconnected" id=0b26dbb196da09fea90008e3d5f1048bb571d4ad9ff22fec385568ac48977b1e namespace=k8s.io Dec 13 13:29:19.408792 containerd[1446]: time="2024-12-13T13:29:19.408798574Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:29:20.029179 kubelet[2613]: E1213 13:29:20.029133 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:20.031308 containerd[1446]: time="2024-12-13T13:29:20.031245307Z" level=info msg="CreateContainer within sandbox \"43804bdc0f5e93eabf30d2044147e1c419f03e86609d52341801d13c6106d969\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 13:29:20.041697 containerd[1446]: time="2024-12-13T13:29:20.041646497Z" level=info msg="CreateContainer within sandbox \"43804bdc0f5e93eabf30d2044147e1c419f03e86609d52341801d13c6106d969\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e6bc481b862f9e9be93eee5a85d244b0bdbe41db3fe30bf10c06597b8c4d84e5\"" Dec 13 13:29:20.042891 containerd[1446]: time="2024-12-13T13:29:20.042394559Z" level=info msg="StartContainer for \"e6bc481b862f9e9be93eee5a85d244b0bdbe41db3fe30bf10c06597b8c4d84e5\"" Dec 13 13:29:20.071629 systemd[1]: Started cri-containerd-e6bc481b862f9e9be93eee5a85d244b0bdbe41db3fe30bf10c06597b8c4d84e5.scope - libcontainer container e6bc481b862f9e9be93eee5a85d244b0bdbe41db3fe30bf10c06597b8c4d84e5. Dec 13 13:29:20.094678 containerd[1446]: time="2024-12-13T13:29:20.094569787Z" level=info msg="StartContainer for \"e6bc481b862f9e9be93eee5a85d244b0bdbe41db3fe30bf10c06597b8c4d84e5\" returns successfully" Dec 13 13:29:20.105816 systemd[1]: cri-containerd-e6bc481b862f9e9be93eee5a85d244b0bdbe41db3fe30bf10c06597b8c4d84e5.scope: Deactivated successfully. Dec 13 13:29:20.125289 containerd[1446]: time="2024-12-13T13:29:20.125154773Z" level=info msg="shim disconnected" id=e6bc481b862f9e9be93eee5a85d244b0bdbe41db3fe30bf10c06597b8c4d84e5 namespace=k8s.io Dec 13 13:29:20.125289 containerd[1446]: time="2024-12-13T13:29:20.125264531Z" level=warning msg="cleaning up after shim disconnected" id=e6bc481b862f9e9be93eee5a85d244b0bdbe41db3fe30bf10c06597b8c4d84e5 namespace=k8s.io Dec 13 13:29:20.125730 containerd[1446]: time="2024-12-13T13:29:20.125275490Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:29:21.034232 kubelet[2613]: E1213 13:29:21.034196 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:21.037710 containerd[1446]: time="2024-12-13T13:29:21.037598417Z" level=info msg="CreateContainer within sandbox \"43804bdc0f5e93eabf30d2044147e1c419f03e86609d52341801d13c6106d969\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 13:29:21.058780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2694565018.mount: Deactivated successfully. 
Dec 13 13:29:21.065726 containerd[1446]: time="2024-12-13T13:29:21.065678670Z" level=info msg="CreateContainer within sandbox \"43804bdc0f5e93eabf30d2044147e1c419f03e86609d52341801d13c6106d969\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7bbc8cc9754d38a0ef2bf74e5e390b6fd9482650e49081f009568f2e252ad81d\"" Dec 13 13:29:21.066332 containerd[1446]: time="2024-12-13T13:29:21.066205938Z" level=info msg="StartContainer for \"7bbc8cc9754d38a0ef2bf74e5e390b6fd9482650e49081f009568f2e252ad81d\"" Dec 13 13:29:21.091624 systemd[1]: Started cri-containerd-7bbc8cc9754d38a0ef2bf74e5e390b6fd9482650e49081f009568f2e252ad81d.scope - libcontainer container 7bbc8cc9754d38a0ef2bf74e5e390b6fd9482650e49081f009568f2e252ad81d. Dec 13 13:29:21.123695 containerd[1446]: time="2024-12-13T13:29:21.123426260Z" level=info msg="StartContainer for \"7bbc8cc9754d38a0ef2bf74e5e390b6fd9482650e49081f009568f2e252ad81d\" returns successfully" Dec 13 13:29:21.123862 systemd[1]: cri-containerd-7bbc8cc9754d38a0ef2bf74e5e390b6fd9482650e49081f009568f2e252ad81d.scope: Deactivated successfully. Dec 13 13:29:21.167167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bbc8cc9754d38a0ef2bf74e5e390b6fd9482650e49081f009568f2e252ad81d-rootfs.mount: Deactivated successfully. Dec 13 13:29:21.172440 containerd[1446]: time="2024-12-13T13:29:21.172376806Z" level=info msg="shim disconnected" id=7bbc8cc9754d38a0ef2bf74e5e390b6fd9482650e49081f009568f2e252ad81d namespace=k8s.io Dec 13 13:29:21.172570 containerd[1446]: time="2024-12-13T13:29:21.172442924Z" level=warning msg="cleaning up after shim disconnected" id=7bbc8cc9754d38a0ef2bf74e5e390b6fd9482650e49081f009568f2e252ad81d namespace=k8s.io Dec 13 13:29:21.172570 containerd[1446]: time="2024-12-13T13:29:21.172477883Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:29:22.037959 kubelet[2613]: E1213 13:29:22.037934 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:22.040987 containerd[1446]: time="2024-12-13T13:29:22.040850664Z" level=info msg="CreateContainer within sandbox \"43804bdc0f5e93eabf30d2044147e1c419f03e86609d52341801d13c6106d969\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:29:22.054782 containerd[1446]: time="2024-12-13T13:29:22.054731616Z" level=info msg="CreateContainer within sandbox \"43804bdc0f5e93eabf30d2044147e1c419f03e86609d52341801d13c6106d969\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6c121d22bf6919d3c3232cc82396360e1b0d9e03a98f792644a11d660a550d7a\"" Dec 13 13:29:22.055227 containerd[1446]: time="2024-12-13T13:29:22.055197446Z" level=info msg="StartContainer for \"6c121d22bf6919d3c3232cc82396360e1b0d9e03a98f792644a11d660a550d7a\"" Dec 13 13:29:22.089643 systemd[1]: Started cri-containerd-6c121d22bf6919d3c3232cc82396360e1b0d9e03a98f792644a11d660a550d7a.scope - libcontainer container 6c121d22bf6919d3c3232cc82396360e1b0d9e03a98f792644a11d660a550d7a. Dec 13 13:29:22.110966 systemd[1]: cri-containerd-6c121d22bf6919d3c3232cc82396360e1b0d9e03a98f792644a11d660a550d7a.scope: Deactivated successfully. 
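Each init container above repeats the same arc: StartContainer returns, the cri-containerd-....scope deactivates when the process exits, the shim logs "shim disconnected" and is cleaned up, and the matching rootfs.mount unit is deactivated. A sketch of the equivalent wait-and-reap step through containerd's native Go client rather than the CRI, assuming the k8s.io namespace used by the CRI plugin; the container ID is a placeholder:

```go
// Sketch: wait for a task to exit, then delete the task and container.
// Deleting the task reaps the shim; deleting the container with
// snapshot cleanup releases the rootfs mount that systemd logs as
// run-containerd-...-rootfs.mount: Deactivated.
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	c, err := client.LoadContainer(ctx, "7bbc8cc9754d...") // placeholder, shortened ID
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	exitCh, err := task.Wait(ctx) // resolves when the shim reports exit
	if err != nil {
		log.Fatal(err)
	}
	st := <-exitCh
	log.Printf("task exited with status %d", st.ExitCode())

	if _, err := task.Delete(ctx); err != nil {
		log.Fatal(err)
	}
	if err := c.Delete(ctx, containerd.WithSnapshotCleanup); err != nil {
		log.Fatal(err)
	}
}
```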
Dec 13 13:29:22.112562 containerd[1446]: time="2024-12-13T13:29:22.112527217Z" level=info msg="StartContainer for \"6c121d22bf6919d3c3232cc82396360e1b0d9e03a98f792644a11d660a550d7a\" returns successfully" Dec 13 13:29:22.133094 containerd[1446]: time="2024-12-13T13:29:22.132895995Z" level=info msg="shim disconnected" id=6c121d22bf6919d3c3232cc82396360e1b0d9e03a98f792644a11d660a550d7a namespace=k8s.io Dec 13 13:29:22.133094 containerd[1446]: time="2024-12-13T13:29:22.132950274Z" level=warning msg="cleaning up after shim disconnected" id=6c121d22bf6919d3c3232cc82396360e1b0d9e03a98f792644a11d660a550d7a namespace=k8s.io Dec 13 13:29:22.133094 containerd[1446]: time="2024-12-13T13:29:22.132960433Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:29:22.167240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c121d22bf6919d3c3232cc82396360e1b0d9e03a98f792644a11d660a550d7a-rootfs.mount: Deactivated successfully. Dec 13 13:29:22.886908 kubelet[2613]: E1213 13:29:22.886879 2613 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 13:29:23.042673 kubelet[2613]: E1213 13:29:23.042069 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:23.045754 containerd[1446]: time="2024-12-13T13:29:23.045716169Z" level=info msg="CreateContainer within sandbox \"43804bdc0f5e93eabf30d2044147e1c419f03e86609d52341801d13c6106d969\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 13:29:23.064519 containerd[1446]: time="2024-12-13T13:29:23.064341171Z" level=info msg="CreateContainer within sandbox \"43804bdc0f5e93eabf30d2044147e1c419f03e86609d52341801d13c6106d969\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e541bf51c0822f81b935fdc36772815e10260b80d005eca769ff2e9692f8a4ea\"" Dec 13 13:29:23.065844 containerd[1446]: time="2024-12-13T13:29:23.065010359Z" level=info msg="StartContainer for \"e541bf51c0822f81b935fdc36772815e10260b80d005eca769ff2e9692f8a4ea\"" Dec 13 13:29:23.094681 systemd[1]: Started cri-containerd-e541bf51c0822f81b935fdc36772815e10260b80d005eca769ff2e9692f8a4ea.scope - libcontainer container e541bf51c0822f81b935fdc36772815e10260b80d005eca769ff2e9692f8a4ea. 
Dec 13 13:29:23.118929 containerd[1446]: time="2024-12-13T13:29:23.118879925Z" level=info msg="StartContainer for \"e541bf51c0822f81b935fdc36772815e10260b80d005eca769ff2e9692f8a4ea\" returns successfully" Dec 13 13:29:23.380555 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Dec 13 13:29:24.049383 kubelet[2613]: E1213 13:29:24.049338 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:24.196373 kubelet[2613]: I1213 13:29:24.196242 2613 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T13:29:24Z","lastTransitionTime":"2024-12-13T13:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 13:29:25.245970 kubelet[2613]: E1213 13:29:25.245915 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:25.827007 kubelet[2613]: E1213 13:29:25.826974 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:25.827313 kubelet[2613]: E1213 13:29:25.827287 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:26.210933 systemd-networkd[1388]: lxc_health: Link UP Dec 13 13:29:26.217682 systemd-networkd[1388]: lxc_health: Gained carrier Dec 13 13:29:27.246498 kubelet[2613]: E1213 13:29:27.246407 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:27.273112 kubelet[2613]: I1213 13:29:27.272171 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-57x5p" podStartSLOduration=9.272131398 podStartE2EDuration="9.272131398s" podCreationTimestamp="2024-12-13 13:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:29:24.064880705 +0000 UTC m=+81.326247428" watchObservedRunningTime="2024-12-13 13:29:27.272131398 +0000 UTC m=+84.533498121" Dec 13 13:29:27.528606 systemd-networkd[1388]: lxc_health: Gained IPv6LL Dec 13 13:29:27.827922 kubelet[2613]: E1213 13:29:27.827353 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:28.056307 kubelet[2613]: E1213 13:29:28.056275 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:31.824506 sshd[4441]: Connection closed by 10.0.0.1 port 55514 Dec 13 13:29:31.825253 sshd-session[4439]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:31.828347 systemd-logind[1423]: Session 26 logged out. Waiting for processes to exit. Dec 13 13:29:31.829054 systemd[1]: sshd@25-10.0.0.129:22-10.0.0.1:55514.service: Deactivated successfully. 
Dec 13 13:29:31.830722 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 13:29:31.831630 systemd-logind[1423]: Removed session 26.